Q&A with Peter Allen
October 13, 2017
Columbia Technology Ventures (CTV) spoke with Peter Allen, Professor of Computer Science and head of the Columbia Robotics Laboratory, about the future of robotic home assistants, robotic surgery, and how Big Data is transforming the way robots learn.
CTV: You’ve licensed two technologies developed at Columbia, both of which are medical robots. Would you explain those technologies and tell us where they are in terms of commercialization?
PA: Sure, the first is IREP, which stands for Insertable Robotic Effector Platform, and Titan Medical has licensed it for use in intestinal surgery. It’s an ultra-flexible platform with multiple arms that can be used for manipulation, along with a camera module that gives the surgeon a three-dimensional view of the surgical field. One of the most important parts of the system is the software we developed, which makes the tools much more intelligent. Instead of a person having to track a tool during an operation to keep it in view for the surgeon, we have an automated system that handles that. The other technology has been licensed through a startup company I am involved with, Platform Imaging, and it’s a disposable, in-vivo camera for laparoscopic surgery. It’s compact, has a wider field of view than many other cameras, and has onboard lighting and motors that allow the surgeon to pan and tilt inside the body. We’ve completed animal trials, and our next step is obtaining funding to proceed to human trials and FDA approval.
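To make the automatic tool-tracking idea concrete, here is a minimal Python sketch of a proportional controller that nudges a pan/tilt camera so a detected tool stays centered in the image. The gains, image size, and detections are invented placeholders, not the licensed system’s actual software.

```python
"""A toy sketch of automatic tool tracking for a pan/tilt camera: each
control step moves the camera toward the detected tool position so the
tool stays centered in the image. All numbers here are hypothetical."""

GAIN = 0.05                    # proportional gain, in degrees per pixel of error
IMAGE_W, IMAGE_H = 640, 480    # assumed image resolution

def track_tool(tool_xy, pan_deg, tilt_deg):
    """One control step: return updated pan and tilt angles."""
    err_x = tool_xy[0] - IMAGE_W / 2   # horizontal offset from image center
    err_y = tool_xy[1] - IMAGE_H / 2   # vertical offset from image center
    return pan_deg + GAIN * err_x, tilt_deg + GAIN * err_y

pan, tilt = 0.0, 0.0
for detection in [(400, 300), (380, 280), (340, 250)]:  # stand-in tool detections
    pan, tilt = track_tool(detection, pan, tilt)
    print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg")
```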
CTV: Medical robots are something that many people are familiar with, but you’re also interested in bringing robots into the home environment. What are the barriers to home robots, and how and when do you think we’ll overcome some of them?
PA: Home robotics is a huge area for research right now, and I expect we’ll see robotic home assistants relatively soon, definitely within the next decade. One of the major barriers to making convenience robots ubiquitous is that robots don’t know how to cope with the unexpected. The world is very complex, and even if you try to model a situation ahead of time, things are always changing. Making robots that can adapt to that, that are robust to error, is key. Right now, robots do badly when things don’t go as planned. Shared autonomy is a powerful concept for getting around this: essentially, you let the robot do as much as it can, and a human intervenes at specific control points. We don’t need full autonomy, just enough that the robot can get around, and when it needs information, it can go back to the user—so rather than the whole system breaking down, the robot can solve the problem and keep going. There are also practical and cost issues—building a robot with wheels is much cheaper and easier than building one that walks, but a wheeled robot couldn’t navigate a home with stairs. These are the kinds of limitations we’ll have to overcome.
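To illustrate the shared-autonomy idea, here is a minimal Python sketch of a control loop in which the robot acts on its own when it is confident and defers to a human at control points rather than failing. The robot class, confidence estimate, and task steps are all invented placeholders.

```python
"""A toy shared-autonomy loop: the robot executes steps it is confident
about and asks the user for help at control points instead of failing.
Every class, threshold, and task step here is hypothetical."""

import random

CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for acting without the user

class ToyRobot:
    def __init__(self, steps):
        self.remaining = list(steps)

    def task_complete(self):
        return not self.remaining

    def plan_next_action(self):
        # Stand-in for a real planner: pair the next step with a confidence.
        return self.remaining[0], random.uniform(0.4, 1.0)

    def execute(self, step):
        print(f"robot: executing {step!r}")
        self.remaining.pop(0)

def ask_human(step):
    # Control point: the user confirms or corrects the plan.
    print(f"robot: unsure about {step!r}; asking the user")
    return step

def shared_autonomy_loop(robot):
    while not robot.task_complete():
        step, confidence = robot.plan_next_action()
        if confidence >= CONFIDENCE_THRESHOLD:
            robot.execute(step)             # robot proceeds on its own
        else:
            robot.execute(ask_human(step))  # human intervenes, task continues

shared_autonomy_loop(ToyRobot(["navigate to kitchen", "pick up cup", "return"]))
```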
CTV: How are computer modeling and simulations helping robots learn to understand the complexities of the world?
PA: For home robots, simulation is hugely important, along with dexterous manipulation, especially for objects that are deformable—like clothing—and that have millions of different states. It’s very hard for a robot to understand those states, so we use simulations to try to predict what will happen. For example, if you wanted to create a robot that could fold or iron clothing, you’d have to simulate the behavior of clothing over many thousands of manipulations and put that information in a database. Then, when the robot sees a piece of clothing, it can tap into the database to figure out what the clothing is and how to manipulate it to accomplish whatever task it has been given. We’re already doing this in our lab.
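As an illustration of this simulate-then-look-up pipeline, here is a minimal Python sketch: simulated manipulations populate a database of observations offline, and at run time the robot matches what it sees against that database with a nearest-neighbor lookup. The garments, feature vectors, and action labels are invented placeholders, not the lab’s actual system.

```python
"""A toy version of learning clothing manipulation from simulation:
offline, record (observation, garment, action) entries from a simulator;
online, classify a new observation by nearest-neighbor lookup.
The simulator and two-number feature vectors are hypothetical stand-ins."""

import math

def simulate_manipulation(garment, pose_id):
    # Stand-in for a cloth simulator: return a toy observation vector.
    base = {"t-shirt": [1.0, 0.2], "towel": [0.1, 0.9]}[garment]
    return [v + 0.01 * pose_id for v in base]

# Offline: run many simulated manipulations and record the outcomes.
database = []
for garment in ("t-shirt", "towel"):
    for pose_id in range(1000):
        obs = simulate_manipulation(garment, pose_id)
        database.append((obs, garment, f"grasp-point-{pose_id % 4}"))

def classify(observation):
    """Online: find the closest simulated state and its associated action."""
    obs, garment, action = min(database,
                               key=lambda e: math.dist(e[0], observation))
    return garment, action

print(classify([1.02, 0.23]))  # -> ('t-shirt', 'grasp-point-...')
```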
CTV: How can we use information in the cloud to make smarter robots?
PA: Machine learning and big data are two very powerful tools for teaching robots about the world. There’s so much data on the web about the three-dimensional world we live in, and by allowing robots to tap into that information, they can quickly gain knowledge that would otherwise take years to acquire. One example of this is to imagine a table in an industrial factory that’s cluttered with many different parts. A robot operating in that environment needs to understand the setting down to the millimeter, but if there’s clutter, the robot has no way to “see” beyond it and decide which part it’s looking for, where that part is, and what to do next. By tapping into 3D models on the web, a robot can formulate a near-human kind of intuition and understanding of its environment from multiple viewpoints, allowing it to perform tasks within that clutter.
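To make this concrete, here is a minimal Python sketch of matching an observed part against a library of 3D models, in the spirit of the approach described above. Real systems compare point clouds or learned shape descriptors; the bounding-box descriptors and part names here are toy placeholders.

```python
"""A toy version of recognizing a part in clutter using a library of 3D
models (e.g., downloaded from the web), each reduced here to a hypothetical
shape descriptor: its bounding-box dimensions in millimeters."""

model_library = {
    "bolt":    (8.0, 8.0, 40.0),
    "bracket": (60.0, 40.0, 5.0),
    "gear":    (50.0, 50.0, 12.0),
}

def match_part(observed_dims):
    """Return the model whose descriptor best matches the observation."""
    def score(name):
        return sum((a - b) ** 2
                   for a, b in zip(observed_dims, model_library[name]))
    return min(model_library, key=score)

# A noisy, partially occluded observation from the cluttered table, in mm.
print(match_part((7.5, 8.2, 38.0)))  # -> 'bolt'
```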
CTV: What’s the most exciting thing in robotics at Columbia right now?
PA: There’s so much interesting research going on right now, but what’s also exciting is that we’re growing. We’ve brought on new faculty with expertise in machine learning and computer vision, and we have great synergies between our group and mechanical engineering and computer science. This interdisciplinary work is so important for the future of robotics, because the human-robot interface is a huge area for research. If robots are to be capable and ubiquitous, then humans have to figure out how to interact with them, whether through voice, or gestures, or even brain interfaces—it’s an extremely complex issue. As robots become more capable, the question becomes not “can they do it?” but “how do you control them to do it?”
Columbia Technology Ventures works closely with Columbia researchers to commercialize early-stage technology innovations, connecting industry and investment partners with researchers to bring impactful, in-demand robotic technologies to market as quickly as possible. To see Columbia’s Robotics portfolio available for licensing, please click here: http://innovation.columbia.edu/robotics
Please send all inquiries to: firstname.lastname@example.org