Preliminary Results

When I started this blog, I intended it to be a mix of adventure and academics. Well, here is my first attempt at that mix.

I’m quite excited about the current progress in my research. I guess I should take a step back for context.

Background

At the time of writing this post I am in my 5th year as a PhD student in the Department of Aeronautics and Astronautics at Stanford University. My research has meandered through the topics of interplanetary trajectory optimization; advanced propulsion and power systems for human missions to Mars; low-gravity robotic exploration (think comets, asteroids, or small moons); and finally – hopefully ‘finally’, otherwise I’ll never finish my PhD – motion planning for autonomous robotic systems such as driverless cars, quadcopters, and spacecraft. To frame my research with current events, consider the question: how does Google’s driverless car navigate a crowded road? The answer: through the techniques of motion planning.

I’d like to follow a brief tangent and opine on the term ‘motion planning’, since it is the broad description of my research. I find it the most dull, uninspiring name one could give to such a fascinating field. I hate telling people I study ‘motion planning’ for my PhD. It brings to mind an image of me looking at a map and drawing lines for a robot to follow. What’s the challenge in that? Of course, this is not an accurate description of motion planning.

Motion planning does not involve me sitting down and developing a step-by-step plan for how some robot will achieve some goal or move from A to B. That would be no different from flying an RC airplane with a remote control, which is not autonomy. Motion planning is the development of the mathematics and algorithms that enable an autonomous system – a.k.a. a “robot” – to plan for itself. It is the set of tools given to a robot so it can decide how it will safely and efficiently move through the world and interact with it. To use a buzzword, it is the artificial intelligence of motion, of movement.

It may be unclear why this would be a field of research at all, seeing as it is something most of us take for granted. A motion planning problem for a person may be something as simple as walking across the room without running into the table; something many people can do in their sleep, literally. Why would it be hard for a robot? Because robots are, essentially, computers that can move, and computers are just really fast idiots. There is nothing intrinsic to a computer/robot that says “don’t walk into that table”. There is no inherent logic that tells a driverless car “I cannot move through that car in front of me”. In fact, about the only thing inherent or intrinsic about a computer is that it is fast and stupid. When I say “fast” I mean that it does simple math incredibly, almost unbelievably, fast. When I say “stupid” I mean that it does exactly as it is told. It cannot be creative or intuitive. If you tell a computer to count to infinity, it won’t stop and think “well, this is futile since I’ll never get there”; it will just continue counting forever. If you tell a robot to walk into a table or drive into a wall, it will mindlessly obey.

So when we want a robot to execute some action, one option is simply to tell the robot what to do at every point in time. This is essentially just remote control, like the RC cars and planes you may have grown up playing with. To use another buzzword, we may call it human-in-the-loop. These systems are not autonomous. The challenge of motion planning is to bestow upon the robot the ability to actually choose a plan of action for itself.

To give you an idea of the types of systems I’ve been working on in simulation, the plot below shows the trajectory of a small, pilotless aircraft – or uninhabited aerial vehicle (UAV) – as it attempts to navigate through a forest. I could have sat down and chosen a path through the trees for it, but what if I’m not around to walk it through everything? What if I don’t know where the trees will be until the moment I turn on the UAV? What if we need the UAV to do something on its own? For this we need motion planning.

[Figure: simulated UAV trajectory through a forest – SimpleUAV_LongForest_ICRA15_diag]
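My own algorithms aren’t the subject of this post, but to make the idea concrete, here is a toy sketch of one classic sampling-based planner – a rapidly-exploring random tree (RRT) – finding a collision-free path through a made-up 2-D “forest”. Everything here (obstacle positions, step size, bounds) is invented for illustration, and edge collision checks are omitted for brevity; this is not the planner from my research.

```python
import math
import random

random.seed(0)

# Hypothetical forest: circular "trees" given as (x, y, radius).
TREES = [(3.0, 2.0, 0.8), (5.0, 4.0, 1.0), (7.0, 1.5, 0.7)]

def collision_free(x, y):
    """True if the point (x, y) lies outside every tree."""
    return all(math.hypot(x - tx, y - ty) > r for tx, ty, r in TREES)

def rrt(start, goal, step=0.5, iters=8000, goal_tol=0.5):
    """Grow a tree of states from `start` toward random samples until a
    node lands within `goal_tol` of `goal`; return the path or None.
    (Toy version: only nodes are collision-checked, not edges.)"""
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 6))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        dx, dy = sample[0] - near[0], sample[1] - near[1]
        d = math.hypot(dx, dy) or 1e-9
        new = (near[0] + step * dx / d, near[1] + step * dy / d)
        if not collision_free(*new):
            continue                      # sample landed in a tree; discard
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            path = [new]                  # walk parents back to the start
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((0.5, 0.5), (9.5, 5.5))
print(len(path), "waypoints from start to goal")
```

The key point is that nobody hand-picks the route: the planner discovers it by randomly exploring the space of motions and keeping only the collision-free ones.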

Current Progress

Well, that was a longer tangent than expected, but worthwhile. My current work is to take some of my algorithms out of simulation and apply them in the real world. In the last few weeks I’ve been building a quadcopter and establishing communication between it and a motion capture system. The motion capture system acts as “eyes” for the quadcopter, but instead of looking out from the quadcopter, its cameras look in at the quadcopter and send messages letting it know where it is located. Believe it or not, one of the most challenging problems in robotics is simply for the robot to know where it is in the world. To get a better idea of what the motion capture system does, imagine you are standing blindfolded in the middle of a room (you represent the quadcopter) with a group of friends standing around you (they represent the motion capture cameras). As you move around, the friends call out information like “you are 10 feet north and 3 feet east of the southwest corner of the room”. Now repeat this process 50 times a second with centimeter or better accuracy and you get an idea of what a motion capture system does. It is the same technology used in recent movie animation.

All of this is just to say that I have the quadcopter and motion capture working together! Nothing groundbreaking for robotics, but a big step forward in my research. Here is a video of the system working together.

[Video: quadcopter holding position using motion capture feedback – QuadViconThumbnail]

It’s not particularly exciting unless you know how much work has gone into getting everything working together. The demonstration has a simple objective: maintain a constant position even in the presence of disturbances – i.e., me pushing it around.
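For the curious, the position-hold behavior can be sketched with a toy one-dimensional PD (proportional-derivative) controller on a point mass: push the vehicle off its setpoint and the controller accelerates it back. The gains, time step, and point-mass model below are stand-ins for illustration, not the quadcopter’s actual flight controller.

```python
def simulate_position_hold(x0, v0, setpoint=0.0, kp=8.0, kd=4.0,
                           dt=0.02, steps=500):
    """Simulate a point mass under PD control for `steps` time steps.
    kp pulls position toward the setpoint; kd damps the velocity so
    the vehicle settles instead of oscillating forever."""
    x, v = x0, v0
    for _ in range(steps):
        a = kp * (setpoint - x) - kd * v   # commanded acceleration
        v += a * dt                         # simple Euler integration
        x += v * dt
    return x, v

# Start 1 m off the setpoint, as if someone just shoved the vehicle,
# and let the controller recover over 10 simulated seconds.
x, v = simulate_position_hold(x0=1.0, v0=0.0)
print(x, v)
```

After the simulated 10 seconds the position and velocity have both decayed essentially to zero: the “disturbance” is rejected. The real system does the same thing in three dimensions, with the motion capture supplying the position measurements.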

This is not an example of motion planning, but it is a first step toward enabling me to test my motion planning algorithms.