In the ongoing quest to shrink drones, engineers have built a flying robot so small it can sit in the palm of your hand. But don’t let its size fool you—these intelligent nanoquadrotors are far smarter than any drone we've seen so far.
Micro-robotic technology is a rapidly growing area of exploration, and one the Defense Department is eager to delve into. The government is pouring money into researching and developing artificially intelligent swarming robots, and the looming question is whether this cutting-edge technology will be used for good or will bring about a dystopian future of smart, lethal machines that could be turned against humans.
As a host of scintillating YouTube videos show, these quadrotors are impressive autonomous vehicles that have mastered aerial manipulation, flipping through the air, whizzing through windows, building structures, even playing the James Bond theme in concert.
They boast tremendous potential in both civilian spaces and on the battlefield. At the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, mathematicians, computer scientists, and engineers are working to create these complex robots and their operating systems.
Allow Justin Thomas, a third-year Ph.D. student in the Mechanical Engineering and Applied Mechanics (MEAM) Department, to give you a tour of the GRASP lab. Quadrotors, computer screens, and circuit boards pepper the room. Notebooks and folders spill out of the corner shelf. Under stark white lights, a black net droops from the ceiling, sectioning off an area where quadrotors can whiz around, encircled by the state-of-the-art Vicon motion capture system.
The lab’s autonomous quadrotors are controlled by algorithms, rather than operated by humans like the military and commercial drones commonly used today. (The GRASP team prefers the term "robot" to “drone” altogether because of the latter's violent stigma.)
While Thomas may give the objective—for instance, “Go from Point A to Point B”—complex algorithms piloting the machine determine the specific steps to achieve that objective. Humans don’t manipulate the tiny UAV in its low-level decision-making process.
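The division of labor is simple to sketch in code. The snippet below is purely illustrative (not GRASP's actual software): the human supplies only the objective, a start and a goal, and a simple proportional-derivative (PD) control loop, a standard technique in robotics, works out every low-level command on its own.

```python
# Illustrative sketch: the human gives "go from A to B"; a PD loop
# decides the moment-to-moment commands. Simplified to one dimension
# and a point mass -- real quadrotor control is far more involved.

def fly_to(start, goal, kp=2.0, kd=1.5, dt=0.02, steps=500):
    """Drive a 1D point mass from start toward goal with a PD controller."""
    pos, vel = float(start), 0.0
    for _ in range(steps):
        # The controller, not the human, chooses each acceleration:
        accel = kp * (goal - pos) - kd * vel
        vel += accel * dt
        pos += vel * dt
    return pos

final = fly_to(start=0.0, goal=5.0)  # settles close to the goal
```

The gains `kp` and `kd` here are arbitrary values chosen so the loop settles; the point is only that the operator never touches the step-by-step decisions.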
The Hummingbird quadrotor is equipped with a monocular camera and circular target so that a team of similar robots can fly in formation for collaborative exploration or transportation of a large object. Image: Justin Thomas/GRASP
Thomas demonstrated the ‘Hummingbird’ nanoquadrotor. As it took off from the palm of his hand like a tiny bird, the red lights from the cameras bounced off its retroreflective markers (think: shiny spheres) to locate it on a computer screen, pinpointing its whereabouts in space with far greater accuracy than current GPS technology.
While GPS is accurate to a couple of meters, this system resolves position to within a few millimeters. Using the system, GRASP has achieved remarkably precise positioning of up to 20 quadrotors simultaneously in flight.
The ‘Pelican,’ another model, is equipped with seven-inch-long arms and a laser scanner and camera. It can fly into a building, scope out the digs, and construct real-time 3D maps, identifying features like doorways, people, and furniture, estimating its position with respect to these features one hundred times a second, and navigating around them.
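The core idea behind that mapping step can be sketched in a few lines. This is an assumed, toy version of the technique (not the Pelican's real pipeline, and flattened to 2D): each laser return is transformed from the robot's frame into the world frame and folded into an occupancy grid, the kind of map a robot can then navigate by.

```python
# Toy occupancy-grid update (illustrative only): laser returns, given as
# (bearing, distance) pairs in the robot's frame, are projected into the
# world frame and the struck grid cells are marked occupied.

import math

def update_grid(grid, pose, ranges, cell=0.1):
    """grid: dict mapping (col, row) -> 1; pose: (x, y, heading)."""
    x, y, th = pose
    for bearing, dist in ranges:
        hx = x + dist * math.cos(th + bearing)  # hit point, world frame
        hy = y + dist * math.sin(th + bearing)
        grid[(round(hx / cell), round(hy / cell))] = 1  # mark occupied
    return grid

# A robot at the origin sees a wall 1 m ahead and another 2 m to its left:
grid = update_grid({}, pose=(0.0, 0.0, 0.0),
                   ranges=[(0.0, 1.0), (math.pi / 2, 2.0)])
```

A real system would also trace the free space along each beam and fuse many scans probabilistically; the sketch keeps only the geometric core.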
With these capabilities, the microbots can fly into tiny, dark spaces inaccessible or dangerous to humans, such as collapsed buildings or buildings with hazardous or radioactive materials.
Vijay Kumar, a professor in the School of Engineering & Applied Sciences at UPenn, who heads the research on quadrotors at GRASP, is optimistic about the benevolent applications in “search and rescue and disaster recovery” missions, from acting as first responders in dangerous situations to gathering intel in a hostage crisis.
But not everyone is as optimistic. The technology is at the cutting edge of robotics, and like any new technology, the hopes of its benefit to society are counterbalanced by the fear of how it could be used for harm.
The team at GRASP is adamant they’re interested in what’s possible, and in only benevolent humanitarian applications, for both civilians and the military. But there’s no doubt that an aerial fleet of intelligent microquadrotors could be a very useful weapon in combat.
Indeed, much of the funding for micro UAV research comes from the Department of Defense. The US Army Research Laboratory told me in an email that they’re studying autonomous systems “to enable the teaming of autonomous systems with soldiers.”
“Scientists and engineers at US Army Research Laboratory explore technology that has the potential to give soldiers an edge, to better protect them, or to increase efficiency or reduce costs for the Army,” a spokesperson said.
The GRASP lab received a $5 million grant from the DoD in 2005 to study swarming groups of networked autonomous robots—the Scalable Swarms of Autonomous Robots and Mobile Sensors, or SWARMS project. The goal is to create a swarm of bug-like quadrotors so intelligent it will re-form to avoid obstacles.
For instance, 20 quadrotors flying in a 4 x 5 formation will automatically adjust their positions to travel through a small gap such as a window. Once through the gap, they can move back into formation.
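The reshaping described above can be sketched as a change of slot assignments. The functions below are a hypothetical illustration (not the SWARMS controller): one generates the grid-formation positions, the other the single-file positions a swarm might adopt to squeeze through a gap before re-forming.

```python
# Illustrative formation slots (assumed logic, not GRASP's code): a swarm
# switches between a rows x cols block and a single-file line, and each
# robot flies to its new slot.

def grid_slots(rows, cols, spacing=1.0):
    """(x, y) positions for a block formation, row by row."""
    return [(c * spacing, r * spacing)
            for r in range(rows) for c in range(cols)]

def file_slots(n, spacing=1.0):
    """(x, y) positions for a single-file line along one axis."""
    return [(0.0, i * spacing) for i in range(n)]

formation = grid_slots(4, 5)              # 20 robots in a 4 x 5 block
through_gap = file_slots(len(formation))  # the same 20, single file
```

The interesting research problem, which this sketch omits, is assigning robots to slots and planning collision-free trajectories between the two shapes.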
Kumar heads up another micro-drone research project, called the Micro Autonomous System Technologies Collaborative Technology Alliance. It too is funded by the Army Research Lab, thanks to a $22 million grant—the single largest grant in the history of the university’s engineering school. The stated intent is “to help create the fundamental networks and technologies that will put unmanned machines on the front lines of battle.”
Military funding, however, doesn’t mean the machines will be used as weapons. Robots are already widespread on the battlefield, generally doing dangerous and dirty tasks so humans don’t have to. This, Thomas said, is the main advantage of creating the micro-UAVs.
Quadrotors are “most useful in replacing roles that put humans in danger,” he said. “If used in the military that is the one application they would be most used for.”
For example, Richard Zhang, CEO of IDENTIFIED, a spinoff from Kumar’s lab, is developing an Improvised Explosive Device (IED) detection solution that should reduce the risk to soldiers’ lives. Swarms of flying robots will be able to detect an IED, localize the threat, and guide soldiers around it.
“There is a broader humanitarian application too,” Zhang said. “For instance in the case of land mines—anything in the soil that has a different density than the soil can be detected.” He stressed, “There is no scenario where we want to be doing harm with this technology.”
The controversial question is whether that’s true, or if the tiny drones will someday be used for more offensive strategies, like surveillance, or even as lethal weapons. It’s certainly something the Pentagon is pursuing, at least at the R&D level.
DARPA, the Defense Department’s futurist research agency, has challenged engineers to build drones that mimic the size and behavior of bugs. Although these bug-like drones, which “hide in plain sight,” are different from quadrotors, they show off the striking array of applications for micro-UAV technology on the battlefield, ranging from surveillance to targeted killings.
An Air Force simulation video showcasing micro-UAVs shows off these applications. The video leaves you with this ominous message: “Unobtrusive, pervasive, lethal: micro air vehicles—enhancing the capabilities of the future war-fighter.”
Tiny quadrotors have incredible surveillance capabilities because of their size, agility, and speed. Equipped with on-board cameras, the avian-inspired UAVs are able to hover or perch with the help of “claws” onto flat, vertical, or inverted surfaces.
Image: Justin Thomas/GRASP
But who ultimately decides who can use this surveillance capability, and on whom? Quadrotors could theoretically plant bugs and tag people in a way that could never be done before, according to Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University and lead editor of Robot Ethics, though privacy laws come into play there.
Meanwhile, Kumar is working hard to distance his work from any military applications. He told me he has no direct connection to them: “I do these interviews so it doesn’t seem like I’m doing some secret research for the military.”
In his February 2012 TED Talk, which has over three million views, Kumar said he intends for the technology to be used for the good of humanity.
“You can carry them around in a backpack and deploy them. They can hover whereas airplanes need to constantly fly and can’t stay still,” Thomas explained. This makes them useful for monitoring situations, like forest fires.
The team at GRASP prefers to think about the potential humanitarian applications. “We don’t want to see the technology be used to end lives. We’d rather see it be used to save lives,” said Thomas.
When I talked to Peter Singer, director of the Center for 21st Century Security and Intelligence at Brookings and author of Cybersecurity and Cyberwar: What Everyone Needs to Know, about this tension between funding sources and applications, he suggested that trying to separate the two is naive.
“There are two poles in this discourse,” Singer said. “There is one pole which is proud of their link to the military, and talk about how they’re working in international security, how they think it might save soldiers' lives. On the other end is what I call ‘refuseniks’—those not willing to take military funding because they don’t want to be involved in warfare.”
“Then there is a group in the middle,” he went on. “My favorite example of this is a group that was receiving funding from the Navy to make a baseball playing robot, saying that they had no connection to war. Do you think the Navy is interested in this for the naval baseball team? The military is not funding it because they think it’s cool. They’re funding it because they think it has an application in something that they’re interested in.”
Battlefield applications include surveillance, reconnaissance, and intelligence, from looking for insurgents in a city to looking for earthquake victims in (what remains of) a city, Singer explained.
“Once you see that target you make a decision based on what’s the next task—and that task could be anything from delivering medicine to delivering munition,” Singer said.
And here’s where the panic-ridden headlines come in. “Mommy, the Drone’s Here!” via The New York Times; “I love you, Killer Robot,” courtesy of Slate; “It’s a killer swarm of quadrotor drones!” warned Gizmodo.
At this point, the most common reaction to nano-quadrotors is more along the lines of, “Hey, check out this video. Cool!” The question is, would GRASP’s quadrotors ever wander into the category of killer robots? While it’s possible, the prevailing opinion is that it’s not the intention.
“People are afraid of robots being armed with weapons and having the autonomous ability to shoot. I don’t think that will happen,” Thomas said. “We don’t want these robots weaponized.”
But Lin noted that it’s not beyond the realm of possibility. “These quadrotors can have strike and lethal capabilities, and can be weaponized if they can carry the payload,” he said. Not necessarily a sub-machine gun as in this fake viral video of a machine gun-toting drone, “but something smaller, or biological weapons like ricin.”
The other viewpoint to consider is that lethal capabilities may not be a bad thing. Quadrotors can be used to target individuals, theoretically minimizing the risk to civilians.
“Quadcopters and other small UAVs can enable precise urban fighting without the collateral and property damage that usually follows—that's pretty revolutionary,” said Lin. For a small surgical strike which requires boots on the ground, the military may be able to send in a swarm when “you don’t want to risk another Black Hawk down,” he said.
However, the potential for mistakes, accidents, and faulty intelligence means the risk is never eliminated altogether. The other concern is that by blurring the lines between targeted killings, which are legal (albeit problematic), and assassinations, which are illegal, a state may be able to carry out increasing numbers of secret assassinations with low risk of discovery and accountability.
Given the CIA’s prolific record on covert killing operations, it is not a leap to think this is a realistic future.
It all raises a somewhat existential question: To what extent can the researchers tasked with developing a new technology be held responsible for the applications of their technology? If the potential nefarious uses of their discoveries kept scientists up at night, they might not make it to the lab the next day to continue the work. As a researcher, distancing yourself from applications almost becomes a necessity.
“Any technology can be used for military purposes—you can’t worry about where it will be used, otherwise no research on anything will ever be done,” Kumar pointed out.
At GRASP, they’re focused on what’s possible, not how the technology will be used, said Thomas. “If you focus on applications, you’re looking at the end goal rather than developing overall capabilities. Our intention is to develop capabilities.”
And that might be a necessity. At the end of the day, it’s impossible to predict or control the future that technological progress will bring about.
“Any major technology we can think of has been used in both good and bad ways,” said Singer. “Whether we’re talking about airplanes, the internet, atomic energy, and now robotics.”