March 12, 2009 10:00 AM PDT

Q&A: The robot wars have arrived

P.W. Singer

Just as the computer and ARPAnet evolved into the PC and Internet, robots are poised to integrate into everyday life in ways we can’t even imagine, thanks in large part to research funded by the U.S. military.

Many people are excited about the military’s newfound interest in and funding of robotics, but few are considering its ramifications for war in general.

P.W. Singer, senior fellow and director of the 21st Century Defense Initiative at the Brookings Institution, went behind the scenes of the robotics world to write “Wired for War: The Robotics Revolution and Conflict in the 21st Century.”

Singer took time from his book tour to talk with CNET about the start of a revolution tech insiders predicted, but so many others missed.

Q: Your book is purposely not the typical think tank book. It’s filled with just as many humorous anecdotes about people’s personal lives and pop culture as it is with statistics, technology, and history. You say you did this because robotic development has been greatly influenced by the human imagination?
Singer: Look, to write on robots in my field is a risky thing. Robots were seen as this thing of science fiction even though they’re not. So I decided to double down, you know? If I was going to risk it in one way, why not in another way? It’s my own insurgency on the boring, staid way people talk about this incredibly important thing, which is war. Most of the books on war and its dynamics–to be blunt–are, oddly enough, boring. And it means the public doesn’t actually have an understanding of the dynamics as they should.

It seems like we’re just at the beginning here. You quote Bill Gates comparing robots now to what computers were in the eighties.
Singer: Yes, the military is a primary buyer right now and it’s using them (robots) for a limited set of applications. And yes, in each area we prove they can be utilized you’ll see a massive expansion. That’s all correct, but then I think it’s even beyond what he was saying. No one sitting back with a computer in 1980 said, “Oh, yes, these things are going to have a ripple effect on our society and politics such that there’s going to be a political debate about privacy in an online world, and mothers in Peoria are going to be concerned about child predators on this thing called Facebook.” It’ll be the same way with the impact on war and in robotics; a ripple effect in areas we’re not even aware of yet.

Right now, rudimentary as they are, we have autonomous and remote-controlled robots while most of the people we’re fighting don’t. What’s that doing to our image?
Singer: The leading newspaper editor in Lebanon described–and he was actually describing this while there was a drone above him at the time–how these things show you’re afraid, you’re not man enough to fight us face-to-face, it shows your cowardice, and all we have to do to defeat you is just kill a few of your soldiers.

It’s playing like cowardice?
Singer: Yeah, it’s like every revolution. You know, when gunpowder is first used people think that’s cowardly. Then they figure it out and it has all sorts of other ripple effects.

What’s war going to look like once robot warriors become autonomous and ubiquitous for both sides?
Singer: I think if we’re looking at the realm of science fiction, less so “Star Wars: The Clone Wars” and more so the world of “Blade Runner” where it’s this mix between incredible technologies, but also the dirt and grime of poverty in the city. I guess this shows where I come down on these issues. The future of war is more and more machines, but it’s still also insurgencies, terrorism, you name it.

What seems most likely in this scenario–at least in the near term–is this continuation of teams of robots and humans working together, each doing what they’re good at…Maybe the human as the quarterback and the robots as the players with the humans calling out plays, making decisions, and the robots carrying them out. However, just like on a football field, things change. The wide receivers can alter the play, and that seems to be where we’re headed.

How will robot warfare change our international laws of war? If an autonomous robot mistakenly takes out 20 little girls playing soccer in the street and people are outraged, is the programmer going to get the blame? The manufacturer? The commander who sent in the robot fleet?
Singer: That’s the essence of the problem of trying to apply a set of laws that are so old they qualify for Medicare to these kinds of 21st-century dilemmas that come with this 21st-century technology. It’s also the kind of question that you might once have only asked at Comic-Con, and now it’s a very real live question at the Pentagon.

I went around trying to get the answer to this sort of question, meeting with people not only in the military but also in the International Committee of the Red Cross and Human Rights Watch. We’re at a loss as to how to answer that question right now. The robotics companies are only thinking in terms of product liability…and international law is simply overwhelmed or basically ignorant of this technology. There’s a great scene in the book where two senior leaders within Human Rights Watch get into an argument in front of me about which laws might be most useful in such a situation.

Is this where they bring up Star Trek?
Singer: Yeah, one’s bringing up the Geneva Conventions and the other one’s pointing to the Star Trek Prime Directive.

You say in your book that, except for a few refuseniks, most scientists are definitely not subscribing to Isaac Asimov’s laws. What, then, are the general ethics of these roboticists?
Singer: The people who are building these systems are excited by the possibilities of the technology. But the field of robotics, it’s a very young field. It’s not like medicine that has an ethical code. It’s not done what the field of genetics has, where it’s begun to wrestle with the ethics of what they’re working on and the ripple effects it has on the society. That’s not happening in the robotics field, except in isolated instances.

What military robotic tech is likely to migrate over to local law enforcement or the consumer world?
Singer: I think we’re already starting to see some of the early stages of that…I think this is the other part that Gates was saying: we get to the point where we stop calling them computers. You know, I have a computer in my pocket right now. It’s a cell phone. I just don’t call it a computer. The new Lexus parallel-parks itself. Do we call it a robot car? No, but it’s kind of doing something robotic.

You know, I’m the guy coming out of the world of political science, so it opens up these fun debates. Take the question of ethics and robots. How about me? Is it my Second Amendment right to have a gun-armed robot? I mean, I’m not hiring my own gun robots, but Homeland Security is already flying drones, and police departments are already purchasing them.

Explain how robotic warfare is “open source” warfare.
Singer: It’s much like what’s happened in the software industry going open source, the idea that this technology is not something that requires a massive industrial structure to build. Much like open source software, not only can almost anyone access it, but also anyone with an entrepreneurial spirit, and in this case a very wicked entrepreneurial spirit, can improve upon it. All sorts of actors, not just high-end militaries, can access high-end military technologies…Hezbollah is not a state. However, Hezbollah flew four drones at Israel. Take this down to the individual level and I think one of the darkest quotes comes from the DARPA scientist who said, and I quote, “For $50,000 I could shut down Manhattan.” The potential of an al-Qaeda 2.0 is made far more lethal with these technologies, but also the next generation of a Timothy McVeigh or Unabomber is multiplying their capability with these technologies.

The U.S. military said in a statement this week that it plans to pull 12,000 troops out of Iraq by the fall. Do you think robots will have a hand in helping to get to that number?
Singer: Most definitely.

How?
Singer: The utilization of Predator operations is allowing us to accomplish certain goals there without troops on the ground.

Is this going to lead to more of what you call the cubicle warriors or the armchair warriors? They’re in the U.S. operating on this end, and then going to their kid’s PTA meeting at the end of the day?
Singer: Oh, most definitely. Look, the Air Force this year is putting out more unmanned pilots than manned pilots.

Explain how soldiers now come ready-trained because of our video games.
Singer: The military is very smartly free-riding off of the video game industry, off the designs in terms of the human interface, using the Xbox controllers, PlayStation controllers. The Microsofts and Sonys of the world have spent millions designing the system that fits perfectly in your hand. Why not use it? They’re also free-riding off this entire generation that’s come in already trained in the use of these systems.

There’s another aspect though, which is the mentality people bring to bear when using these systems. It really struck me when one of the people involved in Predator operations described what it was like to take out an enemy from afar, what it was like to kill. He said, “It’s like a video game.” That’s a very odd reference, but also a telling reference for this experience of killing and how it’s changing in our generation.

It’s making them more removed from the morality of it?
Singer: It’s the fundamental difference between the bomber pilots of WWII and even the bomber pilots of today. It’s disconnection from risk on both a physical and a psychological plane.

When my grandfather went to war in the Pacific, he went to a place where there was such danger he might not ever come home again. You compare that to the drone pilot experience. Not only what it’s like to kill, but the whole experience of going to war is getting up, getting into their Toyota Corolla, going in to work, killing enemy combatants from afar, getting in their car, and driving home. So 20 minutes after being at war, they’re back at home and talking to their kid about their homework at the dinner table. So this whole meaning of the term “going to war” that’s held true for 5,000 years is changing.

What do you think is the most dangerous military robot out there now?
Singer: It all hinges on the definition of the term dangerous. The system that’s been most incredibly lethal in terms of consequences on the battlefield so far, if you ask military commanders, is the Predator. They describe it as the most useful system, manned or unmanned, in our operations in Afghanistan and Iraq. Eleven out of the twenty al-Qaeda leaders we’ve gotten, we’ve gotten via a drone strike. Now, dangerous can have other meanings. The work on evolutionary software scares the shit out of me.

You’re saying we’re gonna get to a HAL situation?
Singer: Maybe it’s just because I’ve grown up on a diet of all that sci-fi, but the evolutionary software stuff does spook me out a little bit. Oh, and robots that can replicate themselves. We’re not there yet, but that’s another like “whoa!”

People have finally got the attention of companies and governments to look ahead to 2020, 2040, 2050 in terms of the environment and green technology. But as you said in your book, that’s not happening with robotics issues. Why do you think that is?
Singer: When it comes to the issue of war, we’re exceptionally uncomfortable looking forward, mainly because so many people have gotten it so wrong. People in policymaker positions, policy adviser positions, and the people making the decisions are woefully ignorant of what’s happening in technology not only five years from now, not only now, but where we were five years ago. You have people describing robotics as “mere science fiction” when we’re talking about already having 12,000 (robots) on the ground and 7,000 in the air. During this book tour, I was in this meeting with a very senior Pentagon adviser, top of the field, very big name. He said, “Yeah this technology stuff is so amazing. I bet one day we’ll have this technology where like one day the Internet will be able to look like a video game, and it will be three-dimensional, I’ll bet.”

(laughing) And meanwhile, your wife’s at Linden Labs.
Singer: (laughing) Yeah, it’s Second Life. And that’s not anything new.

At least five years old, yeah.
Singer: And you don’t have to be a technology person to be aware of it. I mean, it’s been covered by CNN. It appeared on “The Office” and “CSI.” You just have to be aware of pop culture to know. And so it was this thing that he was describing as it might happen one day, and it happened five years ago. Then the people that do work on the technology and are aware of it, they tend to either be: head-in-the-sand in terms of “I’m just working on my thing, I don’t care about the effects of it”; or “I’m optimistic. Oh these systems are great. They’re only gonna work out for the best.” They forget that this is a real world. They’re kind of like the atomic scientists.

Obviously the hope is that robots will do all the dirty work of warfare. But warfare is inherently messy, unpredictable, and often worse than expectations. How would a roboticized war be any different in that respect?
Singer: In no way. That’s the fundamental argument of the book. While we may have Moore’s Law in place, we still haven’t gotten rid of Murphy’s Law. So we have a technology that is giving us incredible capabilities that we couldn’t even have imagined a few years ago, let alone had in place. But the fog of war is not being lifted as Rumsfeld once claimed absurdly.

You may be getting new technological capabilities, but you are also creating new human dilemmas. And it’s those dilemmas that are really the revolutionary aspect of this. What are the laws that surround this and how do you ensure accountability in this setting? At what point do we have to become concerned about our weapons becoming a threat to ourselves? This future of war is again a mix of more and more machines being used to fight, but the wars themselves are still about our human realities. They’re still driven by our human failings, and the ripple effects are still because of our human politics, our human laws. And it’s the cross between the two that we have to understand.

Candace Lombardi is a journalist who divides her time between the U.S. and the U.K. Whether it’s cars, robots, personal gadgets, or industrial machines, she enjoys examining the moving parts that keep our world rotating. Email her at [email protected]. She is a member of the CNET Blog Network and is not a current employee of CNET.

Jetfuel powerpack, armour… shoulder turret?

US weaponry globocorp Lockheed is pleased to announce the unveiling of its newly-acquired powered exoskeleton intended to confer superhuman strength and endurance upon US soldiers.

Needless to say, a corporate promo vid of the Human Universal Load Carrier (HULC™) is available.

The exoskeleton is based on a design from Berkeley Bionics of California, but Lockheed say they have brought significant pimpage to the basic HULC. The enhanced version is now on show at the Association of the United States Army’s Winter Symposium in Florida.

“With our enhancements to the HULC system, Soldiers will be able to carry loads up to 200 pounds with minimal effort,” according to Lockheed’s Rich Russell.

From the vid, the HULC certainly seems a step forward on Raytheon’s rival XOS mechwarrior suit, which at last report still trails an inconvenient power cable to the nearest wall socket.

Not so the HULC; four pounds of lithium polymer batteries will run the exoskeleton for an hour walking at 3mph, according to Lockheed. Speed marching at up to 7mph reduces this somewhat; a battery-draining “burst” at 10mph is the maximum speed.

In a HULC, the user can hump 200lb with relative ease while marching, well in excess of even the heaviest combat loads normally carried by modern infantry. There’d be scope to carry a few spare batteries. Even if the machine runs out of juice, Lockheed claims that its reinforcement and shock absorption still help with load carrying rather than hindering it.

There are various optional extras, too. The HULC can be fitted with armour plating, heating or cooling systems, sensors and “other custom attachments”. We particularly liked that last one: our personal request would be a powered gun or missile mount of some kind above the shoulder, linked to a helmet or monocle laser sight.

One does note that remote-controlled gun mounts weighing as little as 55lb are available, able to handle various kinds of normally tripod- or bipod-mounted heavy weapons.

You’d need more power, but that’s on offer. According to the Lockheed spec sheet (pdf) there’s an extended-endurance HULC fitted with a “silent” generator running on JP8 jet fuel. A tankful will run this suit for three days, marching eight hours per day — though presumably at the cost of some payload.

Doubtless other power options could be developed: Lockheed says the HULC needs 250 watts on average.
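
Treating those quoted figures as inputs, here is a minimal back-of-the-envelope sketch checking whether they hang together. The 250 watt average draw, the four-pound battery pack, and the one-hour walking endurance are Lockheed’s numbers as reported above; the lithium-polymer energy-density range in the closing comment is a rough general assumption, not anything from the spec sheet.

```python
# Back-of-the-envelope check of the HULC power figures quoted above.
# Lockheed's reported numbers: ~250 W average draw, a 4 lb lithium
# polymer pack, and roughly one hour of walking at 3 mph.
# The energy-density range in the final comment is a rough general
# assumption about Li-po cells, not a Lockheed figure.

AVG_POWER_W = 250            # reported average draw
ENDURANCE_H = 1.0            # reported walking endurance
PACK_MASS_KG = 4 * 0.4536    # 4 lb of cells, converted to kg

energy_needed_wh = AVG_POWER_W * ENDURANCE_H        # ~250 Wh
implied_density = energy_needed_wh / PACK_MASS_KG   # ~140 Wh/kg

print(f"Energy for one hour of walking: {energy_needed_wh:.0f} Wh")
print(f"Implied pack energy density: {implied_density:.0f} Wh/kg")
# Commodity lithium-polymer packs run very roughly 130-200 Wh/kg,
# so the quoted figures are at least internally consistent.
```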

It’s important to note that the HULC is basically a legs-and-body system only: there’s no enhancement to the user’s arms, though an over-shoulder frame can be fitted allowing a wearer to hoist heavy objects such as artillery shells with the aid of a lifting strop.

The HULC may not be quite ready for prime time yet. But the military exoskeleton as a concept does seem to be getting to the stage of usefulness, at least in niche situations for specific jobs.

The BigDog petrol packmule, an alternative strategy for helping footsoldiers carry their increasingly heavy loads, may now have a serious rival. ®



Two of Britain’s leading environmental thinkers say it is time to develop a quick technical fix for climate change. Writing in the journal Nature, Science Museum head Chris Rapley and Gaia theorist James Lovelock suggest looking at boosting ocean take-up of CO2.

Floating pipes reaching down from the top of the ocean into colder water below move up and down with the swell.

As the pipe moves down, cold water flows up and out onto the ocean surface. A simple valve blocks any downward flow when the pipe is moving upwards.

Colder water is more “productive” — it contains more life, and so in principle can absorb more carbon.
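
To make the pumping mechanism concrete, here is a toy sketch of how much cold water one such wave-driven pipe might lift. The pipe diameter, swell height, and wave period below are purely hypothetical illustration values, not figures from the Nature commentary or from any real deployment.

```python
import math

# Toy estimate of water pumped by a wave-driven pipe with a one-way
# valve, as described above. All dimensions are hypothetical,
# chosen only to illustrate the scale of the mechanism.

PIPE_DIAMETER_M = 1.0   # hypothetical pipe diameter
SWELL_HEIGHT_M = 1.5    # hypothetical peak-to-trough wave height
WAVE_PERIOD_S = 8.0     # hypothetical wave period

area_m2 = math.pi * (PIPE_DIAMETER_M / 2) ** 2
# On each downward stroke the valve opens and roughly one
# stroke-length of deep, cold water is pushed up and out.
volume_per_wave_m3 = area_m2 * SWELL_HEIGHT_M
waves_per_day = 24 * 3600 / WAVE_PERIOD_S
volume_per_day_m3 = volume_per_wave_m3 * waves_per_day

print(f"~{volume_per_wave_m3:.2f} m^3 per wave")
print(f"~{volume_per_day_m3:,.0f} m^3 of cold water per day")
```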

Finally some practical solutions are being introduced to mitigate global warming. The BBC article mentions the US company Atmocean, which is already testing such a system.

Read the articles from the BBC or The New York Times, both based on the same article in Nature.

Whether we like it or not, geoengineering — a process I’ve taken to calling “(re)terraforming the Earth” — is now on the table as a strategy for dealing with onrushing climate disaster. This isn’t because it’s a particularly good idea; as far as we presently know, the potential negative impacts of geoengineering projects seem to significantly outweigh any benefits. Nonetheless, (re)terraforming has drawn an increasing amount of attention over the past few months. One key reason is that, if it could be made to work, it wouldn’t just moderate climate change — i.e., slow it or stop it — it would actually serve as a climate change remediation method, reversing global warming.

The cynical and the insipid apparently believe that pursuing the geoengineering option would allow us to avoid making any changes in technology or behavior intended to reduce greenhouse gas output. This sort of logic is wrong, utterly wrong. For any plausible geoengineering project to succeed, we’d have to have already stabilized the climate. As it turns out, the brilliant and clearly-needed advances in technology and changes in behavior supported by those of us who proudly wear the label “bright green” will do exactly this, reducing, even eventually eliminating, anthropogenic emissions of greenhouse gases. We need to do this as quickly as possible. As the saying goes, if you want to get out of the hole you’re in, the first thing to do is stop digging.

But none of the bright green solutions — ultra-efficient buildings and vehicles, top-to-bottom urban redesigns, local foods, renewable energy systems, and the like — will do anything to reduce the anthropogenic greenhouse gases that have already been emitted. The best result we get is stabilizing at an already high greenhouse gas level. And because of ocean thermal inertia and other big, slow climate effects, the Earth will continue to warm for a couple of decades even after we stop all greenhouse gas emissions. Transforming our civilization into a bright green wonderland won’t be easy, and under even the most optimistic estimates will take at least a decade; by the time we finally stop putting out additional greenhouse gases, we could well have gone past a point where globally disastrous results are inevitable. In fact, given the complexity of climate feedback systems, we may already have passed such a tipping point, even if we stopped all emissions today.

In other words, while stopping digging is absolutely necessary, it won’t actually refill the hole.

I’m hopeful that eliminating anthropogenic greenhouse gas emissions will be enough; if more optimistic scenarios are correct, ceasing to emit additional greenhouse gases in the next decade or two will be sufficient to avoid real disaster. This would be a wonderful outcome, and not just because we would have dodged the global warming bullet. Many of the best steps we can take along these lines are distributed, incremental, collaborative, and quite often make use of open systems and standards: all very good things, with larger social implications than just for climate moderation, and the heart of what my blog Open the Future is all about.

But if we learn that we’ve already passed the climate disaster tipping point, if we want to avoid a civilization-threatening outcome, we’ll have to figure out how to refill the hole — to reduce overall temperature increases, or to remove methane, CO2 or other greenhouse gases from the atmosphere. And that means that we’d have to look at geoengineering.

Or, to be more accurate, we’ll have to keep looking at geoengineering. As it happens, the “(re)terraforming to fix global warming” genie is already out of the bottle. It happened just last week.

On February 9, 2007, Virgin Corporation honcho Richard Branson announced that he would give $25 million to the winner of the “Virgin Earth Challenge”:

The Virgin Earth Challenge will award $25 million to the individual or group who are able to demonstrate a commercially viable design which will result in the net removal of anthropogenic, atmospheric greenhouse gases each year for at least ten years without countervailing harmful effects. This removal must have long term effects and contribute materially to the stability of the Earth’s climate.

Reaction in the green blogosphere has been cautiously optimistic, with most responses noting a comparison to the “X-Prize” for private space flight, and some observing that air travel, such as that provided by Virgin Airways, remains a big source of greenhouse gases. Much to my surprise, however, none of the major green blogs noted the most significant aspect of this competition:

This is explicitly a call for geoengineering projects.

The Virgin Earth Challenge isn’t simply looking for better ways to reduce or eliminate new greenhouse gas emissions; it’s looking for ways to remove existing CO2 and other greenhouse gases from the atmosphere — that’s what “net removal” means. This competition seeks ways to make an active, substantial change to the Earth’s geophysical systems. Richard Branson is underwriting terraforming, and given that the consensus mainstream environmentalist position is solidly anti-geoengineering, the lack of reaction to what is essentially the “Terraforming Challenge” is a bit surprising.

But if we’re already looking at geoengineering, and may potentially need to consider it as a necessary path to survival, how can we do it in a way that has the best chance to avoid making matters worse?

I’ve already given away the answer in the title: open up the process.

I’ve long argued that openness is the best way to ensure the safe development and deployment of transformative technologies like molecular nanotechnology, general machine intelligence, and radical human bioenhancements. Geoengineering technologies should be added to this list. The reasons are clear: the more people who can examine and evaluate the geotechnological proposals, the greater the likelihood of finding subtle flaws or dangers, and the greater the pool of knowledge that can offer solutions.

As I put it in my 2003 essay for the final Whole Earth magazine (and the source of my blog’s name), “Open the Future,”

Opening the books on emerging technologies, making the information about how they work widely available and easily accessible, in turn creates the possibility of a global defense against accidents or the inevitable depredations of a few. Openness speaks to our long traditions of democracy, free expression, and the scientific method, even as it harnesses one of the newest and best forces in our culture: the power of networks and the distributed-collaboration tools they evolve.

Broad access to… [transformative] tools and knowledge would help millions of people examine and analyze emerging information, nano- and biotechnologies [and geotechnologies], looking for errors and flaws that could lead to dangerous or unintended results. This concept has precedent: it already works in the world of software, with the “free software” or “open source” movement. A multitude of developers, each interested in making sure the software is as reliable and secure as possible, do a demonstrably better job at making hard-to-attack software than an office park’s worth of programmers whose main concerns are market share, liability, and maintaining trade secrets.

[…] The more people participate, even in small ways, the better we get at building up our knowledge and defenses. And this openness has another, not insubstantial, benefit: transparency. It is far more difficult to obscure the implications of new technologies (or, conversely, to oversell their possibilities) when people around the world can read the plans.

The idea of opening transformative technologies is controversial. One argument often leveled against it is that it puts dangerous “knowledge-enabled” technologies into the hands of people who would abuse them. Fortunately, such a charge isn’t likely to apply in any significant way to discussions of geotechnology, largely because the industrial capacity required to take advantage of these technologies is well beyond most countries, let alone super-empowered individuals and small groups. Another criticism of the open approach attacks it for undermining the market. But concerns about proprietary information and profit potential are hard to fathom with terraforming — there would be no plausible way to limit access to climate change remediation only to those who pay for it. Ultimately, the downsides of making potential geoengineering methods open are tiny, while the benefits are massive.

It’s not entirely clear if an open source approach for terraforming technology would be allowed within the Virgin Earth Challenge rules. The “terms and conditions” appear to require secrecy during the development process, but leave open the possibility of a variety of licensing conditions afterwards. Presumably, this would include open source/free access licenses. This is better than nothing, but the secrecy-during-development requirements should have an exception for open source competitors. The value of the “many eyes” approach is enhanced if it isn’t limited to after-the-fact analysis. Discovery of a flaw requiring a redesign is less costly — and less likely to be ignored — if it happens early in the development process.

Let me be clear: I am not calling for geoengineering as the solution to global warming. We know nowhere near enough to make (re)terraforming a plausible or safe option. Our best pathway to avoiding climate disaster remains the rapid reduction and elimination of anthropogenic greenhouse gases. But I am calling for us to learn more about geotechnologies. Like it or not, we’ve entered the era of intentional geoengineering. The people who believe that (re)terraforming is a bad idea need to be part of the discussion about specific proposals, not simply sources of blanket condemnations. We need their insights and intelligence. The best way to make that happen, the best way to make sure that any terraforming effort leads to a global benefit, not harm, is to open the process of studying and developing geotechnological tools.

It may well be the best example yet seen of the importance of opening the future.