Dec 22, 2006

UK Government Report Talks Robot Rights

Posted in categories: robotics/AI, supercomputing

In an important step forward for acknowledging the possibility of real AI in our immediate future, a report commissioned by the UK government suggests that robots may one day have the same rights and responsibilities as human citizens. The Financial Times reports:

The next time you beat your keyboard in frustration, think of a day when it may be able to sue you for assault. Within 50 years we might even find ourselves standing next to the next generation of vacuum cleaners in the voting booth. Far from being extracts from the extreme end of science fiction, the idea that we may one day give sentient machines the kind of rights traditionally reserved for humans is raised in a British government-commissioned report which claims to be an extensive look into the future. Visions of the status of robots around 2056 have emerged from one of 270 forward-looking papers sponsored by Sir David King, the UK government’s chief scientist.

The paper covering robots’ rights was written by a UK partnership of Outsights, the management consultancy, and Ipsos Mori, the opinion research organisation. “If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology. The idea will not surprise science fiction aficionados.

It was widely explored by Dr Isaac Asimov, one of the foremost science fiction writers of the 20th century. He wrote of a society where robots were fully integrated and essential in day-to-day life. In his system, the ‘three laws of robotics’ governed machine life. They decreed that robots could not injure humans, must obey orders and protect their own existence – in that order.

Robots and machines are now classed as inanimate objects without rights or duties but if artificial intelligence becomes ubiquitous, the report argues, there may be calls for humans’ rights to be extended to them. It is also logical that such rights are meted out with citizens’ duties, including voting, paying tax and compulsory military service.

Mr Christensen said: “Would it be acceptable to kick a robotic dog even though we shouldn’t kick a normal one? There will be people who can’t distinguish that so we need to have ethical rules to make sure we as humans interact with robots in an ethical manner so we do not move our boundaries of what is acceptable.”

The Horizon Scan report argues that if ‘correctly managed’, this new world of robots’ rights could lead to increased labour output and greater prosperity. “If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare to fix the machines over time,” it says.

But it points out that the process has casualties and the first one may be the environment, especially in the areas of energy and waste.

Human-level AI could be invented within 50 years, if not much sooner. Our supercomputers are already approaching the computing power of the human brain, and the software end of things is starting to progress steadily. It’s time for us to start thinking about AI as a positive and negative factor in global risk.
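For a rough sense of scale behind that claim (the figures below are illustrative assumptions, not numbers from the report): a top supercomputer in 2006 delivered on the order of a few hundred teraflops, while commonly cited estimates of brain-equivalent computation range from roughly 10^15 to 10^17 operations per second. A minimal back-of-envelope sketch, assuming Moore's-Law-style doubling every two years:

```python
import math

# Back-of-envelope comparison; all figures are illustrative assumptions.
supercomputer_flops_2006 = 2.8e14   # roughly Blue Gene/L class (~280 TFLOPS)
brain_ops_per_sec_estimate = 1e16   # one commonly cited mid-range estimate
doubling_period_years = 2.0         # Moore's-Law-style doubling assumption

# How many doublings (and years) until hardware matches the assumed brain estimate?
shortfall = brain_ops_per_sec_estimate / supercomputer_flops_2006
years_to_parity = doubling_period_years * math.log2(shortfall)

print(f"Current shortfall: {shortfall:.0f}x")
print(f"Years to parity under these assumptions: {years_to_parity:.0f}")
```

Under these particular assumptions the gap closes in about a decade; each hundredfold increase in the brain estimate adds roughly 13 more years at a two-year doubling rate.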

4 Comments — comments are now closed.


  1. D. Berleant says:

    The article states: “Human-level AI could be invented within 50 years, if not much sooner. Our supercomputers are already approaching the computing power of the human brain, and the software end of things is starting to progress steadily.”

    Well…“could be” also means “might not be”.

    Even worse, “is starting to progress steadily” is an oxymoron.

    In fact, software productivity is increasing by several percent per year, in marked contrast to commonly quoted hardware metrics (e.g. “Moore’s Law”) which increase much faster. So software is likely to be the bottleneck and who knows how long it will take to solve the intelligent software problem. One thing’s for sure, the timing of the AI singularity is not governed by Moore’s Law. So this little article is fun to read but suffers from journalistically simple-minded overhype. No point in writing your congressman about robot rights just yet.

  2. Whether or not I can kick something depends on whether or not it feels. Because of the “other minds problem” (and Descartes) the only one that I can be certain feels is myself. Other people almost certainly feel, because they act exactly the way I do. So I assume they feel too. In fact it feels as if they feel (because I am biologically programmed by my genes to “mind-read” certain behaviors as signaling certain feelings: smiles, frowns, screams).

    So I’m almost certain you feel, and almost as certain a dog does too, and that a rock doesn’t, and probably not a plant either. Or a toaster. Or a computer. Or any of today’s robots. When will I become as doubtful that a robot doesn’t feel as I am confident that you do? When I can’t tell the two of you apart — in terms of what you can (both) do.

    It is our doings on which the judgment is based in both cases, or rather on our capacity for doing. That is what Turing’s Test is based on, and designing a candidate that can pass Turing’s Test (of indistinguishability from human beings) is both the goal of cognitive science and the criterion for whether or not there should be laws, one day, to give robots equal protection under the law (or at least as much as their animal counterparts are accorded).

    Harnad, S. (2001) Spielberg’s AI: Another Cuddly No-Brainer. http://cogprints.org/2131/

    Harnad, S. (2006) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Kluwer http://eprints.ecs.soton.ac.uk/7741/

  3. To “Mir Private server” (xiaonanok, March 3rd, 2007 12:13 am): They have a real background in the eighties as we used the Russian Molnar satellite systems as an uplink.

  4. John Kennard says:

    So we’re posturing about robot rights without a glimmer on the legal horizon of civil rights for animals? not even for those which dream (all mammals, all birds, some reptiles)? not even for primates, cats and dogs?