
Newsweek is reporting the results of a scientific study by researchers at Carnegie Mellon who used MRI technology to scan the brains of human subjects. The subjects were shown a series of images of various tools (hammer, drill, pliers, etc.) and were then asked to think about the properties of each tool, while a computer was tasked with determining which item the subject was thinking about. To make the computer’s task even more challenging, the researchers excluded information from the brain’s visual cortex, which would have reduced the problem to a simpler pattern-recognition exercise for which decoding techniques are already known. Instead, they focused the scanning on higher-level cognitive areas.

The computer was able to determine with 78 percent accuracy whether a subject was thinking about, say, a hammer rather than a pair of pliers. With one particular subject, the accuracy reached 94 percent.
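The article gives no detail on the researchers’ pipeline, but the general technique behind studies like this, multivoxel pattern analysis, is straightforward to sketch: train a classifier on labeled activation patterns, then test it on held-out scans. The snippet below is a hypothetical illustration only, using synthetic data and an off-the-shelf scikit-learn logistic-regression classifier; the trial counts, voxel counts, and noise levels are all invented.

```python
# Hypothetical sketch of multivoxel pattern analysis (MVPA), the general
# approach behind fMRI decoding studies. All data here is synthetic; the
# actual study's method is not described in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_tools = 120, 500, 3  # e.g. hammer, drill, pliers

# Simulate activation patterns: each tool evokes a slightly different
# mean activation across voxels, buried in per-trial noise.
tool_signatures = rng.normal(0.0, 1.0, (n_tools, n_voxels))
labels = rng.integers(0, n_tools, n_trials)
X = tool_signatures[labels] + rng.normal(0.0, 5.0, (n_trials, n_voxels))

# Train a linear classifier on labeled scans and score it with
# cross-validation; accuracy well above the 1-in-3 chance level is the
# kind of result the article reports.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.0%}")
```

In those terms, the study’s 78 percent figure means the classifier picked the right tool on held-out scans far more often than the 33 percent expected by chance.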

Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.

While an exact date has not yet been set, we intend to offer you an exciting lineup of speakers on a day in the late spring or early summer of 2008.

Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.

If you’re interested in speaking, want to help, or just want to learn more, please contact me at [email protected].


DARPA (the Defense Advanced Research Projects Agency) is the US military’s R&D arm for far-reaching future technology. What most people do not realize is how much revolutionary medical technology comes out of this agency’s military R&D programs. For background, you can read about the Army and DARPA’s future-soldier Land Warrior program and its medtech offshoots, as well as why DARPA does medical research and development that industry won’t. Fear of these future military technologies runs high, with pushes toward neural activation as a weapon, direct brain-computer interfaces, and drones. However, these new programs have enormous potential for revolutionary medical progress as well.

It has been said that technology is neutral; it is the application that is either good or evil. (It is worth a side-track to read a discussion of this concept.)

DARPA’s areas of focus for 2007 and beyond are:

  1. Chip-Scale Atomic Clock
  2. Global War on Terrorism
  3. Unmanned Air Vehicles
  4. Militarization of Space
  5. Supercomputer Systems
  6. Biological Warfare Defense
  7. Prosthetics
  8. Quantum Information Science
  9. Newton’s Laws for Biology
  10. Low-Cost Titanium
  11. Alternative Energy
  12. High Energy Liquid Laser Area Defense System

The potential for destructive use of these technologies is obvious. For a complete review of these projects and the beneficial medical applications of each, visit docinthemachine.com.

In an important step toward acknowledging the possibility of real AI in our immediate future, a report commissioned by the UK government says that robots could one day have the same rights and responsibilities as human citizens. The Financial Times reports:

The next time you beat your keyboard in frustration, think of a day when it may be able to sue you for assault. Within 50 years we might even find ourselves standing next to the next generation of vacuum cleaners in the voting booth. Far from being extracts from the extreme end of science fiction, the idea that we may one day give sentient machines the kind of rights traditionally reserved for humans is raised in a British government-commissioned report which claims to be an extensive look into the future. Visions of the status of robots around 2056 have emerged from one of 270 forward-looking papers sponsored by Sir David King, the UK government’s chief scientist.

The paper covering robots’ rights was written by a UK partnership of Outsights, the management consultancy, and Ipsos Mori, the opinion research organisation. “If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology. The idea will not surprise science fiction aficionados.

It was widely explored by Dr Isaac Asimov, one of the foremost science fiction writers of the 20th century. He wrote of a society where robots were fully integrated and essential in day-to-day life. In his system, the ‘three laws of robotics’ governed machine life. They decreed that robots could not injure humans, must obey orders and protect their own existence – in that order.
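As an aside, that ‘in that order’ structure makes the laws a lexicographic rule check, which a few lines of code can make concrete. The sketch below is a purely illustrative toy of that ordering, not anything proposed in the report:

```python
# Toy illustration of Asimov's priority-ordered laws: a proposed action
# is vetoed by the first (highest-priority) law it violates. Purely
# illustrative; this is fiction, not a real safety mechanism.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool
    disobeys_order: bool
    endangers_self: bool

LAWS = [  # checked strictly in priority order
    ("First Law", lambda a: a.harms_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law", lambda a: a.endangers_self),
]

def evaluate(action: Action) -> str:
    for name, violated_by in LAWS:
        if violated_by(action):
            return f"forbidden by the {name}"
    return "permitted"

# An action that disobeys an order but harms no one is vetoed by the
# Second Law, the highest-priority rule it breaks.
print(evaluate(Action(harms_human=False, disobeys_order=True,
                      endangers_self=False)))
```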

Robots and machines are now classed as inanimate objects without rights or duties, but if artificial intelligence becomes ubiquitous, the report argues, there may be calls for humans’ rights to be extended to them. It is also logical that such rights are meted out with citizens’ duties, including voting, paying tax and compulsory military service.

Mr Christensen said: “Would it be acceptable to kick a robotic dog even though we shouldn’t kick a normal one? There will be people who can’t distinguish that so we need to have ethical rules to make sure we as humans interact with robots in an ethical manner so we do not move our boundaries of what is acceptable.”

The Horizon Scan report argues that if ‘correctly managed’, this new world of robots’ rights could lead to increased labour output and greater prosperity. “If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare to fix the machines over time,” it says.

But it points out that the process has casualties, and the first one may be the environment, especially in the areas of energy and waste.

Human-level AI could be invented within 50 years, if not much sooner. Our supercomputers are already approaching the computing power of the human brain, and the software end of things is starting to progress steadily. It’s time for us to start thinking about AI as a positive and negative factor in global risk.
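That ‘approaching’ claim can be sanity-checked with simple arithmetic, as long as the assumptions stay visible: published estimates of the brain’s effective computing power are contested and span several orders of magnitude, and the machine figure below is the roughly 0.6-petaflop peak of BlueGene/L, the fastest supercomputer of this period.

```python
# Back-of-envelope comparison, with both inputs loudly hedged:
# estimates of the brain's effective compute run from roughly 1e14
# ops/s (Moravec-style retina scaling) to 1e17 ops/s or more, while
# BlueGene/L peaked near 0.6 petaflops (6e14 FLOPS).
brain_low, brain_high = 1e14, 1e17  # ops/s, contested estimates
bluegene_peak = 6e14                # FLOPS, BlueGene/L circa 2007

print(f"brain/machine at low estimate:  {brain_low / bluegene_peak:.1f}x")
print(f"brain/machine at high estimate: {brain_high / bluegene_peak:.0f}x")
# On these assumptions the machine already exceeds the low estimate and
# trails the high one by ~170x: "approaching" is fair, but the error
# bars span three orders of magnitude.
```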