Blog

Sep 18, 2014

It’s Time to Take Artificial Intelligence Seriously

Posted in category: robotics/AI

Christopher Mims — Wall Street Journal

The age of intelligent machines has arrived—only they don’t look at all like we expected. Forget what you’ve seen in movies; this is no HAL from “2001: A Space Odyssey,” and it’s certainly not Scarlett Johansson’s disembodied voice in “Her.” It’s more akin to what insects, or even fungi, do when they “think.” (What, you didn’t know that slime molds can solve mazes?)

Artificial intelligence has lately been transformed from an academic curiosity to something that has measurable impact on our lives. Google Inc. used it to increase the accuracy of voice recognition in Android by 25%. The Associated Press is printing business stories written by it. Facebook Inc. is toying with it as a way to improve the relevance of the posts it shows you.

Continue reading “It's Time to Take Artificial Intelligence Seriously” »

Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted in categories: alien life, biological, cyborgs, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
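
Bostrom’s point is at bottom an expected-value calculation: a small probability multiplied by an enormous loss can outweigh a larger probability of a bounded loss. Here is a minimal sketch of that arithmetic, with purely hypothetical numbers of my own choosing (none of the figures come from Bostrom):

```python
# Illustrative expected-loss arithmetic; all numbers are hypothetical,
# chosen only to show the shape of the argument.

risks = {
    # name: (probability over some fixed horizon, harm measured in lives)
    "severe but bounded catastrophe": (0.10, 1e8),   # fairly likely, limited scope
    "existential catastrophe":        (0.005, 8e9),  # rare, but everyone is lost
}

for name, (p, harm) in risks.items():
    print(f"{name}: expected loss = {p * harm:,.0f} lives")

# Output: 10,000,000 vs 40,000,000 lives. Despite being 20x less probable,
# the existential case dominates the expected loss, which is the core of the
# claim that significant resources should go to guarding against it.
```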

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks -- or Transhumanism” »

Sep 17, 2014

Artificial Intelligence: How Algorithms Make Systems Smart

Posted in category: robotics/AI

By Stephen F. DeAngelis, Enterra Solutions — Wired

“Algorithm” is a word that one hears used much more frequently than in the past. One of the reasons is that scientists have learned that computers can learn on their own if given a few simple instructions. That’s really all that algorithms are: mathematical instructions. Wikipedia states that an algorithm “is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.” Whether you are aware of it or not, algorithms are becoming a ubiquitous part of our lives. Some pundits see danger in this trend. For example, Leo Hickman (@LeoHickman) writes, “The NSA revelations highlight the role sophisticated algorithms play in sifting through masses of data. But more surprising is their widespread use in our everyday lives. So should we be more wary of their power?” [“How algorithms rule the world,” The Guardian, 1 July 2013] It’s a bit hyperbolic to declare that algorithms rule the world, but I agree that their use is becoming more widespread. That’s because computers are playing increasingly important roles in so many aspects of our lives. I like the HowStuffWorks explanation:

Continue reading “Artificial Intelligence: How Algorithms Make Systems Smart” »
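
As a concrete illustration of the “step-by-step procedure for calculations” quoted above, here is a minimal example (mine, not from the article) of one of the oldest recorded algorithms, Euclid’s method for the greatest common divisor:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b)
    until the remainder is zero; the surviving value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

# A few simple instructions, applied step by step, carry out the whole calculation.
print(gcd(1071, 462))  # prints 21
```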

Sep 16, 2014

We Need to Pass Legislation on Artificial Intelligence Early and Often

Posted in category: robotics/AI

By John Frank Weaver — Slate

Not that long ago, Google announced something unheard of in the auto industry—at least in the part of the auto industry that makes moving cars: a car without a steering wheel or gas and brake pedals. To Google, this was the next step in self-driving cars. Why bother with a steering wheel if the driver isn’t driving? Some observers questioned whether this feature of the proposed test vehicle violated the autonomous vehicle statute in California (where the vehicle would be tested), which required that the driver be able to take control of the self-driving vehicle in case the autonomous system malfunctions. Google claimed that it installed an on/off button, which satisfied the California law.

California recently weighed in: Google, you’re wrong. The state has released regulations requiring that a test driver be able to take “active physical control” of the car, meaning with a steering wheel and brakes.

Continue reading “We Need to Pass Legislation on Artificial Intelligence Early and Often” »

Sep 15, 2014

Robots Aren’t Out to Get You. You Should Be Terrified of Them Anyway.

Posted in category: robotics/AI

Slate

In the recent discussion over the risks of developing superintelligent machines—that is, machines with general intelligence greater than that of humans—two narratives have emerged. One side argues that if a machine ever achieved advanced intelligence, it would automatically know and care about human values and wouldn’t pose a threat to us. The opposing side argues that artificial intelligence would “want” to wipe humans out, either out of revenge or an intrinsic desire for survival.

As it turns out, both of these views are wrong. We have little reason to believe a superintelligence will necessarily share human values, and no reason to believe it would place intrinsic value on its own survival either. These arguments make the mistake of anthropomorphising artificial intelligence, projecting human emotions onto an entity that is fundamentally alien.

Read more

Sep 14, 2014

What if Destiny were real? The UK’s cities imagined with space travel

Posted in category: entertainment

Adam Gell — HITC

Destiny has been in players’ hands for the past few days, and I’ve been doing my part to fight The Darkness this week too. But as the game uses our own galaxy as the setting to tell its story, complete with futuristic space travel and talk of a Golden Age brought by the arrival of The Traveller, Activision and Bungie have worked with the National Space Centre to see what the UK could look like in a future where space travel is real.

Similarly, Destiny lets you travel to a futuristic imagining of the Russian Cosmodrome, the real-world site of Earth’s first and largest space facility, where Sputnik 1 (the first artificial Earth satellite) was launched in 1957. In the game the site looks quite different from today’s real-world counterpart, as humanity has gone through a Golden Age of space travel and, years later, been pushed to the brink of extinction by the arrival of The Darkness.

Read more

Sep 13, 2014

Why the Ebola Fire Can Almost No Longer Be Stopped

Posted in category: biotech/medical

Stopping it would require an immediate effort dwarfing all past peacetime and wartime efforts, a prospect that appears almost infinitely unlikely to be met in time.

The outbreak has long since passed the threshold of instability and can now only be spatially contained by forming uninfected areas (A) that are as large as possible and infected areas (B) that are as small as possible. Water, food, gowns and disinfectants must be provided by international teams immediately, in exponentially growing quantities and for whole countries. A supporting industry must be set in motion in a planet-wide effort.

The bleak prospect that quenching the disease is close to a point of no return stems from chaos theory, which is essentially a theory of exponential growth (of differences in initial conditions). “Exponential growth” means that whatever level has been reached – here, the number of infected persons – will keep doubling after a constant number of time units for a long time to come. Empirically, the count has doubled every 3 weeks for 5 months in a row, with no abatement in sight. See the precise graphs at the end of: http://en.wikipedia.org/wiki/Ebola_virus_epidemic_in_West_Africa
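
A constant three-week doubling time is easy to project forward. The sketch below is my own illustration (the starting case count is an assumed round number, not a figure from the post) of how quickly a fixed doubling period compounds:

```python
# Project case counts under a constant doubling time.
DOUBLING_TIME_WEEKS = 3      # the empirical doubling period cited above
INITIAL_CASES = 5_000        # assumed starting count, for illustration only

def projected_cases(weeks_ahead: float) -> float:
    """Cases expected after `weeks_ahead` weeks if doubling continues unchecked."""
    return INITIAL_CASES * 2 ** (weeks_ahead / DOUBLING_TIME_WEEKS)

for weeks in (0, 3, 6, 12, 24):
    print(f"week {weeks:2d}: ~{projected_cases(weeks):,.0f} cases")

# Eight doublings in 24 weeks multiply the count 256-fold, which is why
# the window for containment closes so quickly.
```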

Sep 13, 2014

Century 21st Adaptability Into Outright Success, Regardless of Hugely Indebted Nations and a Global Economy Disrupted to Its Knees! (Image)

Posted in category: futurism

Sep 13, 2014

The Three Simultaneous Foresight Imperatives Into Victory! [Graphic]

Posted in category: futurism

Sep 13, 2014

Neuromodulation 2.0: New Developments in Brain Implants, Super Soldiers and the Treatment of Chronic Disease

Posted in categories: biotech/medical, defense, transhumanism

Singularity Hub

Brain implants, here we come.

DARPA just announced the ElectRX program, a $78.9 million attempt to develop minuscule electronic devices that interface directly with the nervous system in the hopes of curing a bunch of chronic conditions, ranging from the psychological (depression, PTSD) to the physical (Crohn’s, arthritis). Of course, the big goal here is to usher in a revolution in neuromodulation—that is, the science of modulating the nervous system to fix an underlying problem.

Read more