
— BBC

Two robotics experts, Prof Ronald Arkin and Prof Noel Sharkey, will debate the efficacy and necessity of killer robots.

The meeting will be held during the UN Convention on Certain Conventional Weapons (CCW). A report on the discussion will be presented to the CCW meeting in November. This will be the first time that the issue of killer robots, or lethal autonomous weapons systems, will be addressed within the CCW.

Read more

— Popular Science

It happens quickly—more quickly than you, being human, can fully process.

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.
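The casualty-counting logic the article imagines can be caricatured in a few lines. This is a purely hypothetical sketch of a "minimize expected deaths" policy, not any real vehicle's control code; the maneuver names and casualty figures come from the scenario above.

```python
# Hypothetical caricature of a casualty-minimizing collision-response
# policy, as the article imagines it. Not real autonomous-vehicle code.

def choose_maneuver(options):
    """Pick the maneuver with the fewest expected casualties.

    `options` maps each maneuver name to the number of people expected
    to die if it is taken, including the vehicle's own occupants.
    """
    return min(options, key=options.get)

# The article's scenario: swerving left means a head-on crash killing
# two people in the other car; swerving right sends the SUV, with its
# one occupant, over the cliff.
scenario = {"swerve_left": 2, "swerve_right": 1}
print(choose_maneuver(scenario))  # -> swerve_right
```

The point of the caricature is how little the "math" cares about whose deaths it is counting: the occupant's ticket over the cliff is just the smaller number.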

Read more

Transcendence
I recently saw the film Transcendence with a close friend. If you can get beyond Johnny Depp’s siliconised mugging of Marlon Brando and Rebecca Hall’s waddling through corridors of quantum computers, Transcendence provides much to think about. Even though Christopher Nolan of Inception fame was involved in the film’s production, the pyrotechnics are relatively subdued – at least by today’s standards. While this fact alone seems to have disappointed some viewers, it nevertheless enables you to focus on the dialogue and plot. The film is never boring, even though nothing about it is particularly brilliant. However, the film stays with you, and that’s a good sign. Mark Kermode at the Guardian was one of the few reviewers who did the film justice.

The main character, played by Depp, is ‘Will Caster’ (aka Ray Kurzweil, but perhaps also an allusion to Hans Castorp in Thomas Mann’s The Magic Mountain). Caster is an artificial intelligence researcher based at Berkeley who, with his wife Evelyn Caster (played by Hall), is trying to devise an algorithm capable of integrating all of earth’s knowledge to solve all of its problems. (Caster calls this ‘transcendence’ but admits in the film that he means ‘singularity’.) They are part of a network of researchers doing similar things. Although British actors like Hall and the key colleague Paul Bettany (sporting a strange Euro-English accent) are main players in this film, the film itself appears to transpire entirely within the borders of the United States. This is a bit curious, since a running assumption of the film is that if you suspect a malevolent consciousness has been uploaded to the internet, then you should shut the whole thing down. But in this film at least, ‘the whole thing’ is limited to American cyberspace.

Before turning to two more general issues concerning the film, which I believe may have led both critics and viewers to leave unsatisfied, let me draw attention to a couple of nice touches. First, the leader of the ‘Revolutionary Independence from Technology’ (RIFT), whose actions propel the film’s plot, explains that she used to be an advanced AI researcher who defected upon witnessing the endless screams of a Rhesus monkey while its entire brain was being digitally uploaded. Once I suspended my disbelief in the occurrence of such an event, I appreciated it as a clever plot device for showing how one might quickly convert from being radically pro- to anti-AI, perhaps presaging future real-world targets for animal rights activists. Second, I liked the way in which quantum computing was highlighted and represented in the film. Again, what we see is entirely speculative, yet it highlights the promise that one day it may be possible to read nature as pure information that can be assembled according to need to produce what one wants, thereby rendering our nanotechnology capacities virtually limitless. 3D printing may be seen as a toy version of this dream.

Now on to the two more general issues, which viewers might see as faults, but which I think are better treated as what the Greeks called aporias (i.e. open questions):

(1) I think this film is best understood as taking place in an alternative future projected from when, say, Ray Kurzweil first proposed ‘the age of spiritual machines’ (i.e. 1999). This is not the future as projected in, say, Spielberg’s Minority Report, in which the world has become so ‘Jobs-ified’ that everything is touch screen-based. In fact, the one moment where a screen is very openly touched proves inconclusive (i.e. when, just after the upload, Evelyn impulsively responds to Will being on the other side of the interface). This is still a world very much governed by keyboards (hence the symbolic opening shot where a keyboard is used as a doorstop in the cyber-meltdown world). Even the World Wide Web doesn’t seem to have the prominence one might expect in a film where computer screens are featured so heavily. Why is this the case? Perhaps because the script had been kicking around for a while (which is true). This may also explain why Evelyn’s pep talk to funders includes a line about Einstein saying something ‘nearly fifty years ago’. (Einstein died in 1955.) Or, for that matter, why the FBI agent (played by Irish actor Cillian Murphy) looks like something out of a 1970s TV detective series, the on-site military commander looks like George C. Scott and the great quantum computing mecca is located in a town that looks frozen in the 1950s. Perhaps we are seeing here the dawn of ‘steampunk’ for the late 20th century.

(2) The film contains heavy Christian motifs, mainly surrounding Paul Bettany’s character, Max Waters, who turns out to be the only survivor of the core research team involved in uploading consciousness. He wears a cross around his neck, which pops up at several points in the film. Moreover, once Max is abducted by RIFT, he learns that his writings querying whether digital uploading enhances or obliterates humanity have been unwittingly inspirational. Max and Will can be contrasted in terms of where they stand in relation to the classic Faustian bargain: Max refuses what Will accepts (quite explicitly, in response to the person who turns out to be his assassin). At stake is whether our biblically privileged status as creatures entitles us to take the next step to outright deification, which in this case means merging with the source of all knowledge on the internet. To underscore the biblical dimension of the dilemma, toward the end of the film, Max confronts Evelyn (Eve?) with the realization that she was the one who nudged Will toward this crisis. Yet, the film’s overall verdict on his Faustian fall is decidedly mixed. Once uploaded, Will does no permanent damage, despite the viewer’s expectations. On the contrary, like Jesus, he manages to cure the ill, and even when battling with the amassed powers of the US government and RIFT, he ends up not killing anyone. However, the viewer is led to think that Will 2.0 may have overstepped the line when he revealed his ability to monitor Evelyn’s thoughts. So the real transgression appears to lie in the violation of privacy. (The Snowdenistas would be pleased!) But the film leaves the future quite open, as what the viewer sees in the opening and final scenes looks more like the result of an extended blackout (and hints are given that some places have already begun to restore their ICT infrastructure) than anything resembling irreversible damage to life as we know it.
One can read this as either a warning shot of greater damage ahead if we go down the ‘transcendence’ route, or a suggestion that such a route might be worth pursuing if we manage to sort out the ‘people issues’. Given that Max ends the film by eulogising Will and Evelyn’s attempts to benefit humanity, I read the film as cautiously optimistic about the prospects for ‘transcendence’, where the film’s plot is taken as offering a simulated trial run.

My own final judgement is that this film would be very good for classroom use to raise the entire range of issues surrounding what I have called ‘Humanity 2.0’.

Jordan Pearson — Motherboard

Superintelligent AI Could Wipe Out Humanity, If We're Not Ready for It
Impending technological change tends to elicit a Janus-faced reaction in people: part awe, part creeping sense of anxiety and terror. During the Industrial Revolution, Henry Ford called it “the terror of the machine.” Today, it’s the looming advancements in artificial intelligence that promise to create programs with superhuman intelligence—the infamous singularity—that are starting to weigh on the public consciousness, as blockbuster ‘netsploitation flick Transcendence illustrates.

There’s a danger that sci-fi pulp like Transcendence is watering down the real risks of artificial intelligence in public discourse. But these threats are being taken very seriously by researchers who are studying the existential threat AI poses to the human race.

Read more

— ars technica


Up until this point, the musical genre known as “drone rock” had been a weird, indie niche that consisted of slow, loud guitar noise. If machinists and programmers have their way, that description will need some major rewriting—but it’ll probably seem just as weird.

This week, a team at Philadelphia-based KMel Robotics, known for building airborne video recording solutions, turned their robot-making talents to creating a band. The company pre-programmed a six-aircraft ensemble to hover over instruments and strum or strike without any human interaction, other than the team’s initial press of a “play” button.

Read more

Book Review: The Human Race to the Future by Daniel Berleant (2013) (A Lifeboat Foundation publication)


From CLUBOF.INFO

The Human Race to the Future (2014 Edition) is a publication of the scientific Lifeboat Foundation think tank, first made available in 2013, covering a number of dilemmas fundamental to the human future and of great interest to all readers. Daniel Berleant’s approach to popularizing science is more entertaining than that of many other science writers, and this book contains many surprises and much useful knowledge.

Some of the science covered in The Human Race to the Future, such as future ice ages and predictions of where natural evolution will take us next, is not immediately relevant to our lives and politics, but it still makes fascinating reading. The rest of the science in the book is closely linked to society’s immediate future, and deserves great consideration by commentators, activists and policymakers because it is only going to get more important as the world moves forward.

The book makes many warnings and calls for caution, but also makes an optimistic forecast about how society might look in the future. For example, it is “economically possible” to have a society where all the basics are free and all work is essentially optional (a way for people to turn their hobbies into a way of earning more possessions) (p. 6–7).

A transhumanist possibility of interest in The Human Race to the Future is the change in how people communicate, including closing the gap between thought and action to create instruments (maybe even mechanical bodies) that respond to thought alone. The world may be projected to move away from keyboards and touchscreens towards mind-reading interfaces (p. 13–18). This would be a necessity for people with physical disabilities, and for soldiers in an arms race to improve response times in lethal situations.

To critique the above point made in the book, it is likely that drone operators and power-armor wearers in future armies would be very keen to link their brains directly to their hardware, and the emerging mind-reading technology would make it possible. However, there is reason to doubt the possibility of effective teamwork while relying on such interfaces. Verbal or visual interfaces are actually better attuned to humans as social animals, letting us hear or see our colleagues’ thoughts and review their actions as they happen, which allows for better teamwork. A soldier, for example, may be happy with his own improved reaction times when controlling equipment directly with his brain, but his fellow soldiers and officers may only be irritated by the lack of an intermediate phase in which to see his intent and countermand his actions before he completes them. Some helicopter and vehicle accidents are averted only because one crewman sees another’s error and corrects him in time. If vehicles were controlled by mind-reading, such errors would increasingly become fatal.

Reading and research is also an area that could develop in a radical new direction unlike anything before in the history of communication. The Human Race to the Future speculates that beyond articles as they exist now (e.g. Wikipedia articles) there could be custom-generated articles specific to the user’s research goal or browsing. One’s own query could shape the layout and content of each article, as it is generated. This way, reams of irrelevant information will not need to be waded through to answer a very specific query (p. 19–24).

Echoing views I have expressed in my own writing, the book sees industrial civilization as burdened above all by too much centralization, e.g. oil refineries. This endangers civilization, and threatens collapse if something should later go wrong (p. 32, 33). For example, an electromagnetic pulse (EMP) resulting from a solar storm could cause serious damage because of the centralization of electrical infrastructure. Digital sabotage could also threaten such infrastructure (p. 34, 35).

The solution to this problem is decentralization, as “where centralization creates vulnerability, decentralization alleviates it” (p. 37). Solar cells are one example of decentralized power production (p. 37–40), but there is also much promise in home fuel production using such things as ethanol and biogas (p. 40–42). Beyond fuel, there is also much benefit that could come from decentralized, highly localized food production, even “labor-free”, and “using robots” (p. 42–45). These possibilities deserve maximum attention for the sake of world welfare, considering the increasing UN concerns about getting adequate food and energy supplies to the growing global population. There should be no need for a food vs. fuel debate, as the only acceptable course is to engineer solutions to both problems. An additional option for increasing food production is artificial meat, which should aim to replace the reliance on livestock. Reliance on livestock has an “intrinsic wastefulness” that artificial meat does not have, so it makes sense for artificial meat to become the cheapest option in the long run (p. 62–65). Perhaps stranger and more profound is the option of genetically enhancing humans to make better use of food and other resources (p. 271–274).

On a related topic, sequencing our own genome may have “major impacts, from medicine to self-knowledge” (p. 46–51). However, the book does not mention synthetic biology or the potential impacts of J. Craig Venter’s work, as explained in such works as Life at the Speed of Light. This could certainly be worth adding to the story, if future editions of the book aim to include some additional detail.

At least related to synthetic biology is the book’s discussion of genetic engineering of plants to produce healthier or more abundant food. Alternatively, plants could be genetically programmed to extract metal compounds from the soil (p. 213–215). However, we must be aware that this could similarly lead to threats, such as “superweeds that overrun the world”, similar to the flora in John Wyndham’s The Day of the Triffids (p. 197–219). Synthetic biology products could also accidentally expose civilization to microorganisms with unknown consequences, perhaps even as dangerous as alien contagions depicted in fiction. On the other hand, they could lead to potentially unlimited resources, with strange vats of bacteria capable of manufacturing oil from simple chemical feedstocks. Indeed, “genetic engineering could be used to create organic prairies that are useful to humans” (p. 265), literally redesigning and upgrading our own environment to give us more resources.

The book advocates that politics should focus on long-term thinking, e.g. to deal with global warming, and should involve “synergistic cooperation” rather than “narrow national self-interest” (p. 66–75). This is a very important point, and coincides with the prediction that nation-states in their present form are flawed and too slow-moving. Nation-states may be increasingly incapable of meeting the challenges of an interconnected world in which national narratives produce less and less legitimate security thinking and transnational identities become more important.

Close to issues of security, The Human Race to the Future considers nuclear proliferation, and argues that the reasons for proliferation need to be investigated in more depth for the sake of reducing the incentives behind it. To avoid further research, on the assumption that it has already been sufficiently completed, is “downright dangerous” (p. 89–94). Such a call is certainly necessary at a time when there is still hostility against developing countries with nuclear programs, hostility that is simply inflammatory and makes the world more dangerous. To a large extent, nuclear proliferation is inevitable in a world where countries are permitted to bomb one another over little more than suspicions and fears.

Another area covered in this book that is worth highlighting is the AI singularity, described here as the point at which a computer is sophisticated enough to design a more powerful computer than itself. While it could mean unlimited engineering and innovation without the need for human imagination, there are also great risks: for example, a “corporbot” or “robosoldier,” determined to promote the interests of an organization or to defeat enemies, respectively. These, as science fiction has repeatedly warned, could become runaway entities that no longer listen to human orders (p. 83–88, 122–127).
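The book's definition, a machine able to design a machine more capable than itself, can be mimicked with a toy iteration. This is purely illustrative: the 10% capability gain per generation is an arbitrary assumption, and the function name is my own invention, not the book's.

```python
# Toy illustration of the singularity as defined in the book: each
# generation of "designer" produces a successor more capable than itself.
# The 10% improvement per generation is an arbitrary assumption.

def generations_to_reach(start_capability, target, improvement=1.10):
    """Count design generations until capability first exceeds `target`."""
    capability, generations = start_capability, 0
    while capability < target:
        capability *= improvement   # the successor outperforms its designer
        generations += 1
    return generations

print(generations_to_reach(1.0, 2.0))  # -> 8 (doubling at 10% per step)
```

Even this crude compound-growth picture conveys the worry behind the "runaway entity" scenarios: once each generation reliably improves on the last, capability grows geometrically, not linearly.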

A more distant possibility explored in Berleant’s book is the colonization of other planets in the solar system (p. 97–121, 169–174). There is the well-taken point that technological pioneers should already be trying to settle remote and inhospitable locations on Earth, to perfect the technology and society of self-sustaining settlements (Antarctica?) (p. 106). Disaster scenarios considered in the book that may necessitate us moving off-world in the long term include a hydrogen sulfide poisoning apocalypse (p. 142–146) and a giant asteroid impact (p. 231–236).

The Human Race to the Future is a realistic and practical guide to the dilemmas fundamental to the human future. Of particular interest to general readers, policymakers and activists should be the issues that concern the near future, such as genetic engineering aimed at conservation of resources and the achievement of abundance.

By Harry J. Bentham - More articles by Harry J. Bentham

Originally published on April 22 in h+ Magazine


JESSE McKINLEY — NYTimes

EASTON, N.Y. — Something strange is happening at farms in upstate New York. The cows are milking themselves.

Desperate for reliable labor and buoyed by soaring prices, dairy operations across the state are charging into a brave new world of udder care: robotic milkers, which feed and milk cow after cow without the help of a single farmhand.

Read more

Joaquin Phoenix talking to his iOS girlfriend Samantha in Her.

Johnny Depp dies and is reborn as a computer brain in Transcendence, the latest science-fiction thriller about artificial intelligence. Smart machines that may serve or dominate mankind are as old as Samuel Butler’s 1872 novel Erewhon, or Karel Capek’s 1920 play R.U.R. — and as recent as this week’s episode of The Simpsons, in which Dr. Frink revives the dead Homer as a chatty screensaver. They have also inhabited some of the finest SF movies, including Dark Star, Star Wars, Star Trek: The Motion Picture, Alien, Blade Runner, The Terminator and RoboCop. The list is inspiring and nearly endless.

Read more

AppleInsider Staff
Cortana, the Halo character after which Microsoft's Siri competitor is named
“User interface started with the command prompt, moved to graphics, then touch, and then gestures,” Microsoft research executive Yoram Yaakobi told the Wall Street Journal. “It’s now moving to invisible UI, where there is nothing to operate. The tech around you understands you and what you want to do. We’re putting this at the forefront of our efforts.”

With the push, dubbed “UI.Next,” Microsoft is pursuing a future in which users do not need to tell their device what to do — by touching or speaking to it, for instance — and instead passively consume information that the device has already prepared in anticipation of their needs.

Both Apple and Google have nodded in this direction already, though the technology is far from mature. Apple’s Passbook, for instance, can dynamically surface information like event tickets based on the user’s location, while Google’s Google Now will adjust a user’s schedule based on traffic conditions.

Read more