Blog

Archive for the ‘human trajectories’ category

Oct 24, 2014

Britons spend more time on tech than asleep, study suggests

Posted by in category: human trajectories

— BBC News

Communications regulator Ofcom said UK adults spend an average of eight hours and 41 minutes a day on media devices, compared with the average night’s sleep of eight hours and 21 minutes.

Almost four hours a day are spent watching TV, according to Ofcom’s survey of 2,800 UK adults and children.

Read more

Oct 4, 2014

Method of Sustainable Fuel-less Terra-forming of Venus & Mars

Posted by in categories: existential risks, futurism, human trajectories, solar power, space, sustainability

Terra Forming Venus & Mars by leveraging Asteroids
Inspired by: Lifeboat Foundation

Both Mars and Venus can be terra-formed to provide Earth-like gravity and atmospheres; Venus with an effort of about 100 years to terra-form the atmosphere, and Mars with an effort of about 2,000 years to terra-form the atmosphere. These are both potentially realized through the use of systems of solar sails. Asteroids provide many of the resources needed to seed related development.

Business model for interplanetary transport without fuel

Conceptual Space Elevator

Continue reading “Method of Sustainable Fuel-less Terra-forming of Venus & Mars” »


Oct 1, 2014

The Abolition of Medicine as a Goal for Humanity 2.0

Posted by in categories: aging, biological, bionic, biotech/medical, ethics, futurism, genetics, homo sapiens, human trajectories, life extension, medical, philosophy, policy, transhumanism

What follows is my position piece for London’s FutureFest 2013, the website for which no longer exists.

Medicine is a very ancient practice. In fact, it is so ancient that it may have become obsolete. Medicine aims to restore the mind and body to their natural state relative to an individual’s stage in the life cycle. The idea has been to live as well as possible but also die well when the time came. The sense of what is ‘natural’ was tied to statistically normal ways of living in particular cultures. Past conceptions of health dictated future medical practice. In this respect, medical practitioners may have been wise but they certainly were not progressive.

However, this began to change in the mid-19th century when the great medical experimenter, Claude Bernard, began to champion the idea that medicine should be about the indefinite delaying, if not outright overcoming, of death. Bernard saw organisms as perpetual motion machines in an endless struggle to bring order to an environment that always threatens to consume them. That ‘order’ consists in sustaining the conditions needed to maintain an organism’s indefinite existence. Toward this end, Bernard enthusiastically used animals as living laboratories for testing his various hypotheses.

Historians identify Bernard’s sensibility with the advent of ‘modern medicine’, an increasingly high-tech and aspirational enterprise, dedicated to extending the full panoply of human capacities indefinitely. On this view, scientific training trumps practitioner experience, radically invasive and reconstructive procedures become the norm, and death on a physician’s watch is taken to be the ultimate failure. Humanity 2.0 takes this way of thinking to the next level, which involves the abolition of medicine itself. But what exactly would that mean – and what would replace it?

Continue reading “The Abolition of Medicine as a Goal for Humanity 2.0” »


Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted by in categories: alien life, biological, cyborg, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
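The expected-value logic behind this claim can be sketched in a few lines. The numbers below are invented purely for illustration (the text gives no figures): a catastrophe whose probability is four orders of magnitude smaller can still dominate the calculation if the stake is large enough.

```python
# Toy expected-loss comparison. Probabilities and loss figures are
# illustrative assumptions, not taken from Bostrom or the text.
risks = {
    # name: (annual probability, loss in arbitrary "value units")
    "regional catastrophe": (1e-2, 1e6),
    "existential catastrophe": (1e-6, 1e12),
}

# Expected loss = probability x magnitude, computed per risk.
expected_losses = {name: p * loss for name, (p, loss) in risks.items()}

for name, el in sorted(expected_losses.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected loss = {el:,.0f}")
```

On these assumed numbers the rarer risk carries a hundred times the expected loss, which is the shape of the argument for devoting resources to it.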

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism” »


Jul 6, 2014

By 2045, Physicist Says ‘The Top Species Will No Longer Be Humans’

Posted by in categories: human trajectories, posthumanism, singularity

Dylan Love — Business Insider

“Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.”

These are the words of Louis Del Monte, physicist, entrepreneur, and author of “The Artificial Intelligence Revolution.” Del Monte spoke to us over the phone about his thoughts surrounding artificial intelligence and the singularity, an indeterminate point in the future when machine intelligence will outmatch not only your own intelligence, but the world’s combined human intelligence too.

Read more

Jun 19, 2014

Mind uploading won’t lead to immortality

Posted by in categories: aging, bionic, biotech/medical, evolution, futurism, human trajectories, life extension, neuroscience, philosophy, posthumanism, robotics/AI, singularity, transhumanism

Uploading the content of one’s mind, including one’s personality, memories and emotions, into a computer may one day be possible, but it won’t transfer our biological consciousness and won’t make us immortal.

Uploading one’s mind into a computer, a concept popularized by the 2014 movie Transcendence starring Johnny Depp, is likely to become at least partially possible, but won’t lead to immortality. Major objections have been raised regarding the feasibility of mind uploading. Even if we could surpass every technical obstacle and successfully copy the totality of one’s mind, emotions, memories, personality and intellect into a machine, that would be just that: a copy, which itself can be copied again and again on various computers.

THE DILEMMA OF SPLIT CONSCIOUSNESS

Neuroscientists have not yet been able to explain what consciousness is, or how it works at a neurological level. Once they do, it might be possible to reproduce consciousness in artificial intelligence. If that proves feasible, then it should in theory be possible to replicate our consciousness on computers too. Or is that jumping to conclusions?

Continue reading “Mind uploading won't lead to immortality” »


Jun 1, 2014

Is it possible to build an artificial superintelligence without fully replicating the human brain?

Posted by in categories: automation, computing, ethics, existential risks, futurism, hardware, human trajectories, neuroscience, robotics/AI, security

The technological singularity requires the creation of an artificial superintelligence (ASI). But does that ASI need to be modelled on the human brain, or is it even necessary to be able to fully replicate the human brain and consciousness digitally in order to design an ASI?

Animal brains and computers don’t work the same way. Brains are massively parallel three-dimensional networks, while computers still process information in a very linear fashion, although millions of times faster than brains. Microprocessors can perform amazing calculations, far exceeding the speed and efficiency of the human brain, using completely different patterns to process information. The drawback is that traditional chips are not good at processing massively parallel data, solving complex problems, or recognizing patterns.
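The contrast can be made concrete with a minimal sketch. An artificial “neuron” is just a weighted sum and a threshold; a brain or neuromorphic chip evaluates enormous numbers of them simultaneously, whereas a conventional processor flattens the same layer into a serial loop, as below. All values here are toy numbers chosen for illustration.

```python
# A single artificial "neuron": weighted sum of inputs, fire if it
# crosses a threshold. Toy weights and inputs are invented.
def neuron(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

inputs = [0.5, 1.0, 0.25]
layer_weights = [
    [1.0, 0.5, 0.0],   # activation 1.0 -> fires
    [0.0, 0.2, 0.2],   # activation 0.25 -> stays silent
    [2.0, 0.0, 4.0],   # activation 2.0 -> fires
]

# A serial computer evaluates the layer one neuron at a time; the
# brain's inner parallelism is flattened into an ordinary loop.
outputs = [neuron(inputs, w) for w in layer_weights]
print(outputs)  # [1, 0, 1]
```

A neuromorphic or optical design would compute every row of that loop at once, which is the architectural gap the paragraph above describes.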

Newly developed neuromorphic chips are modelling the massively parallel way the brain processes information using, among others, neural networks. Neuromorphic computers should ideally use optical technology, which can potentially process trillions of simultaneous calculations, making it possible to simulate a whole human brain.

Continue reading “Is it possible to build an artificial superintelligence without fully replicating the human brain?” »


May 14, 2014

Are you ready for contact with extraterrestrial intelligence?

Posted by in categories: first contact, human trajectories, space, space travel

Kurzweil Accelerating Intelligence

Some SETI (search for extraterrestrial intelligence) scientists are considering “Active SETI” to detect possible extraterrestrial civilizations.

Psychologist Gabriel G. de la Torre, professor at the University of Cádiz (Spain), questions this idea, based on results from a survey taken by students, which revealed a general level of ignorance about the cosmos and the influence of religion on these matters.

Read more

May 14, 2014

New Museum Uses Algorithms To Visualize How 9/11 Still Shapes The World

Posted by in categories: big data, human trajectories, information science

Shaunacy Ferro — Fast Company

The terrorist attacks of September 11, 2001, forever changed the course of world history. More than a decade later, the scope of their impact is still evolving. American troops are still stationed in Afghanistan. Ground Zero workers are still filing for compensation for 9/11-related illnesses.

How exactly to incorporate this unfolding aftermath of the event is one of the major challenges facing the National September 11 Memorial Museum, which opens to the public on May 21. Local Projects, the studio behind the museum’s exhibit design (and the designers of the Ground Zero memorial’s thoughtful naming scheme), approached the task algorithmically.

Read more

May 10, 2014

What to make of the film ‘Transcendence’? Show it in classrooms.

Posted by in categories: 3D printing, augmented reality, bionic, computing, cyborg, disruptive technology, existential risks, fun, futurism, homo sapiens, human trajectories, innovation, nanotechnology, philosophy, posthumanism, privacy, robotics/AI, science, singularity, transhumanism

I recently saw the film Transcendence with a close friend. If you can get beyond Johnny Depp’s siliconised mugging of Marlon Brando and Rebecca Hall’s waddling through corridors of quantum computers, Transcendence provides much to think about. Even though Christopher Nolan of Inception fame was involved in the film’s production, the pyrotechnics are relatively subdued – at least by today’s standards. While this fact alone seems to have disappointed some viewers, it nevertheless enables you to focus on the dialogue and plot. The film is never boring, even though nothing about it is particularly brilliant. However, the film stays with you, and that’s a good sign. Mark Kermode at the Guardian was one of the few reviewers who did the film justice.

The main character, played by Depp, is ‘Will Caster’ (aka Ray Kurzweil, but perhaps also an allusion to Hans Castorp in Thomas Mann’s The Magic Mountain). Caster is an artificial intelligence researcher based at Berkeley who, with his wife Evelyn Caster (played by Hall), is trying to devise an algorithm capable of integrating all of earth’s knowledge to solve all of its problems. (Caster calls this ‘transcendence’ but admits in the film that he means ‘singularity’.) They are part of a network of researchers doing similar things. Although British actors like Hall and the key colleague Paul Bettany (sporting a strange Euro-English accent) are main players in this film, the film itself appears to transpire entirely within the borders of the United States. This is a bit curious, since a running assumption of the film is that if you suspect a malevolent consciousness uploaded to the internet, then you should shut the whole thing down. But in this film at least, ‘the whole thing’ is limited to American cyberspace.

Before turning to two more general issues concerning the film, which I believe may have led both critics and viewers to leave unsatisfied, let me draw attention to a couple of nice touches. First, the leader of the ‘Revolutionary Independence from Technology’ (RIFT), whose actions propel the film’s plot, explains that she used to be an advanced AI researcher who defected upon witnessing the endless screams of a Rhesus monkey while its entire brain was being digitally uploaded. Once I suspended my disbelief in the occurrence of such an event, I appreciated it as a clever plot device for showing how one might quickly convert from being radically pro– to anti-AI, perhaps presaging future real-world targets for animal rights activists. Second, I liked the way in which quantum computing was highlighted and represented in the film. Again, what we see is entirely speculative, yet it highlights the promise that one day it may be possible to read nature as pure information that can be assembled according to need to produce what one wants, thereby rendering our nanotechnology capacities virtually limitless. 3D printing may be seen as a toy version of this dream.

Now on to the two more general issues, which viewers might find as faults, but I think are better treated as what the Greeks called aporias (i.e. open questions):

Continue reading “What to make of the film 'Transcendence'? Show it in classrooms.” »

