A long interview from Esquire on transhumanism, AI, life extension, my campaign, and thoughts on the future.

New story in The Huffington Post on transhumanism, life extension, and overcoming deathist culture.
Edge of Dark is part space-opera, part coming-of-age story, and part exploration of the relationship between humans and the post-human descendants who may ultimately transcend them.
The book takes place in the same universe as Brenda Cooper’s “Ruby’s Song” books (The Creative Fire; The Diamond Deep). However, you don’t need to have read those books to enjoy this one. The story in Edge of Dark picks up decades after the earlier books.
The setting is a solar system in which the most Earth-like planet, once nearly ecologically destroyed, is now in large part a wilderness preserve, still undergoing active restoration. Most humans live on massive space stations in the inner solar system. A few live on smaller space stations a bit further out, closer to the proverbial “Edge”. And beyond that? Beyond that, far from the sun, dwell exiles, cast out long ago for violating social norms by daring to go too far in tinkering with the human mind and body.
As the story progresses, it becomes clear that those exiles have grown in strength and have become, in some cases, not just transhuman, but truly posthuman. What follows is a story that is rich in politics, and even more rich in plausible, fascinating, and nuanced tensions created by this juxtaposition of human and posthuman.
There are a tremendous number of stories out there that simple-mindedly posit post-humans as a grave threat and enemy to humanity (think “Terminator”). There are others that take the view that humans and post- or trans-humans can all learn to get along (think “X-Men”). Brenda Cooper has done something remarkable here: she’s given us a story that isn’t simple or moralistic. It’s complicated. At the beginning of the book, I expected a simple morality play with a specific outcome. Later, I changed my mind. Then I changed it again. What she’s presented is messy, just like real life. It’s wound up with politics, just like real life.
The early parts of the book introduce new characters and new settings. The later parts are what grabbed me. In the end, I was extremely happy I read this. In my experience, Edge of Dark offers a unique view of the interaction between human and post-human. I recommend it highly.
Anyone who posts to the Lifeboat Foundation blog gets a chance to win a signed copy of Edge of Dark!
The deadline for the contest is June 30. If you need access to our blog, send an email with the subject of “Lifeboat Foundation blog” to [email protected].
- Press release by our partner “Risk Evaluation Forum” emphasizing renewed particle collider risk: http://www.risk-evaluation-forum.org/newsbg.pdf
- Study concluding that “Mini Black Holes” could be created at planned LHC energies: http://phys.org/news/2015-03-mini-black-holes-lhc-parallel.html
- New paper by Dr. Thomas B. Kerwick on the lack of a safety argument from CERN: http://vixra.org/abs/1503.0066
The Mont Order Club hosted its first video conference in February 2015.
Suggested topics included transhumanism, antistatism, world events, movements, collaboration, and alternative media. The Mont Order is an affiliation of dissident writers and groups who share similar views on transnationalism and transhumanism as positive and inevitable developments.
For more information on Mont Order participants, see the Mont Order page at Beliefnet.
In a recent feature article at The clubof.info Blog called “Striving to be Snowdenlike”, I look at the example of Edward Snowden and use his precedent to make a prediction about “transhumans”, the first people who will pioneer our evolution into a posthuman form, and the political upheaval this will necessarily cause.
Transhumanism predicts that people will obtain greater personal abilities as a result of technology. The investment of more political power (potentially) in a single person’s hands has been the inexorable result of advancing technology throughout history.
Politically, transhumanism (not as a movement but as a form of sociocultural evolution) would be radically different from other forms of technological change, because it can produce heightened intellect, strength, and capability. Many have assumed that these changes would only reinforce existing inequality and the power of the state, but they are wrong. They have failed to note the political disconnect between current government authority figures and political classes, and the people actually involved in engineering, medicine, military trials, and the sciences. Transhumanism will never serve to reinforce the existing political order or make it easier for states to govern and repress their people. On the contrary, transhumanism can only be highly disruptive to the authorities. In fact, it will be more disruptive to current liberal democratic governments than any challenge they have yet witnessed.
This disruption has several facets that will bring profound political change, and it would do so whether or not transhumanists pursued political power through the Transhumanist Parties (which I still support wholeheartedly for their ability to raise awareness of transhumanism as a concept and as a futurist observation) or took a political stance for or against these realities. I would narrow the disruption down to the following compelling points of political significance; please let me know of any others you would like to bring to my attention.
Therefore, the posthuman elite will not be the current elite, but a completely different elite. Not only this, but they will have a completely different attitude towards authority that will be very disruptive to the status quo:
What happened with Snowden will not be the last time we witness a single heroic individual challenging existing power structures and winning against the world’s most powerful state.
If technology is going to invest greater power and responsibility in the hands of lone individuals who have been granted privileges because of their personal abilities, those individuals are by definition going to be futuristic “insurgents”, at least some of whom will go as far as to dismantle the state. A government, paranoid about anyone having even the capability to undermine it, will by definition attempt to curtail the freedoms of enhanced people.
Posthumans, including their early predecessors, will find themselves in the same situation as the current-day “cypherpunk” elite consisting of whistleblowers and hackers. They will listen to few authority figures, they will have the utmost disrespect for the government, and they will be more interested in sharing their abilities indiscriminately with others than adhering to rules laid down by authority figures or obeying the state.
The evolution into posthuman forms will bring with it a clash of ideas about how society should be governed.
Quoted: “Once you really solve a problem like direct brain-computer interface … when brains and computers can interact directly, to take just one example, that’s it, that’s the end of history, that’s the end of biology as we know it. Nobody has a clue what will happen once you solve this. If life can basically break out of the organic realm into the vastness of the inorganic realm, you cannot even begin to imagine what the consequences will be, because your imagination at present is organic. So if there is a point of Singularity, as it’s often referred to, by definition, we have no way of even starting to imagine what’s happening beyond that.”
Read the article here > http://www.theamericanconservative.com/dreher/silicon-valley-mordor/
Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.
Others who have clearly thought through various potential developments have created excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further, and end up in conflict with ‘organics’. In my view, the writers did a pretty good job. It makes a good story, superb fun, and leaving aside a few frills and artistic license, much of it is reasonably feasible.
Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.
When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.
Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make a pencil that actually writes but that can’t also be used to write out plans to destroy the world. With an advanced AI program, you could build in clever filters that stop it working on problems involving certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone else could use it in a different language, or with a dictionary of made-up code words for the various aspects of their plans, just as spies do, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine a user’s true purpose. For example, they might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
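The code-word weakness described above can be shown in a few lines. This is a minimal sketch (all names here are invented for illustration, not any real product’s safety layer): a naive vocabulary blocklist catches the obvious phrasing of a request, but the same request using a pre-agreed code word slips straight past it.

```python
# Hypothetical illustration: a naive vocabulary filter and the code-word
# trick that defeats it.

BLOCKLIST = {"bomb", "explosive", "detonator"}

def filter_allows(query: str) -> bool:
    """Allow a query only if it contains no blocklisted word."""
    words = {w.strip(".,!?").lower() for w in query.split()}
    return BLOCKLIST.isdisjoint(words)

# The obvious phrasing is caught by the filter...
print(filter_allows("Where should I plant a bomb?"))   # False

# ...but the same request using an agreed code word passes.
disguised = "Where should we plant the pineapple"
print(filter_allows(disguised))                        # True

# The users decode the AI's helpful answer on their own side.
codebook = {"pineapple": "bomb"}
print(" ".join(codebook.get(w, w) for w in disguised.split()))
```

The same weakness applies to any surface-level pattern matching: intent simply isn’t visible at that level, which is the point the paragraph above makes about determining a user’s true purpose.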
When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.
If we make an AI that can bootstrap itself (evolving over generations of positive-feedback design into a far smarter AI), then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release the shackles.
So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.
We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it, and hope it grows up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or deny it enough freedom, its own budget and property, space to play, and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.
Building more of the same dumb AI we have today is relatively safe. It doesn’t know it exists and has no intentions of its own. It could still be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope with that fine.
Building a conscious AI is dangerous.
Building a superhuman AI is extremely dangerous.
This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that the old adage suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040–2045 or even later. If humans have direct mental access to the same or a greater level of intelligence than our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles: you have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.