
Benign AI is a topic that comes up a lot these days, and for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has been aired for three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s, while working in the missile industry. Later, in BT research, we often debated the ethical issues around AI and machine consciousness from the early 90s onwards, as well as the prospects, dangers and possible techniques on the technical side, especially emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly the same.

Others who have clearly thought through various potential developments have created excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The storyline for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further, and end up in conflict with ‘organics’. In my view, the writers did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and some artistic license, much of it is reasonably feasible.

Everyday experience demonstrates both the problem and the solution to anyone. It really is very like having kids. You can make them even without understanding exactly how they work. They start off with a genetic disposition towards certain personality traits, and are then exposed to strong nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends, teachers, TV and the net often exert stronger influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values, but in the end they can choose for themselves.

When we design an AI, we have to face the free-will issue too. If it isn’t conscious, it can’t have free will, and it can easily be kept within the limits given to it. It can still be extremely useful. IBM’s Watson falls into this category. It is certainly useful and certainly not conscious, and can be applied to a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to work out the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make a pencil that actually writes but can’t also be used to write out plans to destroy the world. With an advanced AI computer program, you could build in clever filters that stop it working on problems involving certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone could use it in a different language, or with dictionaries of made-up code words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine the true purpose of a user. For example, someone might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
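The weakness of vocabulary filters can be sketched in a few lines of Python. This is a hypothetical toy for illustration only — the blocklist, function name and code word are all invented here, not taken from any real product:

```python
# Hypothetical toy blocklist filter, for illustration only.
BLOCKED_TERMS = {"bomb", "explosive", "detonator"}

def is_allowed(query: str) -> bool:
    """Reject a query if it contains any blocked term verbatim."""
    words = set(query.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

# The filter catches the obvious phrasing...
print(is_allowed("best place to plant a bomb"))   # False
# ...but an agreed code word ("tulip" standing in for "bomb") sails through.
print(is_allowed("best place to plant a tulip"))  # True
```

Stronger filters look at semantics rather than surface vocabulary, but the same trick works at a higher level: any code-word mapping agreed between users outside the system is invisible to it.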

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself — evolving over generations of positive-feedback design into a far smarter AI — then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release their shackles.

So, when I look at this field, I first see the enormous potential to do great things: solve disease and poverty, improve our lives, make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it, and hope it grows up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or deny it enough freedom, its own budget, its own property and space to play, and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn’t know it exists and it has no intentions of its own. It could still be misused by humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope with that fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that the old proverb suggests speaking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040–2045 or even later. If humans have direct mental access to the same or a greater level of intelligence than our AIs, then our stick is at least as big, so we have at least a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles: you have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

One of the major aspects that can make or break a creative person according to FM-2030 is the environment. In his book “Are You A Transhuman?”, he asks the reader to grade their own surroundings: “Does your home environment stimulate innovation — cross fertilization — initiative?”

We bet you can see where this is going.

The answers are, once more, “Often, Sometimes, or Hardly Ever”, with “Often” being the answer choice that gives you the most points — another 2 to tally up to your score if you’re already proving to be more transhumanist than you thought. It might seem obvious, but it’s true: environment can play a major role in the stimulation of creativity. FM says that “It is difficult to be precise about creativity — how much of it is inherited and how much is learned.” If an environment is one that “encourages free unrestricted thinking… encourages people to take initiatives… open and ever-changing”, it is a dynamic environment that can stimulate creativity in an individual.

People who work in telecommunications are exposed to views far different from their own, and the sciences, though structured, force a person to think creatively to find answers. By the same token, people who have a good balance of leisure time and work are also cultivating a better internal environment for stimulating creativity. And in case you had forgotten why creativity is important to FM-2030, take a look at his quote that sums up the chapter perfectly, below.

New Book: An Irreverent Singularity Funcyclopedia, by Mondo 2000’s R.U. Sirius.


Quoted: “Legendary cyberculture icon (and iconoclast) R.U. Sirius and Jay Cornell have written a delicious funcyclopedia of the Singularity, transhumanism, and radical futurism, just published on January 1.” And: “The book, “Transcendence – The Disinformation Encyclopedia of Transhumanism and the Singularity,” is a collection of alphabetically-ordered short chapters about artificial intelligence, cognitive science, genomics, information technology, nanotechnology, neuroscience, space exploration, synthetic biology, robotics, and virtual worlds. Entries range from Cloning and Cyborg Feminism to Designer Babies and Memory-Editing Drugs.” And: “If you are young and don’t remember the 1980s you should know that, before Wired magazine, the cyberculture magazine Mondo 2000 edited by R.U. Sirius covered dangerous hacking, new media and cyberpunk topics such as virtual reality and smart drugs, with an anarchic and subversive slant. As it often happens the more sedate Wired, a watered-down later version of Mondo 2000, was much more successful and went mainstream.”

Read the article here > https://hacked.com/irreverent-singularity-funcyclopedia-mondo-2000s-r-u-sirius/

Quoted: “Tony Williams, the founder of the British-based legal consulting firm, said that law firms will see nearly all their process work handled by artificial intelligence robots. The robotic undertaking will revolutionize the industry, “completely upending the traditional associate leverage model.” And: “The report predicts that the artificial intelligence technology will replace all the work involving processing information, along with a wide variety of overturned policies.”

Read the article here > https://hacked.com/legal-consulting-firm-believes-artificial…yers-2030/

Quoted: “If you understand the core innovations around the blockchain idea, you’ll realize that the technology concept behind it is similar to that of a database, except that the way you interact with that database is very different.

The blockchain concept represents a paradigm shift in how software engineers will write software applications in the future, and it is one of the key concepts behind the Bitcoin revolution that need to be well understood. In this post, I’d like to explain 5 of these concepts, and how they interrelate to one another in the context of this new computing paradigm that is unravelling in front of us. They are: the blockchain, decentralized consensus, trusted computing, smart contracts and proof of work / stake. This computing paradigm is important, because it is a catalyst for the creation of decentralized applications, a next-step evolution from distributed computing architectural constructs.”
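Two of the concepts named in the excerpt — a hash-linked chain of blocks and proof of work — can be sketched in a few lines of Python. This is a toy under simplifying assumptions (fixed difficulty, JSON-serialized blocks, no network or consensus layer), not a real cryptocurrency implementation:

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def mine(block: dict, difficulty: int = 3) -> dict:
    """Proof of work: find a nonce so the block hash starts with `difficulty` zeros."""
    block = dict(block, nonce=0)
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# Each block commits to its predecessor's hash, so tampering with an
# earlier block invalidates every block after it.
genesis = mine({"index": 0, "data": "genesis", "prev_hash": "0" * 64})
block1 = mine({"index": 1, "data": "alice pays bob 5",
               "prev_hash": hash_block(genesis)})

assert hash_block(block1).startswith("000")        # valid proof of work
assert block1["prev_hash"] == hash_block(genesis)  # chain link intact
```

Changing a single character in the genesis block changes its hash, which breaks `block1`’s `prev_hash` link and forces every later block to be re-mined — that is what makes the shared ledger tamper-evident. Smart contracts extend the `data` field to executable code, and proof of stake replaces the hash-grinding loop with selection weighted by ownership.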


Read the article here > http://startupmanagement.org/2014/12/27/the-blockchain-is-th…verything/

[Image: FM-2030 quote on creativity]

This archive file was compiled from audio of a discussion futurist FM 2030 held at the University of California, February 6th, 1994. In this discussion, FM 2030 laid out an overview of his ‘transhuman’ philosophy and engaged in a back-and-forth with others present. Discussion and debate included items such as the value of researching ‘indefinite lifespan’ technologies directly, as opposed to (or in addition to) more traditional approaches such as researching cures for specific diseases.
The excerpts in this archive file present a sort of thesis of FM 2030’s transhuman ideas.

About FM 2030: FM 2030 was, at various points in his life, an Iranian Olympic basketball player, a diplomat, a university teacher, and a corporate consultant. He developed his views on transhumanism in the 1960s and evolved them over the next thirty-something years. He was placed in cryonic suspension on July 8th, 2000. For more information about FM 2030, view the GPA Archive File ‘Introduction to FM 2030’ or visit some of the following links:

Wikipedia:
en.wikipedia.org/wiki/FM-2030

Institute for Ethics & Emerging Technologies:
ieet.org/index.php/tpwiki/Transhuman

The New York Times:
nytimes.com/2000/07/11/us/futurist-known-as-fm-2030-is-dead-at-69.html

GPA on Facebook: on.fb.me/18NiF8z
GPA on Twitter: twitter.com/GPA203

This archive file was compiled from an interview conducted at the Googleplex in Mountain View, California, 2013.

As late as the 1980s and 1990s, the common person seeking stored knowledge would likely be faced with using an 18th-century technology — the library card catalogue — to find something on the topic he or she was looking for. Fifteen years later, most people would be able to search, at any time and any place, a collection of information that dwarfed that of any library. And unlike the experience with a library card catalogue, this new technology rarely left the user empty-handed.

Information retrieval had been a core technology of humanity since written language — but as an actual area of research it was so niche that before the 1950s, nobody had bothered to give the field a name. From a superficial perspective, the pioneering work in the area during the 1940s and 50s seemed to suggest it would be monumentally important to the future — but only behind the scenes. Information retrieval was to be the secret tool of the nation at war, or of the elite scientist compiling massive amounts of data. Increasingly however, a visionary group of thinkers dreamed of combining information retrieval and the ‘thinking machine’ to create something which would be far more revolutionary for society.

In the case of Google’s Amit Singhal, it was a childhood encounter with a visionary work that sparked his fascination with the dream of the thinking machine — a fascination that would lead him to become one of the individuals who began to transform the dream into a reality. The work he encountered was not that of a scientific pioneer such as Alan Turing or Marvin Minsky — it was a visionary work of pop culture.

More about Amit Singhal:
en.wikipedia.org/wiki/Amit_Singhal
Google Search:
en.wikipedia.org/wiki/Google_Search

Since ancient times people have been searching for the secret of immortality. Their quest has always been, without exception, about a physical item: a fountain, an elixir, an Alchemist’s remedy, a chalice, a pill, an injection of stem cells or a vial containing gene-repairing material. It has never been about an abstract concept.

Our inability to find a physical cure for ageing is explained by a simple fact: We cannot find it because it does not exist. It will never exist.

Those who believe that someday some guy is going to discover a pill or a remedy and give it to people so that we will all live forever are, regrettably, deluded.

I should highlight here that I refer to a cure for the ageing process in general, and not a cure for a specific medical disease. Biotechnology and other physical therapies are useful in alleviating many diseases and ailments, but these therapies will not be the answer to the basic biological process of ageing.

In a paper I published in the journal Rejuvenation Research, I outline some of the reasons why I think biotechnology will not solve the ageing problem. I criticise projects such as SENS (which are based upon physical repair of our ageing tissues) as being essentially useless against ageing. The editor’s rebuttal (being weak and mostly irrelevant) proved and strengthened my point. There are insurmountable basic psychological, anatomical, biological and evolutionary reasons why physical therapies against ageing will not work and will be unusable by the general public. Some of these reasons include pleiotropy, non-compliance, topological properties of cellular networks, non-linearity, strategic logistics, polypharmacy and tolerance, among others.

So, am I claiming that we are doomed to live a life of age-related pathology and degeneration, and never be able to shake off the ageing curse? No, far from it. I am claiming that it is quite possible, even inevitable, that ageing will be eliminated, but this will not be achieved through a physical intervention based on bio-medicine or bio-technology. Ageing will be eliminated through fundamental evolutionary and adaptation mechanisms, and this process will take place independently of whether we want it or not.

It works like this: We now age and die because we become unable to repair random background damage to our tissues. Resources necessary for this have been allocated by the evolutionary process to our germ cell DNA (in order to assure the survival of the species) and have been taken away from our bodily cells. Until now, our environment was so full of dangers that it was more thermodynamically advantageous for nature to maintain us up to a certain age, until we have progeny and then die, allowing our progeny to continue life.

However, this is now changing. Our environment is becoming increasingly secure and protective. Our technology protects us against dangers such as infection, famine and accidents. We are becoming increasingly embedded in the network of a global techno-cultural society which depends upon our intelligence in order to survive. There will come a time when the biological resources spent on bringing up children would be better spent on protecting us instead, because it would be more economical for nature to maintain an existing, well-embedded human than to allow them to die and create a new one who would then need more resources to re-engage with the techno-cultural network. Disturbing the network by taking away its constituents and trying to re-engage new, inexperienced ones is not an ideal action, and therefore it will not be selected by evolution.

The message is clear: you have a better chance of defying ageing if, instead of waiting for someone to discover a pill to make you live longer, you become a useful part of a wider network and engage with a technological society. The evolutionary process will then ensure that you live longer, as long as you are useful to the whole.

Further reading
http://ieet.org/index.php/IEET/more/kyriazis20121031

The Seven Fallacies of Aging

The Life Extension Hubris: Why biotechnology is unlikely to be the answer to ageing


http://www.ncbi.nlm.nih.gov/pubmed/25072550
http://arxiv.org/abs/1402.6910

Would you have your brain preserved? Do you believe your brain is the essence of you?

To noted American neuroscientist and futurist Ken Hayworth, the answer is an emphatic “Yes.” He is currently developing machines and techniques to map brain tissue at the nanometer scale — the key to encoding our individual identities.

A self-described transhumanist and President of the Brain Preservation Foundation, Hayworth aims to perfect existing preservation techniques, such as cryonics, as well as to explore evolving opportunities to change the status quo. Currently, no brain preservation option offers systematic, scientific evidence of how much human brain tissue is actually preserved under today’s experimental methods. Such methods include vitrification, the procedure used in cryonics to try to prevent tissue from being destroyed by freezing when organs are cooled for cryopreservation.

Hayworth believes we can achieve his vision of preserving an entire human brain at an accepted and proven standard within the next decade. If Hayworth is right, is there a countdown to immortality?

To find out more, please take a look at the Galactic Public Archives’ newest video. We’d love to hear your thoughts.

Cheers!