
Dec 7, 2015

Can The Existential Risk Of Artificial Intelligence Be Mitigated?

Posted by in categories: ethics, existential risks, futurism, government, human trajectories, robotics/AI

It seems like every day we’re warned about a new, AI-related threat that could ultimately bring about the end of humanity. According to author and Oxford professor Nick Bostrom, those existential risks aren’t so black and white, and an individual’s ability to influence them might surprise you.

Image Credit: TED

Bostrom defines an existential risk as one that threatens the extinction of Earth-originating life or the permanent and drastic destruction of our potential for future development, but he also notes that there is no single methodology applicable to all the different existential risks (as elaborated more technically in this Future of Humanity Institute study). Rather, he considers the work an interdisciplinary endeavor.

“If you’re wondering about asteroids, we have telescopes we can study them with; we can look at past crater impacts and derive hard statistical data on that,” he said. “We find that the risk of asteroids is extremely small, and likewise for a few of the other risks that arise from nature. But other really big existential risks are not in any direct way susceptible to this kind of rigorous quantification.”

In Bostrom’s eyes, the most significant risks we face arise from human activity, particularly the potentially dangerous technological discoveries that await us in the future. Though he believes there is no way to quantify the probability of humanity being destroyed by a super-intelligent machine, he argues that the more important variable is human judgment. To improve assessments of existential risk, Bostrom said we should think carefully about how these judgments are produced and whether the biases that affect them can be avoided.

“If your task is to hammer a nail into a board, reality will tell you if you’re doing it right or not. It doesn’t really matter if you’re a Communist or a Nazi or whatever crazy ideologies you have, you’ll learn quite quickly if you’re hammering the nail in wrong,” Bostrom said. “If you’re wrong about what the major threats are to humanity over the next century, there is no reality check to tell you if you’re right or wrong. Any weak bias you might have might distort your belief.”

Noting that humanity doesn’t really have any policy designed to steer a particular course into the future, Bostrom said many existential risks arise from global coordination failures. While he believes society might one day evolve into a unified global government, the question of when that unification occurs will hinge on individual contributions.

“Working toward global peace may not be the best project, just because it’s very difficult to make a big difference there if you’re a single individual or a small organization. Perhaps your resources would be better put to use if they were focused on some problem that is much more neglected, such as the control problem for artificial intelligence,” Bostrom said. “(For example) do the technical research to figure out how, if we got the ability to create super intelligence, the outcome would be safe and beneficial. That’s where an extra million dollars in funding or one extra very talented person could make a noticeable difference… far more than doing general research on existential risks.”

Looking to the future, Bostrom feels there is an opportunity to do serious research that changes global awareness of existential risks and brings them into a wider conversation. That research doesn’t assume the human condition is fixed, and there is a growing ecosystem of people genuinely trying to figure out how to save the future, he said. As an example of how much influence one person can have in reducing existential risk, Bostrom noted that far more people in history have believed they were Napoleon, yet there was only ever one actual Napoleon.

“You don’t have to try to do it yourself… it’s usually more efficient to each do whatever we specialize in. For most people, the most efficient way to contribute to eliminating existential risk would be to identify the most efficient organizations working on this and then support those,” Bostrom said. “The values on the line, in terms of how many happy lives could exist in humanity’s future, mean that even a very small probability of impact on that would probably be worthwhile pursuing.”

1 Comment — comments are now closed.


  1. Bostrom, not unusually, remains blithely unaware of the evolutionary processes of which we snoutless apes and machines are both part.

    Most folk still seem unable to break free from the traditional science-fiction-based notions involving individual robots/computers/systems, whether as potential threats, beneficial aids or a serious basis for “artificial intelligence”.

    In actuality, the real next cognitive entity quietly self-assembles in the background, mostly unrecognized for what it is. And, contrary to our usual conceits, it is neither stoppable nor directly within our control.

    We are very prone to anthropocentric distortions of objective reality. This is perhaps not surprising, for to instead adopt the evidence-based viewpoint now afforded by “big science” and “big history” takes us way outside our perceptive comfort zone.

    The fact is that the evolution of the Internet (and, of course, major components such as Google) is actually an autonomous process. The difficulty in convincing people of this “inconvenient truth” seems to stem partly from our natural anthropocentric mind-sets and partly from the traditional illusion that in some way we are in control of, and distinct from, nature. Contemplation of the observed realities tends to be relegated to the emotional “too hard” bin.

    This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.

    Virtually all interests are catered for and, in toto, provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we “workers” scurry round mindlessly engaged in her nurture.

    By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.

    The separate issue of whether it will be malignant, neutral or benign towards us snoutless apes is less certain, and this particular aspect I have explored elsewhere.

    Stephen Hawking, for instance, is reported to have remarked “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

    Such statements reflect the narrow-minded approach that is so commonplace among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet, is, and always has been, an autonomous process over which we have very little real control.

    Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.

    This much broader “systems analysis” approach, freed from the anthropocentric notions usually promoted by the cult of the “Singularity”, provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.

    Very real evidence indicates the rather imminent implementation of the next (non-biological) phase of the ongoing evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.

    The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.

    There are at present more than 3 billion Internet users. There are an estimated 10 to 80 billion neurons in the human brain. On this basis of approximation, the Internet is even now only about one order of magnitude below the human brain, and its growth is exponential.

    That is a simplification, of course. For example: not all users have their own computer, so perhaps we could reduce that figure, say, tenfold. On the other hand, the number of switching units (transistors, if you wish) contained in all the computers connecting to the Internet, which are more analogous to individual neurons, is many orders of magnitude greater than 3 billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.
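    For what it’s worth, here is a minimal back-of-envelope sketch in Python of that comparison, using the figures quoted above; the per-machine transistor count is an assumed order of magnitude, not something stated in the comment.

        # Rough comparison of Internet scale to human-brain scale.
        # All figures are order-of-magnitude assumptions taken from (or added to) the text above.
        import math

        internet_users = 3e9                    # ~3 billion users (the 2015 figure cited above)
        neurons_low, neurons_high = 10e9, 80e9  # commenter's range for neurons in a human brain

        # Naive comparison: one user ~ one "node".
        gap_low = math.log10(neurons_low / internet_users)
        gap_high = math.log10(neurons_high / internet_users)
        print(f"Users vs. neurons: {gap_low:.1f} to {gap_high:.1f} orders of magnitude apart")

        # Refinement: assume only ~1 in 10 users has a dedicated machine, but each
        # machine holds vastly more transistors than one neuron (assumed ~1e9 per device).
        machines = internet_users / 10
        transistors_per_machine = 1e9
        total_transistors = machines * transistors_per_machine
        print(f"Estimated transistors online: ~1e{math.log10(total_transistors):.0f}, "
              f"versus ~1e{math.log10(neurons_high):.0f} neurons")

    Under these assumed numbers the raw counts sit within a couple of orders of magnitude of each other, which is all the comparison claims; it says nothing about architecture or interconnection.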

    We see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in at least raw processing power. And, of course, the all-important degree of interconnection and cross-linking of networks, and the supply of sensory inputs, are also growing exponentially.

    We are witnessing the emergence of a new and predominant cognitive entity that is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.

    This is the main theme of my latest book “The Intricacy Generator: Pushing Chemistry and Geometry Uphill”, which is now available as a 336-page illustrated paperback from Amazon, etc.

    Netty, as you may have guessed by now, is the name I have chosen for this emergent non-biological cognitive entity. In the event that we can subdue our natural tendencies to belligerence and form a symbiotic relationship with this new phase of the “life” process, we have the possibility of a bright future.

    If we don’t become aware of these realities and mend our ways, however, then we snoutless apes could indeed be relegated to the historical rubbish bin within a few decades. After all, our infrastructures are becoming increasingly Internet-dependent, and Netty will only need to “pull the plug” to effect pest eradication.

    So it is to our advantage to try to effect the inclusion of desirable human behaviors in Netty’s psyche. In practice that equates to our species firstly becoming aware of our true place in nature’s machinery and, secondly, making a determined effort to “straighten up and fly right”.