Special Report

Kinds of Minds

by Lifeboat Foundation Scientific Advisory Board member J. Storrs “Josh” Hall.
 
Beyond AI, says Ray Kurzweil, is “a must-read for anyone interested in the future of the human-machine civilization.” In this excerpt, Hall suggests a classification of the stages an AI might go through, from “hypohuman” (most existing AIs) to “hyperhuman” (similar to “superintelligence”).
 
This is chapter 15 of Beyond AI: Creating the Conscience of the Machine.
 
“Perhaps our questions about artificial intelligence are a bit like inquiring after the temperament and gait of a horseless carriage.”
 
K. Eric Drexler
 

Classifications


Now we will classify the different stages AI might go through by using Greek prepositions, which have been adopted into English as prefixes, particularly in scientific usage. Some of these concepts have been applied to advancing AI before; others have not. The reason for introducing these new terms is that they provide a framework that puts any given level of expected AI capability in perspective vis-à-vis the other levels, and in comparison to human intelligence.
 
 

Hypohuman AI


Hypo means below or under (think hypodermic, under the skin; hypothermia or hypoglycemia, below normal temperature or blood sugar); in the original Greek it also meant being under someone’s moral or legal subjection. Isaac Asimov’s robots are (mostly) hypohuman in both senses of hypo: they are not quite as smart as humans, and they are subject to our rule. Most existing AI is arguably hypohuman as well (Deep Blue notwithstanding). As long as it stays that way, the only thing we have to worry about is human idiots putting their AI idiots in charge of things neither understands. All the discussion of formalist float applies, especially the part about feedback.
 
 

Diahuman AI


Dia means through or across in Greek (diameter, diagonal), and the Latin trans means the same thing, but the commonly heard transhuman doesn’t apply here. Transhuman refers to humans as opposed to AIs, humans who have been enhanced (by whatever means) and are in a transitional state between human and fully posthuman, whatever that may be. Neither concept is very useful here.
 
By diahuman, I mean AIs in the stage where AI capabilities are crossing the range of human intelligence. It’s tempting to call this human-equivalent, but the idea of equivalence is misleading. It’s already apparent that some AI abilities (e.g., chess playing) are beyond the human scale, while others (e.g., reading and writing) haven’t reached it yet.
 
Thus diahuman refers to a phase of AI development (and only by extension to an individual AI in that phase), and this is fuzzy because the limits of human (and AI) capability are fuzzy. It’s hard to say which capabilities are important in the comparison. I would claim that AI is entering the early stages of the diahuman phase right now; there are humans who, like today’s AIs, don’t learn well and who function competently only at simple jobs for which they must be trained.
 
The core of the diahuman phase, however, will be the development of autogenous learning. In the latter stages, AIs, like the brightest humans, will be completely autonomous, not only learning what they need to know but also deciding what they need to learn.
 
Diahuman AIs will be valuable and will undoubtedly attract significant attention and resources to the AI enterprise. They are likely to cause something of a stir in philosophy and perhaps religion as well. However, they will not have a significant impact on the human condition. (The one exception might be economic, in the case that diahuman AI lingers so long that Moore’s law makes human-equivalent robots very cheap compared to human labor. But I’m assuming that we will probably have advanced past the diahuman stage by then.)
 
 

Parahuman AI


Para means alongside (paralegal, paramedic). The concept of designing a system that a human is going to be part of dates back to cybernetics (although all technology throughout history had to be designed so that humans could operate it, in some sense).
 
Parahuman AI will be built around more and more sophisticated theories of how humans work. The PC of the future ought to be a parahuman AI. MIT roboticist Cynthia Breazeal’s sociable robots are the likely forerunners of a wide variety of robots that will interact with humans in many kinds of situations.
 
The upside of parahuman AI is that it will enhance the interface between our native senses and abilities, adapted as they are for a hunting and gathering bipedal ape, and the increasingly formalized and mechanized world we are building. The parahuman AI should act like a lawyer, a doctor, an accountant, and a secretary, all with deep knowledge and endless patience. Once AI and cognitive science have acquired a solid understanding of how we learn, parahuman AI teachers could be built that model in detail how each individual student absorbs the material, ultimately finding the optimal presentation for understanding and motivation.
 
The downside is simply the same effect put to work with slimier motives: the parahuman advertising AI, working for corporations or politicians, could know just how to tweak your emotions and gain your trust without actually being trustworthy. It would be the equivalent of an individualized artificial con man. Note, by the way, that of the two human elements in the original cybernetic anti-aircraft control theory, one, the pilot of the plane being shot at, didn’t want to be part of the system but was, willy-nilly.
 
Parahuman is a characterization that does not specify a level of intellectual capability compared to humans; it can be properly applied to AIs at any level. Humans are fairly strongly parahuman intelligences as well; many of our innate skills involve interacting with other humans. Parahuman can be largely contrasted with the following term, allohuman.
 
 

Allohuman AI


Allo means other or different (allomorph, allonym, allotrope). Although I have argued that human intelligence is universal, there remains a vast portion of our minds that is distinctively human. This includes the genetically programmed representation modules, the form of our motivations, and the sensory modalities, of which several are fairly specific to running a human body.
 
It will certainly be possible to create intelligences that, while universal, nevertheless have different lower-level hardwired modalities for sense and representation, and a different higher-level motivational structure. One simple possibility is that a universal mechanism may stand in for a much greater portion of the cognitive mechanism, so that, for example, the AI would use learned physics instead of instinctive concepts and learned psychology instead of our folk models.
 
Such differences could reasonably make the AI better at certain tasks; consider the ability to do voluminous calculations in your head. However, if you have ever watched an experienced accountant manipulate a calculator, you can see that the numbers almost flow through his fingers. Built-in modalities may provide some increment of effectiveness compared to learned ones, but not as much as you might think. Consider reading — it’s a learned activity, and unlike talking, we don’t just “pick it up.” But with practice, we read much faster than we can talk or understand spoken language.
 
Motivations and the style and volume of communication could also differ markedly from the human model. The allohuman AI might resemble Mr. Spock, or it might resemble an intelligent ant. These differences, rather than the varying modalities, will likely form the bulk of the difference between allohuman AIs and humans.
 
Like parahuman, allohuman does not imply a given level of intellectual competence. In the fullness of time, however, the parahuman/allohuman distinction will make less and less difference. More advanced AIs, whether they need to interact with humans or to do something weirdly different, will simply obtain or deduce whatever knowledge is necessary and synthesize the skills on the fly.
 
 

Epihuman AI


Epi means upon or after (epidermis, epigram, epitaph, epilogue). I’m using it here in a combination of senses to mean AI that is just above the range of individual human capabilities but still forms a continuous range with them, and also in the sense of what comes just after diahuman AI. That gives us a useful distinction from the further-out possibilities. (See hyperhuman below.)
 
Science fiction writer Charles Stross introduced the phrase “weakly godlike AI”. Weakly presumably refers to the fact that such AIs would still be bound by the laws of physics — they couldn’t perform miracles, for example. As a writer, I’m filled with admiration for the phrase: weakly and godlike have such contrasting meanings that it forces you to think when you read it for the first time. And weakly is often used in a similar way, with various technical meanings, in scientific discourse, giving a vague sense of rigor (!) to the phrase.
 
The word posthuman is often used to describe what humans may be like after various technological enhancements. Like transhuman, posthuman is generally used for modified humans instead of synthetic AIs.
 
My model for what an epihuman AI would be like is to take the ten smartest people you know, remove their egos, and duplicate them a hundred times, so that you have a thousand really bright people, all willing to apply themselves to the same project. Alternatively, simply imagine a very bright person given a thousand times as long to do any given task. We can straightforwardly predict from Moore’s law that ten years after the advent of a learning but not radically self-improving human-level AI, the same software running on machinery of the same cost would do the same human-level tasks a thousand times as fast as we can (a back-of-the-envelope check of this arithmetic follows the list). It could, for example:
  • read an average book in one second with full comprehension;
  • take a college course and do all the homework and research in ten minutes;
  • write a book, again with ample research, in two or three hours;
  • produce the equivalent of a human’s lifetime intellectual output, complete with all the learning, growth, and experience involved, in a couple of weeks.
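The thousandfold figure is just compound doubling: ten years of Moore’s law at one doubling per year gives 2^10 = 1024, or roughly a thousand. Here is a minimal sketch of the arithmetic in Python, assuming a one-year doubling period and human task baselines backed out from the list above; both are my illustrative assumptions, not figures from the book.

# Back-of-the-envelope check of the thousandfold speedup claim.
def speedup(years: float, doubling_time_years: float = 1.0) -> float:
    """Factor by which fixed-cost hardware improves after `years` of doublings."""
    return 2 ** (years / doubling_time_years)

factor = speedup(10)  # 2**10 = 1024, roughly a thousandfold

# Human task durations in hours, backed out from Hall's list
# (illustrative guesses, not figures from the book):
human_hours = {
    "read an average book": 0.3,                   # -> about 1 second
    "college course, homework and research": 170,  # -> about 10 minutes
    "write a well-researched book": 2500,          # -> about 2.4 hours
    "a lifetime of intellectual output": 350_000,  # ~40 years -> ~2 weeks
}

for task, hours in human_hours.items():
    print(f"{task}: {hours:g} h for a human, {hours / factor:.4g} h for the AI")

Only the book-reading entry requires an unusually fast human baseline (about twenty minutes per book); the other three scale cleanly from ordinary human figures.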
A thousand really bright people are enough to do some substantial and useful work. An epihuman AI could probably command an income of $100 million or more in today’s economy by means of consulting and entrepreneurship, and it would have a net present value in excess of $1 billion. Even so, it couldn’t take over the world or even an established industry. It could probably innovate well enough to become a standout in a nascent field, though, as Google did.
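Those two figures are consistent with valuing the income stream as a simple perpetuity. The 10 percent discount rate in the sketch below is my assumption; the book gives only the income and the valuation.

def perpetuity_npv(annual_income: float, discount_rate: float) -> float:
    """Net present value of a constant annual income continued indefinitely."""
    return annual_income / discount_rate

print(perpetuity_npv(100e6, 0.10))  # 1000000000.0, i.e. $1 billion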
 
A thousand top people is a reasonable estimate for what the current field of AI research is applying to the core questions and techniques — basic, in contrast to applied, research. Thus an epihuman AI could probably improve itself about as fast as current AI is improving. Of course, if it did that, it wouldn’t be able to spend its time making all that money; the opportunity cost is pretty high. It would need to make exactly the same kind of decision that any business faces with respect to capital reinvestment.
 
Whichever it chooses, the epihuman level characterizes an AI that is able to stand in for a fairly sizable company or for an entire field of academic inquiry. As more and more epihuman AIs appear, they will enhance economic and scientific growth, so that by the later stages of the phase the total stock of wealth and knowledge will be significantly higher than it would have been without the AIs. AIs will be a significant sector, but no single AI will be able to rock the boat to a great degree.
 
 

Hyperhuman AI


Hyper means over or above. In common use as an English prefix, hyper tends to denote a greater excess than super, which means the same thing but comes from Latin instead of Greek. (Contrast, e.g., supersonic, more than Mach 1, and hypersonic, more than Mach 5.)
 
In the original Singularity paper, The Coming Technological Singularity, Vernor Vinge used the phrase superhuman intelligence. Nick Bostrom has used the term superintelligence. Like some of the terms above, however, superhuman has a wide range of meanings (think about Kryptonite), and most of them are not applicable to the subject at hand. We will stay with our Greek prefixes and finish the list with hyperhuman.
 
Imagine an AI that is a thousand epihuman AIs, all tightly integrated. Such an intellect would be capable of substantially outstripping the human scientific community at any given task and of comprehending the entirety of scientific knowledge as a unified whole. A hyperhuman AI would soon begin to improve itself significantly faster than humans could. It could spot the gaps in science and engineering where there was low-hanging fruit and instigate rapid increases in technological capability across the board.
 
Even in the scientific community, it is poorly understood just how much headroom for improvement remains beyond the capabilities of current physical technology. A mature nanotechnology, for example, could replace the entire capital stock — all the factories, buildings, roads, cars, trucks, airplanes, and other machines — of the United States in a week. And that’s just using currently understood science, with a dollop of engineering development thrown in.
 
Any sufficiently advanced technology, Arthur C. Clarke wrote, is indistinguishable from magic. Although, I believe, any specific thing the hyperhuman AIs might do could be understood by humans, the total volume of work and the rate of advance would become harder and harder to follow. Please note that any individual human is already in a similar relationship with the whole scientific community; our understanding of what is going on is getting more and more abstract. The average person understands cell phones at the level of knowing that batteries have limited lives and coverage has gaps, but not at the level of field-effect transistor gain figures and conductive-trace electromigration phenomena.
 
Ten years ago the average scientist, much less the average user, could not have predicted that most cell phones would contain cameras and color screens today. But we can follow, if not predict, by understanding things at a very high level of abstraction, as if they were magic.
 
Any individual hyperhuman AI would be productive, intellectually or industrially, on the scale of the human race as a whole. As the number of hyperhuman AIs increased, our efforts would shrink to more and more modest proportions of the total.
 
Where does an eight-hundred-pound gorilla sit? According to the old joke, anywhere he wants to. Much the same thing will be true of a hyperhuman AI, except in instances where it has to interact with other AIs. The really interesting question then will be, what will it want?