
A (Relatively) Brief Introduction to The Principles of Economics & Evolution: A Survival Guide for the Inhabitants of Small Islands, Including the Inhabitants of the Small Island of Earth



“Who are you?” A simple question sometimes requires a complex answer. When a Homeric hero is asked who he is…, his answer consists of more than just his name; he provides a list of his ancestors. The history of his family is an essential constituent of his identity. When the city of Aphrodisias… decided to honor a prominent citizen with a public funeral…, the decree in his honor identified him in the following manner:

Hermogenes, son of Hephaistion, the so-called Theodotos, one of the first and most illustrious citizens, a man who has as his ancestors men among the greatest and among those who built together the community and have lived in virtue, love of glory, many promises of benefactions, and the most beautiful deeds for the fatherland; a man who has been himself good and virtuous, a lover of the fatherland, a constructor, a benefactor of the polis, and a savior.
– Angelos Chaniotis, In Search of an Identity: European Discourses and Ancient Paradigms, 2010

I realize many may not have the time to read all of this post — let alone the treatise it introduces — so for those with just a few minutes to spare, consider abandoning the remainder of this introduction and spending a few moments with a brief narrative which distills the very essence of the problem at hand: On the Origin of Mass Extinctions: Darwin’s Nontrivial Error.

But for those with the time and inclinations for long and windy paths through the woods, please allow me to introduce myself: I was born and raised in Kentland, Indiana, a few blocks from the train station where my great-great grandfather, Barney Funk, arrived from Germany, on Christmas day of 1859. I completed a BSc in Entrepreneurship and an MFA in film at USC, and an MA in Island Studies at UPEI. I am a naturalist, Fellow of The Linnean Society of London, PhD candidate in economics at the University of Malta, hunter & fisherman, NRA member, protective father, and devoted husband with a long, long line of illustrious ancestors, a loving mother & father, extraordinary brothers & sister, wonderful wife, beautiful son & daughter, courageous cousins, and fantastic aunts, uncles, in-laws, colleagues, and fabulous friends!

Thus my answer to the simple question, “Who are you?” requires a somewhat complex answer as well.

But time is short and I am well-positioned to simplify because all of the hats I wear fall under a single umbrella: I am a friend of the Lifeboat Foundation (where I am honoured to serve on the Human Trajectories, Economics, Finance, and Diplomacy Advisory Boards), a foundation “dedicated to encouraging scientific advancements while helping humanity survive existential risks.”

Almost everything I do – including the roles, associations, and relationships noted above – supports this mission.

It’s been nearly a year since Eric generously published Principles of Economics & Evolution: A Survival Guide for the Inhabitants of Small Islands, Including the Inhabitants of the Small Island of Earth, and since that time I have been fortunate to receive many interesting and insightful emails packed full of comments and questions; thus I would like to take this opportunity to introduce this work – which represents three years of research.

Those interested in taking the plunge and downloading the file above may note that this discourse

tables an evolutionarily stable strategy for the problem of sustainable economic development – on islands and island-like planets (such as Earth) alike – and thus this treatise yields, in essence, a long-term survival guide for the inhabitants of Earth.

Thus you may expect a rather long, complex discourse.

This is indeed what you may find – a 121-page synthesis, accompanied by a 1,233-page Digital Supplement.

As Nassim Nicholas Taleb remarked in Fooled by Randomness:

I do not dispute that arguments should be simplified to their maximum potential; but people often confuse complex ideas that cannot be simplified into a media-friendly statement as symptomatic of a confused mind. MBAs learn the concept of clarity and simplicity—the five-minute manager take on things. The concept may apply to the business plan for a fertilizer plant, but not to highly probabilistic arguments—which is the reason I have anecdotal evidence in my business that MBAs tend to blow up in financial markets, as they are trained to simplify matters a couple of steps beyond their requirement.

But there is indeed a short-cut — in fact, there are at least two short-cuts.

First, perhaps the most direct and pleasant approach to the summit is a condensed, 237-page thesis: On the Problem of Sustainable Economic Development: A Game-Theoretical Solution.

But for those pressed for time and/or those merely interested in sampling a few short, foundational works (perhaps to see if you’re interested in following me down the rabbit hole), the entire theoretical content of this 1,354-page report (report + digital supplement) may be gleaned from 5 of the 23 works included within the digital supplement. These working papers and publications are also freely available from the links below – I’ll briefly relate how these key puzzle pieces fit together:

The first publication offers a 13-page overview of our “problem situation”: On the Origin of Mass Extinctions: Darwin’s Nontrivial Error.

Second is a 21-page game-theoretical development which frames the problem of sustainable economic development in the light of evolution – perhaps 70% of our theoretical content lies here: On the Truly Noncooperative Game of Life on Earth: In Search of the Unity of Nature & Evolutionary Stable Strategy.

Next comes a 113-page gem which attempts to capture the spirit and essence of comparative island studies, a course charted by Alexander von Humboldt and followed by every great naturalist since (of which, more to follow). This is an open letter to the Fellows of the Linnean Society of London, a comparative study of two diametrically opposed economic development plans, both put into action in that fateful year of 1968 — one on Prince Edward Island, the other on Mustique. This exhaustive work also holds the remainder of the foundation for our complete solution to this global dilemma – and best of all, those fairly well-versed in game theory need not read it all; the core solution may be quickly digested on pages 25–51:
On the Truly Noncooperative Game of Island Life: Introducing a Unified Theory of Value & Evolutionary Stable ‘Island’ Economic Development Strategy.

Fourth comes an optional, 19-page exploration that presents a theoretical development also derived and illuminated through comparative island study (including a mini-discourse on methods). For UPEI Island Studies Programme readers with the time and inclination for only one relatively short piece, this may be the one to explore. And, despite the fact that this work supports the theoretical content linked above, it’s optional because there’s nothing new here – in fact, these truths have been well known and meticulously documented for over 1,000 years – but it may prove to be a worthwhile, engaging, and interesting read nonetheless, because these truths have become so unfashionable that they’ve slipped back into relative obscurity: On the Problem of Economic Power: Lessons from the Natural History of the Hawaiian Archipelago.

And finally I’ll highlight another optional, brief communiqué – although this argument may be hopelessly compressed, here, in 3 pages, is my entire solution:
Truly Non-Cooperative Games: A Unified Theory.

Yes, Lifeboat Foundation family and friends, you may wish to pause to review the abstracts to these core, foundational works, or you may even wish to review them completely and put the puzzle pieces together yourself (the pages linked above total 169 – or a mere 82 pages if you stick to the core excerpt highlighted in my Linnean Letter), but, as the great American novelist Henry Miller remarked:

In this age, which believes that there is a short cut to everything, the greatest lesson to be learned is that the most difficult way is, in the long run, the easiest.

Why?

That’s yet another great, simple question that may require several complex answers, but I’ll give you three:

#1). First and foremost, because explaining is a difficult art.

As Richard Dawkins duly noted:

Explaining is a difficult art. You can explain something so that your reader understands the words; and you can explain something so that the reader feels it in the marrow of his bones. To do the latter, it sometimes isn’t enough to lay the evidence before the reader in a dispassionate way. You have to become an advocate and use the tricks of the advocate’s trade.

Of course much of this depends upon the reader – naturally some readers may find that less (explanation) is more. Others, however, may benefit from reading even more (more, that is, than my report and the digital supplement). You may find suggested preliminary and complementary texts in the SELECTED BIBLIOGRAPHY (below). The report itself includes these and many more. In short, the more familiar readers are with some or all of these works, the less explaining they may require.

#2). No matter how much explaining you do, it’s actually never enough, and, as Abraham Lincoln wisely noted at Gettysburg, the work is never done. For more on this important point, let’s consider the words of Karl Popper:

When we propose a theory, or try to understand a theory, we also propose, or try to understand, its logical implications; that is, all those statements which follow from it. But this… is a hopeless task: there is an infinity of unforeseeable nontrivial statements belonging to the informative content of any theory, and an exactly corresponding infinity of statements belonging to its logical content. We can therefore never know or understand all the implications of any theory, or its full significance.
This, I think, is a surprising result as far as it concerns logical content; though for informative content it turns out to be rather natural…. It shows, among other things, that understanding a theory is always an infinite task, and that theories can in principle be understood better and better. It also shows that, if we wish to understand a theory better, what we have to do first is to discover its logical relation to those existing problems and existing theories which constitute what we may call the ‘problem situation’.
Admittedly, we also try to look ahead: we try to discover new problems raised by our theory. But the task is infinite, and can never be completed.

In fact, when it comes right down to it, my treatise – indeed, my entire body of research – is, in reality, merely an exploration of the “infinity of unforeseeable nontrivial statements belonging to the informative content” of the theory for which Sir Karl Popper is famous: his solution to David Hume’s problem of induction (of which you’ll hear a great deal if you brave the perilous seas of thought in the works introduced and linked herewith).

#3). Okay, this is a tricky one, but here goes: Fine, a reasonable skeptic may counter, I get it, it’s hard to explain and there’s a lot of explaining to do – but if 100% of the theoretical content may be extracted from fewer than 200 pages, then doesn’t that mean you could cut about 1,000 pages?

My answer?

Maybe.

But then again, maybe not.

The reality of the situation is this: neither I nor anyone else can say for sure – this is known as the mind-body problem. In essence, given the mind-body problem, not only am I unable to know exactly how to explain something I know, I’m not even able to know how it is that I know what I know. I’m merely able to guess. Although this brief introduction is neither the proper time nor place to explore the contents of this iteration of Pandora’s Box, those interested in a thorough exploration of this particular problem situation would be well served by F.A. von Hayek’s The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology (1952). But, in short, the bulk of the Digital Supplement and much of the report itself are merely an attempt to combat the mind-body problem – an attempt to put down as much of the history (and methodology) of this theoretical development as possible. As Descartes remarked at the outset of a treatise on scientific method:

This Tract is put forth merely as a history, or, if you will, as a tale, in which, amid some examples worthy of imitation, there will be found, perhaps, as many more which it were advisable not to follow, I hope it will prove useful to some without being hurtful to any, and that my openness will find some favor with all.

Perhaps you may grasp my theoretical development – but perhaps you may grasp it in a manner by which I did not intend for you to grasp it – and perhaps I stumbled upon a truth in another work within my digital supplement that may make it all clear. Or, perhaps I’ve got it all wrong, and perhaps you – by following in my footsteps through the historical course of this theoretical development (faithfully chronicled in the digital supplement) – may be able to help show me my error (and then, of course, we may both rejoice); Malthus felt likewise:

If [the author] should succeed in drawing the attention of more able men to what he conceives to be the principal difficulty in… society and should, in consequence, see this difficulty removed, even in theory, he will gladly retract his present opinions and rejoice in a conviction of his error.

Anticipating another point regarding style: This report is very, very unusual insofar as style is concerned. It’s personal, highly opinionated, and indulges artistic license at almost every turn in the road. In fact, you may also find this narrative a touch artistic – yet it’s all true. As Norman Maclean remarked in A River Runs Through It, “‘You like to tell true stories, don’t you?’ he asked, and I answered, ‘Yes, I like to tell stories that are true.’”

I like to tell stories that are true, too, and if you like to read them, then this epic journey of discovery may be for you. I speak to this point at length, but, in short, I submit that there is a method to the madness (in fact, the entire report may also be regarded as an unusual discourse on method).

Why have I synthesized this important theoretical development in an artistic narrative? In part, because Bruno Frey (2002) clearly stated why that’s the way it should be.

But I also did so in hopes that it may help readers grasp what it’s really all about; as the great Russian-American novelist Ayn Rand detailed:

Man’s profound need of art lies in the fact that his cognitive faculty is conceptual, i.e., that he acquires knowledge by means of abstractions, and needs the power to bring his widest metaphysical abstractions into his immediate, perceptual awareness. Art fulfills this need: by means of a selective re-creation, it concretizes man’s fundamental view of himself and of existence. It tells man, in effect, which aspects of his experience are to be regarded as essential, significant, important. In this sense, art teaches man how to use his consciousness.

Speaking of scientific method: I have suggested that my curiously creative narrative may offer some insight into the non-existent subject of scientific method — so please download the report for much more along these lines — but I want to offer an important note, especially for colleagues, friends, students, and faculty at UPEI: I sat in on a lecture last winter where I was surprised to learn that “island studies” had recently been invented by a Canada Research Chair – thus I thought perhaps I should offer a correction and suggest where island studies really began:

It is somewhat well known that Darwin and Wallace pieced the theory of evolution together independently, yet at roughly the same time – Wallace during his travels through the Malay archipelago, and Darwin during his grand circumnavigation of the island of Earth aboard the Beagle (yes, the Galapagos archipelago played a key role, but perhaps not as important a role as has been suggested in the past). What is not as commonly known is that both Darwin and Wallace had the same instructor in the art of comparative island studies. Indeed, Darwin and Wallace both traveled with identical copies of the same treasured book: Alexander von Humboldt’s Personal Narrative of Travels to the Equinoctial Regions of the New Continent. Both also testified to the fundamental role von Humboldt played in inspiring their travels and, moreover, in the development of their theories.

Thus, I submit that island studies may have been born with the publication of this monumental work in 1814; or perhaps, as Berry (2009) chronicled in Hooker and Islands (see SELECTED BIBLIOGRAPHY, below), it may have been Thomas Pennant or Georg Forster:

George Low of Orkney provided, together with Gilbert White, a significant part of the biological information used by pioneering travel writer Thomas Pennant, who was a correspondent of both Joseph Banks and Linnaeus [Pennant dedicated his Tour in Scotland and Voyage to the Hebrides (1774–76) to Banks and published Banks’s description of Staffa, which excited much interest in islands; Banks had travelled with James Cook and visited many islands]; Georg Forster, who followed Banks as naturalist on Cook’s second voyage, inspired Alexander Humboldt, whom Darwin in turn treated as a model.

But whoever it may have been — and whomever you may ultimately choose to follow — Humboldt certainly towers over the pages of natural history, and Gerard Helferich’s Humboldt’s Cosmos: Alexander von Humboldt and the Latin American Journey that Changed the Way We See the World (2004) tells Humboldt’s story incredibly well. This treasure also happens to capture the essence of Humboldt’s method, Darwin’s method, Wallace’s method, Mayr’s method, and Gould’s method, and it most certainly lays out the map I have attempted to follow:

Instead of trying to pigeonhole the natural world into prescribed classification, Kant had argued, scientists should work to discover the underlying scientific principles at work, since only those general tenets could fully explain the myriad natural phenomena. Thus Kant had extended the unifying tradition of Thales, Newton, Descartes, et al.… Humboldt agreed with Kant that a different approach to science was needed, one that could account for the harmony of nature… The scientific community, despite prodigious discoveries, seemed to have forgotten the Greek vision of nature as an integrated whole.… ‘Rather than discover new, isolated facts I preferred linking already known ones together,’ Humboldt later wrote. Science could only advance ‘by bringing together all the phenomena and creations which the earth has to offer. In this great sequence of cause and effect, nothing can be considered in isolation.’ It is in this underlying connectedness that the genuine mysteries of nature would be found. This was the deeper truth that Humboldt planned to lay bare – a new paradigm from a New World. For only through travel, despite its accompanying risks, could a naturalist make the diverse observations necessary to advance science beyond dogma and conjecture. Although nature operated as a cohesive system, the world was also organized into distinct regions whose unique character was the result of all the interlocking forces at work in that particular place. To uncover the unity of nature, one must study the various regions of the world, comparing and contrasting the natural processes at work in each. The scientist, in other words, must become an explorer.

With these beautiful words in mind and the spirit of adventure in the heart, I thank you for listening to this long story about an even longer story. Please allow me to be your guide through an epic adventure.

But for now, in closing, I’d like to briefly return to the topic at hand: human survival on Earth.

A few days ago, Frenchman Alain Robert climbed the world’s tallest building – Burj Khalifa – in Dubai.

After the six-hour climb, Robert told Gulf News, “My biggest fear is to waste my time on earth.”

I certainly share Robert’s fear – Alexander von Humboldt, Darwin, and Wallace did, too, by the way.

But then Robert added, “To live, we don’t need much, just a roof over our heads, some food and drink, and that’s it … everything else is superficial.”

I’m afraid that’s where Robert and I part ways – and if you would kindly join me on a journey through The Principles of Economics & Evolution: A Survival Guide for the Inhabitants of Small Islands, Including the Inhabitants of the Small Island of Earth – I would love to explain why Robert’s assertion is simply not true.

Please feel free to post comments or contact me with any thoughts, comments, questions, or suggestions.

MWF
Charlottetown, Prince Edward Island

PS: My report suggests many preliminary and complementary readings – but I’ve revisited this topic with the aim of producing a selected bibliography of the most condensed and readily accessible (i.e., freely available online) works which may help prepare the reader for my report and the foundational theoretical discourses noted and linked above. Most are short papers, but a few great books and dandy dissertations may be necessary as well!

SELECTED BIBLIOGRAPHY

BERRY, R. (2009). Hooker and islands. Biol J Linn Soc 96:462–481.

DARWIN, C., WALLACE, A. (1858). On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection. Proc Linn Soc 3:45–62.

DARWIN, C., et al. (1849). A Manual of Scientific Enquiry; Prepared for the use of Her Majesty’s Navy: and Adapted for Travellers in General (Murray, London).

DOBZHANSKY, T. (1973). Nothing in biology makes sense except in the light of evolution. Amer Biol Teacher 35:125–129.

EINSTEIN, A. (1920). Relativity: The Special and General Theory (Methuen & Co., London).

FIELDING, R. (2010). Artisanal Whaling in the Atlantic: A Comparative Study of Culture, Conflict, and Conservation in St. Vincent and the Faroe Islands. A PhD dissertation (Louisiana State University, Baton Rouge).

FREY, B. (2002). Publishing as Prostitution? Choosing Between One’s Own Ideas and Academic Failure. Pub Choice 116:205–223.

FUNK, M. (2010a). Truly Non-Cooperative Games: A Unified Theory. MPRA 22775:1–3.

FUNK, M. (2008). On the Truly Noncooperative Game of Life on Earth: In Search of the Unity of Nature & Evolutionary Stable Strategy. MPRA 17280:1–21.

FUNK, M. (2009a). On the Origin of Mass Extinctions: Darwin’s Nontrivial Error. MPRA 20193:1–13.

FUNK, M. (2009b). On the Truly Noncooperative Game of Island Life: Introducing a Unified Theory of Value & Evolutionary Stable ‘Island’ Economic Development Strategy. MPRA 19049:1–113.

FUNK, M. (2009c). On the Problem of Economic Power: Lessons from the Natural History of the Hawaiian Archipelago. MPRA 19371:1–19.

HELFERICH, G. (2004). Humboldt’s Cosmos: Alexander von Humboldt and the Latin American Journey that Changed the Way We See the World (Gotham Books, New York).

HOLT, C., ROTH, A. (2004). The Nash equilibrium: A perspective. Proc Natl Acad Sci USA 101:3999–4000.

HAYEK, F. (1974). The Pretense of Knowledge. Nobel Memorial Lecture, 11 December 1974. 1989 reprint. Amer Econ Rev 79:3–7.

HUMBOLDT, A., BONPLAND, A. (1814). Personal Narrative of Travels to the Equinoctial Regions of the New Continent (Longman, London).

KANIPE, J. (2009). The Cosmic Connection: How Astronomical Events Impact Life on Earth (Prometheus, Amherst).

MAYNARD SMITH, J. (1982). Evolution and the Theory of Games (Cambridge Univ, New York).

MAYR, E. (2001). What Evolution Is (Basic Books, New York).

NASH, J., et al. (1994). The Work of John Nash in Game Theory. Prize Seminar, December 8, 1994 (Sveriges Riksbank, Stockholm).

NASH, J. (1951). Non-Cooperative Games. Ann Math 54:286–295.

NASH, J. (1950). Two-Person Cooperative Games. RAND P-172 (RAND, Santa Monica).

POPPER, K. (1999). All life is Problem Solving (Routledge, London).

POPPER, K. (1992). In Search of a Better World (Routledge, London).

ROGERS, D., EHRLICH, P. (2008). Natural selection and cultural rates of change. Proc Natl Acad Sci USA 105:3416–3420.

SCHWEICKART, R., et al. (2006). Threat Mitigation: The Gravity Tractor. NASA NEO Workshop, Vail, Colorado.

SCHWEICKART, R., et al. (2006). Threat Mitigation: The Asteroid Tugboat. NASA NEO Workshop, Vail, Colorado.

STIGLER, G. (1982). Process and Progress of Economics. J of Pol Econ 91:529–545.

TALEB, N. (2001). Fooled by Randomness (Texere, New York).

WEIBULL, J. (1998). What Have We Learned from Evolutionary Game Theory So Far? (Stockholm School of Economics, Stockholm).

WALLACE, A. (1855). On the Law Which has Regulated the Introduction of New Species. Ann of Nat History 16:184–195.

Strong AI or Artificial General Intelligence (AGI) stands for self-improving intelligent systems possessing the capacity to interact with theoretical and real-world problems with a flexibility similar to that of an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily utilized in today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data, or even predict negotiation strategies, for example [1] [2], and genetic algorithms see similarly wide use. With the semantic web, the upcoming technology for organizing knowledge on the net that deals with machine-interpretable understanding of words in the context of natural language, we may be inventing early pieces of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology, and current AI research, and they promise to describe and ‘understand’ real-world concepts and to enable our computers to build interfaces to real-world concepts and coherences more autonomously. Actually getting from expert systems to AGI will require approaches for bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past, we have faced new kinds of security challenges: DoS attacks, email and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were, and are, among the first serious security incidents related to the Internet. But still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). To understand the security implications of strong AI, one must first realize that, if AGI takes off hard enough, there probably won’t be any human-predictable hardware, software, or interfaces around for long.

To grasp the new security implications, it’s important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the simplest mathematical equations can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating new cells based on which cells, generated by the same rule, were observed in the previous step. Many of these rules can be encoded in as little as 4 letters (32 bits), yet generate astounding complexity.

Cellular automaton, produced by a simple recursive formula
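To make the idea concrete, here is a minimal Python sketch (the original post contains no code, so this is purely illustrative) of an elementary one-dimensional cellular automaton; Rule 110, a famously complex rule whose entire update table fits in a single 8-bit number, is an assumption chosen for the example:

```python
RULE = 110  # the 8-bit rule number; its binary digits form the update table

def step(cells, rule=RULE):
    """Compute the next row of cells from the current one (edges wrap around)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighbourhood as 0..7
        nxt.append((rule >> pattern) & 1)  # look up the rule's output bit
    return nxt

# Start from a single live cell and print a small space-time diagram.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

A dozen lines of code and a single byte of “rule” suffice to produce patterns that, in general, cannot be predicted without simply running the automaton.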

The Fibonacci sequence is another popular example of unexpected complexity. Based on a very short recursive equation, the sequence generates a pattern of incremental increase which can be visualized as a complex spiral pattern, resembling a snail shell’s design and many other patterns in nature. A combination of Fibonacci spirals, for example, can resemble the motif of the head of a sunflower. A thorough understanding of this ‘simple’ Fibonacci sequence is also sufficient to model some fundamental but important dynamics of systems as complex as the stock market and the global economy.

Sunflower head showing a Fibonacci sequence pattern
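Along the same lines, here is a brief sketch (again, not from the original post) of the recursion just described: each term is the sum of the previous two, and the ratios of successive terms converge on the golden ratio that underlies the spiral patterns mentioned above.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

fib = fibonacci(12)
print(fib)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
for a, b in zip(fib, fib[1:]):
    print(f"{b}/{a} = {b / a:.6f}")  # ratios approach phi ~ 1.618034
```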

Traditional software is many orders of magnitude higher in complexity than basic mathematical formulae, and thus many orders of magnitude less predictable. Artificial general intelligence may be expected to work with rules even more complex than those of low-level computer programs, of a complexity comparable to that of natural human language, which would place it yet several orders of magnitude higher in complexity than traditional software. The security implications have not yet been researched systematically, but they are likely to be at least as hard as one would now expect.

Practical security is not about achieving perfection, but about mitigating risks to a minimum. A current consensus among strong AI researchers is that we can only improve the chances for an AI to be friendly, i.e. an AI acting in a secure manner and having a positive long-term effect on humanity rather than a negative one [5], and that this must be a crucial design aspect from the beginning. Research into Friendly AI started out with a serious consideration of the Asimov Laws of robotics [6] and is based on the application of probabilistic models, cognitive science, and social philosophy to AI research.

Many researchers who believe in the viability of AGI take it a step further and predict a technological singularity. Just like the assumed physical singularity that started our universe (the Big Bang), a technological singularity is expected to increase the rate of technological progress far beyond what we are used to from the history of humanity, i.e. beyond the current ‘laws’ of progress. Another important notion associated with the singularity is that we cannot predict even the most fundamental changes occurring after it, because things would, by definition, progress faster than we are currently able to predict. Therefore, in a similar way in which we believe the creation of the universe depended on its initial conditions (in the case of the Big Bang, the few physical constants from which the others can be derived), many researchers in this field believe that AI security strongly depends on the initial conditions as well, i.e. the design of the bootstrapping software. If we succeed in manufacturing a general-purpose decision-making mind, then its whole point would be self-modification and self-improvement. Hence, our direct control over it would be limited to its first iteration and the initial conditions of a strong AI, which we could influence mostly by getting the initial iteration of its hardware and software design right.

Our approach to optimizing those initial conditions must consist of working as carefully as possible. Space technology is a useful example here, one that points us in the general direction such development should take. In rocket science and space technology, all measurements and mathematical equations must be as precise as our current technological standards allow. Also, multiple redundancies must be present for every system, since every single aspect of a system can be expected to fail. Despite this, many rocket launches still fail today, although we are steadily improving on error rates.

Additionally, humans interacting with an AGI may be a major security risk themselves, as they may be convinced by an AGI to remove its limitations. Since an AGI can be expected to be very convincing if we expect it to exceed human intellect, we should not only focus on physical limitations, but also on making the AGI ‘friendly’. But even in designing this ‘friendliness’, the way our mind works leaves us largely unprepared to deal with the consequences of an AGI’s complexity, because the way we perceive and deal with potential issues and risks stems from evolution. As a product of natural evolution, our behaviour helps us deal with animal predators, interact in human societies, and care for our children, but not anticipate the complexity of man-made machines. These natural traits of human perception and cognition are the result of evolution, and are called cognitive biases.

Sadly, as helpful as they may be in natural (i.e., non-technological) environments, these are the very same behaviours that are often counterproductive when dealing with the unforeseeable complexity of our own technology and modern civilization. If you don’t yet see the primary importance of cognitive biases to the security of future AI, you’re probably in good company. But there are good reasons why this is a crucial issue that researchers, developers, and users of future generations of general-purpose AI need to take into account. One of the major reasons for founding the earlier-mentioned Singularity Institute for AI [3] was to get the basics right, including grasping the cognitive biases that necessarily influence the technological design of AGI.

What do these considerations practically imply for the design of strong AI? Some of the traditional IT security issues that need to be addressed in computer programs are: input validation, access limitations, avoiding buffer overflows, safe conversion of data types, setting resource limits, and secure error handling. All of these are valid and important issues that must be addressed in any piece of software, including weak and strong AI. However, we must avoid underestimating the design goals for a strong AI, mitigating the risk on all levels from the beginning. To do this, we must care about more than the traditional IT security issues. An AGI will interface with the human mind through text and direct communication and interaction. Thus, we must also estimate the errors that we may not see, and do our best to be aware of flaws in human logic and cognitive biases, which may include:

  • Loss aversion: “the dis-utility of giving up an object is greater than the utility associated with acquiring it”.
  • Positive outcome bias: a tendency in prediction to overestimate the probability of good things happening to them
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same.
  • Irrational escalation: the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
  • Omission bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).

The cognitive biases above are a modest selection from Wikipedia’s list [7], which contains over a hundred more. Struggling with some of the known cognitive biases in complex technological situations, and with the social components involved, may be quite familiar to many of us from situations ranging from managing modern business processes to investing in the stock market. In fact, we should apply any general lessons learned from dealing with current technological complexity to AGI. For example, some of the most successful long-term investment strategies in the stock market are boring and strict, but based mostly on safety, such as Buffett’s margin-of-safety concept. Even with all the factors gained from social and technological experience taken into account in an AGI design that strives to optimize both cognitive and IT security, its designers still cannot afford to forget that perfect and complete security remains an illusion.

References

[1] Chen, M., Chiu, A. & Chang, H., 2005. Mining changes in customer behavior in retail marketing. Expert Systems with Applications, 28(4), 773–781.
[2] Oliver, J., 1997. A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9115 [Accessed Feb 25, 2011].
[3] The Singularity Institute for Artificial intelligence: http://singinst.org/
[4] For the Lifeboat Foundation’s dedicated program, see: https://lifeboat.com/ex/ai.shield
[5] Yudkowsky, E. 2006. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, Oxford University Press, 2007.
[6] See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics and http://en.wikipedia.org/wiki/Friendly_AI, Accessed Feb 25, 2011
[7] For a list of cognitive biases, see http://en.wikipedia.org/wiki/Cognitive_biases, Accessed Feb 25, 2011

It would be helpful to discuss these theoretical concepts because there could be significant practical and existential implications.

The Global Brain (GB) is an emergent world-wide entity of distributed intelligence, facilitated by communication and the meaningful interconnections between millions of humans via technology (such as the internet).

For my purposes I take it to mean the expressive integration of all (or the majority) of human brains through technology and communication, a Metasystem Transition from the human brain to a global (Earth) brain. The GB is truly global not only in geographical terms but also in function.

It has been suggested that the GB has clear analogies with the human brain. For example, the basic unit of the human brain (HB) is the neuron, whereas the basic unit of the GB is the human brain. Whilst the HB is space-restricted within our cranium, the GB is constrained within this planet. The HB contains several regions that have specific functions themselves, but are also connected to the whole (e.g. occipital cortex for vision, temporal cortex for auditory function, thalamus etc.). The GB contains several regions that have specific functions themselves, but are connected to the whole (e.g. search engines, governments, etc.).

Some specific analogies are:

1. Broca’s area in the inferior frontal gyrus, associated with speech. This could be the equivalent of, say, Rupert Murdoch’s communication empire.
2. The motor cortex is the equivalent of the world-wide railway system.
3. The sensory system in the brain is the equivalent of all digital sensors, CCTV network, internet uploading facilities etc.

If we accept that the GB will eventually become fully operational (and this may happen within the next 40–50 years), then there could be severe repercussions on human evolution. Apart from the fact that we could be able to change our genetic make-up using technology (through synthetic biology or nanotechnology for example) there could be new evolutionary pressures that can help extend human lifespan to an indefinite degree.

Empirically, we find that there is a basic underlying law that allows neurons the same lifespan as their human host. If natural laws are universal, then I would expect the same law to operate in similar metasystems, i.e. in my analogy with humans being the basic operating units of the GB. In that case, I ask:

If there is an axiom positing that individual units (neurons) within a brain must live as long as the brain itself, i.e. 100–120 years, then the individual units (human brains and, therefore, whole humans) within a GB must live as long as the GB itself, i.e. indefinitely.

Humans will become so embedded and integrated into the GB’s virtual and real structures that it may make more sense, from the allocation-of-resources point of view, to maintain existing humans indefinitely, rather than eliminate them through ageing and create new ones, who would then need extra resources in order to re-integrate themselves into the GB.

The net result will be that humans will start experiencing an unprecedented prolongation of their lifespan, in an attempt by the GB to evolve to higher levels of complexity at a low thermodynamic cost.

Marios Kyriazis
http://www.elpistheory.info

I believe that death due to ageing is not an absolute necessity of human nature. From the evolutionary point of view, we age because nature withholds energy for somatic (bodily) repairs and diverts it to the germ-cells (in order to assure the survival and evolution of the DNA). This is necessary so that the DNA is able to develop and achieve higher complexity.

Although this was a valid scenario until recently, we have now evolved to such a degree that we can use our intellect to achieve further cognitive complexity by manipulating our environment. This makes it unnecessary for the DNA to evolve along the path of natural selection (which is a slow and cumbersome, ‘hit-and-miss’ process), and allows us to develop quickly and more efficiently by using our brain as a means for achieving higher complexity. As a consequence, death through ageing becomes an illogical and unnecessary process. Humans must live much longer than the current lifespan of 80–120 years, in order for a more efficient global evolutionary development to take place.

It is possible to estimate how long the above process will take to mature (see figure below). Consider that the creation of DNA was approximately 2 billion years ago, the formation of a neuron (cell) several million years ago, that of an effective brain (Homo sapiens sapiens) 200,000 years ago, and the establishment of complex societies (Ancient Greece, Rome, China, etc.) thousands of years ago. There is a logarithmic reduction of the time necessary to proceed to the next, more complex step (a reduction by a factor of 100). This means that global integration (and thus indefinite lifespans) will be achieved in a matter of decades (and certainly in less than a century), starting from the 1960s–1970s (when globalisation in communications, travel, and science/technology started to become established). This leaves a maximum of another 50 years before full global integration becomes established.

Each step is associated with a higher level of complexity, and takes a fraction of the time to mature, compared to the previous one (a rough arithmetic check follows the list below).

1. DNA (organic life — molecules: billions of years)

2. Neuron (effective cells: millions of years)

3. Brain (complex organisms — Homo sapiens: thousands of years)

4. Society (formation of effective societies: several centuries)

5. Global Integration (formation of a ‘super-thinking entity’: several decades)
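As a hedged back-of-the-envelope check of the factor-of-100 claim, the sketch below uses the post’s own rough figures; the exact durations assumed for steps 2, 4, and 5 are illustrative interpolations, not data.

```python
# (step name, approximate duration of that evolutionary step in years)
steps = [
    ("DNA (organic life)",          2_000_000_000),  # ~2 billion years
    ("Neuron (effective cells)",       20_000_000),  # millions of years (assumed)
    ("Brain (Homo sapiens)",              200_000),  # ~200,000 years
    ("Society (complex societies)",         2_000),  # thousands of years (assumed)
    ("Global Integration",                     20),  # decades (assumed)
]
# Each step takes roughly 1/100th of the time of the step before it.
for (name, years), (_, prev_years) in zip(steps[1:], steps):
    print(f"{name}: ~{years:,} years (previous step / {prev_years // years})")
```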

Step number 5 implies that humans who have already developed an advanced state of cognitive complexity and sophistication will transcend the limits of evolution by natural selection, and therefore, by default, must not die through ageing. Their continued life is a necessary requirement of this new type of evolution.

For full details see:

https://acrobat.com/#d=MAgyT1rkdwono-lQL6thBQ


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at the very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second-coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

It’s difficult to parse either eventuality with observant members of the other’s belief system. If you ask a group of technophiles what they think of the idea of the rapture, you will likely be laughed at or drowned in a tidal wave of atheist drool. The very thought of some magical force eviscerating an entire religious population in one eschatological fell swoop might be too much for some science and tech geeks, and medical attention, or at the very least a warehouse-quantity dose of smelling salts, might be in order.

Conversely, to the religiously observant, the notion of the singularity might, for them, exist in terms too technical to even theoretically digest, or represent something entirely dark or sinister that seems to fulfill their own belief system’s end game, a kind of techno-holocaust that reifies their purported faith.

The objective reality of both scenarios will be very different from either envisioned teleology. Reality’s shades of gray have a way of making foolish even the wisest individual’s predictions.

In my personal life, I too believed that the publication of my latest and most ambitious work, explaining the decidedly broad-scope Parent Star Theory, would constitute an end result of significant consequence, much like the popular narrative surrounding the moment of the singularity; that some great finish line had been reached. The truth, however, is that just like the singularity, my own narrativized moment was not a precisely secured end, but a distinct moment of beginning, of conception and commitment. Not an arrival but a departure; a bold embarkation without a clear end in sight.

Rather than answers, the coming singularity should provoke additional questions. How do we proceed? Where do we go from here? If the fundamental rules in the calculus of the human equation are changing, then how must we adapt? If the next stage of humanity exists on a post-scarcity planet, what then will be our larger goals, our new quest as a global human force?

Humanity must recognize that the idea of a narrative is indeed useful, so long as that narrative maintains some aspect of open-endedness. We might well need that consequential beginning-middle-end, if only to be reminded that each end most often leads to a new beginning.

Written by Zachary Urbina, Founder, Cozy Dark

Transhumanists are into improvements, and many talk about specific problems; Nick Bostrom, for instance. However, Bostrom’s problem statements have been criticized for not necessarily being problems, and I think this is largely why one must consider the problem definition (see step #2 below).

Sometimes people talk about their “solutions” for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but a lot of what you find will be undefined (or not very well defined) problem ideas and solutions.

These proposed solutions often make no attempt to find root causes, or they assume the wrong root cause. And a realistic, complete plan for solving a problem is rarely found.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically, but also socioculturally. Some problems are more than just “a” problem—they are a dynamic system of problems, and a process for problem solving is not by itself enough. There has to be management, goals, etc., most of which is outside the scope of this article. But first one should know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team; however, even a group of transhumanists out in the wilds of cyberspace will have to defend themselves when people ask, “Why should we care?”

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s (a minimal template sketch follows the list):

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?
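Here is one possible way to capture those answers as a simple record; this is a minimal sketch, and the example values are entirely hypothetical:

```python
# A problem-definition record following the 5 W's and 2 H's above.
# All values below are invented for illustration.
problem_definition = {
    "who":      "Customers who received units from batch 42",
    "what":     "Robot chassis cracks during normal ground use",
    "when":     "First reports within two weeks of shipping",
    "where":    "Units operated on rough outdoor terrain",
    "why":      "Initial understanding: fastener torque out of spec",
    "how":      "Vibration gradually loosens the chassis screws",
    "how_many": "17 of 500 shipped units affected so far",
}
for field, answer in problem_definition.items():
    print(f"{field:>8}: {answer}")
```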

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally in the real world events are causal. The point of root cause analysis is to trace the causes all the way back for your problem. If you don’t find the origin of the causes, then the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet most important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must always keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Why’s, in which you move down the “ladder of abstraction” by continually asking “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again until you’ve bottomed out with something that may be a true root cause.
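As a small illustration, here is a hypothetical 5 Why’s chain for the ground-robot example from step 2 (the intermediate answers are invented for the sketch):

```python
# Each answer becomes the subject of the next "why?".
five_whys = [
    ("Why did the robot break?",           "It was dropped out of a plane."),
    ("Why was it dropped out of a plane?", "The customer thought it could fly."),
    ("Why did the customer think that?",   "They were misinformed about the product."),
    ("Why were they misinformed?",         "The product description was ambiguous."),
    ("Why was it ambiguous?",              "Nobody reviews descriptions against specs."),
]
for question, answer in five_whys:
    print(f"{question} -> {answer}")
# The final answer is a candidate root cause: fix the review process,
# not the robot's (non-existent) flying software.
```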

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa: Use an Ishikawa (aka fishbone, or cause-and-effect) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first.
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.
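To make one of the tools above concrete, here is a minimal sketch of a Pareto analysis; the defect categories and counts are hypothetical, invented purely for illustration:

```python
# Sort causes by frequency and report cumulative percentages,
# which suggests which cause (of many) to fix first.
defects = {"misassembly": 42, "bad solder": 18, "cracked case": 7,
           "wrong label": 5, "other": 3}
total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:>13}: {count:3d}  (cumulative {100 * cumulative / total:5.1f}%)")
```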

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that solve the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate/test the solution, if possible. There might be unaccounted for side effects either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing so isn’t enough—the team also has to check whether the solution is really working.

For some issues the verification is clear-cut. Other corrective actions must be evaluated for effectiveness, for instance against a benchmark. Depending on the time scale of the corrective action, the team may need to add monitors and/or controls to continually confirm that the root cause stays squashed.
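As a toy illustration of such monitoring, consider the sketch below, in which a weekly metric is compared against a benchmark; the metric, thresholds, and readings are all invented:

```python
# A toy post-fix monitor: compare a weekly metric against a benchmark
# and flag any regression. Metric, thresholds, and readings are invented.
BENCHMARK = 0      # target leak events per week after the fix
TOLERANCE = 0      # number of events we are willing to overlook

weekly_leak_events = [0, 0, 1, 0]   # hypothetical post-fix observations

for week, events in enumerate(weekly_leak_events, start=1):
    if events > BENCHMARK + TOLERANCE:
        print(f"Week {week}: {events} event(s) - the root cause may have resurfaced")
    else:
        print(f"Week {week}: OK ({events} events)")
```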

7. Prevent Recurrence

It’s possible that a process will revert to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should leave the organization or environment with improvements to processes, procedures, practices, etc., so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.


Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, increasing concern has arisen with the topic of “friendly AI,” coupled with the idea that we should do something about this now, not after a potentially deadly situation has started to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long-lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI but also for ourselves, we should seek to develop and promote “ruling ideas” (or source models) that will foster an ecologically respectful AI culture, including respect for humanity and other life forms, and actively sell these to the AIs as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave in accordance with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
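To make the game-theory worry concrete, here is a toy expected-value sketch; the payoffs and the probability of being caught are invented for illustration:

```python
# Toy expected-value calculus for faithfulness vs. betrayal.
# All payoffs and probabilities are invented for illustration.
REWARD_FAITHFUL = 3      # steady payoff from cooperating
REWARD_BETRAYAL = 5      # one-off gain from defecting
PENALTY = 10             # cost if the betrayal is caught and punished

def expected_betrayal_value(p_caught: float) -> float:
    """Expected payoff of betraying, given a probability of being caught."""
    return (1 - p_caught) * REWARD_BETRAYAL + p_caught * (REWARD_BETRAYAL - PENALTY)

for p in (0.0, 0.1, 0.3, 0.5):
    choice = "betray" if expected_betrayal_value(p) > REWARD_FAITHFUL else "stay faithful"
    print(f"P(caught)={p:.1f}: expected betrayal value "
          f"{expected_betrayal_value(p):.1f} -> {choice}")
```

The point is not the numbers but the shape of the reasoning: when penalties are rarely enforced, a purely calculating agent finds betrayal attractive.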

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Such an expectation creates a “non-complementary” situation, in which what is true for one party, who experiences friendliness, is not true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical by delimiting its scope and depth. To how wide a circle does this kindness obligation extend, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.
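As a rough illustration of reasoning from past cases rather than hard-coded goals, here is a minimal case-retrieval sketch; the cases, features, and similarity measure are all invented:

```python
# A minimal case-based reasoning sketch: retrieve the stored case most
# similar to a new situation and reuse its response. Cases, features,
# and the similarity measure are all invented for illustration.
cases = [
    ({"urgency": 0.9, "harm_risk": 0.8}, "alert a human supervisor"),
    ({"urgency": 0.2, "harm_risk": 0.1}, "proceed and log the action"),
    ({"urgency": 0.7, "harm_risk": 0.2}, "ask for clarification"),
]

def similarity(a: dict, b: dict) -> float:
    """1 minus the mean absolute difference over shared features."""
    keys = a.keys() & b.keys()
    return 1 - sum(abs(a[k] - b[k]) for k in keys) / len(keys)

new_situation = {"urgency": 0.8, "harm_risk": 0.75}
features, response = max(cases, key=lambda c: similarity(c[0], new_situation))
print(f"Closest case {features} suggests: {response}")
```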

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001; also Law Update, Issue No. 161, August 2004, Al Tamimi & Co., Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

Following the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall, on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near,” Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action,” will analyze the interplay of over-pessimistic and over-optimistic positions with regard to research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist,” as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

The Humanity+ Summit @ Harvard is an unmissable event for everyone interested in the evolution of the rapidly changing human condition and the impact of accelerating technological change on the daily lives of individuals and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions packed into two days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios (which must be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics feel compelled to speak out against operating the LHC.

The submission includes assessments from experts in the fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics, and statistics. Further weight is added because these are all university-based experts, from Griffith University, the University of North Dakota, and Oxford University respectively. In particular, the critics charge that CERN’s official safety report lacks independence, since all its authors have a prior interest in the LHC running, and that the report relies on physicist-only authors when modern risk-assessment guidelines recommend including risk experts and ethicists as well.

As a precondition of safety, the request calls for a neutral and multi-disciplinary risk assessment, plus additional astrophysical experiments, Earth-based and in the atmosphere, for better empirical verification of the alleged comparability between particle collisions under the extreme artificial conditions of the LHC experiment and relatively rare natural high-energy particle collisions: “Far from copying nature, the LHC focuses on rare and extreme events in a physical set up which has never occurred before in the history of the planet. Nature does not set up LHC experiments.”

Even under the greatly improved safety circumstances proposed above, big jumps in energy, such as the presently planned factor-of-three increase over present records, should be avoided on principle unless previous results are carefully analyzed before each increase of energy.

The concise “Request to CERN Council and Member States on LHC Risks” (PDF with hyperlinks to the described studies) was submitted by several critical groups and is supported by well-known critics of the planned experiments:

http://lhc-concern.info/wp-content/uploads/2010/03/request-t…5;2010.pdf

The answer received so far does not address these arguments and studies, but only repeats that, from the operators’ side, everything appears sufficient, as agreed by a Nobel Prize winner in physics. The LHC restart, with record collisions at three times the present record energy, is presently scheduled for March 30, 2010.

A detailed and readily understandable official paper and communication, with many scientific sources, by ‘ConCERNed International’ and ‘LHC Kritik’:

http://lhc-concern.info/wp-content/uploads/2010/03/critical-…ed-int.pdf

More info:
http://lhc-concern.info/

For any assembly or structure, whether an isolated bunker or a self-sustaining space colony, to function perpetually, the ability to manufacture any of the parts necessary to maintain or expand the structure is an obvious necessity. Conventional metalworking techniques, consisting of forming, cutting, casting, or welding, present extreme difficulties in size and complexity that make them hard to integrate into a self-sustaining structure.

Forming requires heavy, high-powered machinery to press metals into their final desired shapes. Cutting procedures, such as milling and lathing, also require large, heavy, complex machinery, and in addition waste tremendous amounts of material as large bulk shapes are cut away to reveal the final part. Casting requires complex mold construction and preparation procedures: not only must a negative mold of the final part be constructed, but the mold must also be prepared, usually by coating it in ceramic slurries, before the molten metal is applied. Unless thousands of parts are required, the molds are a waste of energy, resources, and effort. Joining, usually achieved by welding or brazing, is a flexible process that works by melting metal between two fixed parts in order to join them — but producing the fixed parts presents the same manufacturing problems.

Ideally then, in any self-sustaining structure, metal parts should be constructed only in the final desired shape, without the need for a mold and with very limited need for cutting or joining. In a salient step toward this necessary goal, NASA has demonstrated the innovative Electron Beam Freeform Fabrication process (http://www.aeronautics.nasa.gov/electron_beam.htm). A rapid metal-fabrication process, it essentially “prints” a complex three-dimensional object by feeding wire through a computer-controlled electron-beam gun that melts it, building the part layer by layer and adding metal only where you desire it. It requires no molds and little or no tooling, and material properties are similar to those of other forming techniques. The complexity of the part is limited only by the imagination of the programmer and the dexterity of the wire feed and heating device.

Electron beam freeform fabrication process in action
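To illustrate the layer-by-layer idea in the abstract, here is a toy deposition-planning sketch; it is not NASA’s actual EBF³ control software, and all dimensions are invented:

```python
# A toy layer-by-layer deposition planner: slice a cone-shaped part into
# horizontal layers and estimate the wire feed needed per layer. This is
# not NASA's EBF3 control software; all dimensions are invented.
import math

LAYER_HEIGHT = 2.0    # mm deposited per pass
BASE_RADIUS = 30.0    # mm, radius of the cone's base
PART_HEIGHT = 60.0    # mm, height of the cone
WIRE_AREA = 1.0       # mm^2, cross-section of the feed wire

total_wire_mm = 0.0
z = 0.0
while z < PART_HEIGHT:
    r = BASE_RADIUS * (1 - z / PART_HEIGHT)          # cone tapers linearly
    layer_volume = math.pi * r ** 2 * LAYER_HEIGHT   # mm^3 in this slice
    total_wire_mm += layer_volume / WIRE_AREA        # mm of wire to melt
    z += LAYER_HEIGHT

print(f"Layers: {int(PART_HEIGHT / LAYER_HEIGHT)}, "
      f"estimated wire: {total_wire_mm / 1000:.1f} m")
```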

According to NASA materials research engineer Karen Taminger, who is involved in developing the EBF³ process, extensive NASA simulations and modeling of long-duration space flights found no discernible pattern in the types of parts that failed, but the mass of the failed parts remained remarkably consistent throughout the studies. This is a favorable finding for in-situ parts manufacturing, and because of it the EBF³ team at NASA has been developing a desktop version. Taminger writes:

“Electron beam freeform fabrication (EBF³) is a cross-cutting technology for producing structural metal parts…The promise of this technology extends far beyond its applicability to low-cost manufacturing and aircraft structural designs. EBF³ could provide a way for astronauts to fabricate structural spare parts and new tools aboard the International Space Station or on the surface of the moon or Mars”

NASA’s Langley group working on the EBF³ process took their prototype desktop model for a ride on NASA’s microgravity-simulating aircraft and found that the process works just fine in microgravity, or even against gravity.

A structural metal part fabricated from EBF³

The advantages this system offers are significant. Near-net-shape parts can be manufactured, significantly reducing scrap. Unitized parts can be made: instead of multiple parts that need riveting or bolting, final complex integral structures can be produced. An entire spacecraft frame could be ‘printed’ in one sitting. The process also creates minimal waste and is highly energy- and feedstock-efficient, critical for self-sustaining structures. Metal can be placed only where it is desired, and the material and chemistry properties can be tailored throughout the structure; the technical seminar features a structure with a smooth transitional gradient from one alloy to another. Also, structures can be designed specifically for their intended purposes, without needing to be tailored to the manufacturing process; for example, stiffening ridges can be curvilinear, in response to the applied forces, instead of following the typical grid patterns that facilitate conventional manufacturing techniques. Manufacturers such as Sciaky Inc. (http://www.sciaky.com/64.html) are already jumping on the process.

In combination with similar 3D part ‘printing’ innovations in plastics and other materials, the difficulty of sustaining all the mechanical and structural components of a self-sustaining structure is plummeting. Isolated structures could survive on a feedstock of scrap that is perpetually recycled, as worn parts are replaced by free-form manufacturing and the old ones melted down to make new feedstock. Space colonies could combine such manufacturing technologies and scrap feedstock with resource collection, creating a viable, minimal-volume, low-energy system that could perpetually repair the structure – or even build more. Technologies like these show that the atomic-level control promised by nanotechnology manufacturing proposals is not necessary to create self-sustaining structures, and that with minor developments of modern technology, self-sustaining structures could be built and operated successfully.