
Abstract

American history teachers praise the educational value of Billy Joel’s 1980s song ‘We Didn’t Start the Fire’. His song is a homage to the 40 years of historical headlines since his birth in 1949.

Which of Joel’s headlines will be considered the most important a millennium from now?

This column discusses five of the most important, and tries to make the case that three of them will become irrelevant, while one will be remembered for as long as the human race exists (one is uncertain). The five contenders are:

The Bomb
The Pill
African Colonies
Television
Moonshot


Article

My previous column concentrated on the Hall Weather Machine[1], with a fairly technocentric focus. In contrast, this column is not technical at all, but considers the premise that if we don’t know our past, then we don’t know what our future will be.

American history teachers praise Billy Joel’s 1980s song ‘We Didn’t Start the Fire’ for its educational value. His song is a homage to the 40 years of historical headlines since his birth in 1949. Before reading further, go to http://yeli.us/Flash/Fire.html to hear it and to see the photographs that go with each phrase of the song.

Which of Joel’s headlines do you think will be most important, when considered by people a millennium from now? A thousand years is a long time.

Many of the popular figures Joel mentions from politics, entertainment, and sports have already begun to fade from living memory, so they are easy to dismiss. Similarly, which nation won which war will be remembered only by historians, though the genetic legacies of the populations affected by those wars will reverberate through the centuries. An interesting exercise is to consider the most significant events of the eleventh century. English-speaking historians might mention the Battle of Hastings, but is Britain even a world power any longer? Where are the Byzantine, Ottoman, Toltec, and Holy Roman empires today?

Note that there may be a difference between what most people 1,000 years from now will consider to be the most important and what may actually be the most important. In this case, just because the empires mentioned above are gone doesn’t necessarily mean they didn’t have a significant role in creating our present and our future — we may simply be unconscious of their effect.

I will consider a few possibilities before arguing for one headline that is certain to be remembered, rightfully so, ten thousand years from now — if not longer.


The Bomb

First, most thoughtful people would include the hydrogen bomb. A few decades ago, almost everyone would have agreed wholeheartedly. At that time, the policy of Mutual Assured Destruction hung heavily over every life in the USSR and the United States (if not the world). With the USSR now gone, and Russia and the United States not quite at each other’s throats, the danger of extinction via an all-out nuclear exchange may be lower. However, a nuclear attack that kills a few million people is actually more likely.

Until now, for a nation to become a great power and thereby wield great influence, it needed a level of organization that depended on civilization. No matter how brutal their governments or cultures — such as Nazi Germany, the Communist Soviet Union, or Ancient Rome — their organization depended on efficient education, competent administration, large-scale engineering, and the finer things in life to motivate at least the elite. Even then, some of the benefit would trickle down, as “a rising tide raises all boats”. Competent and educated slaves were a key to Roman civilization, just as educated bureaucrats were essential to the Nazi and Soviet systems.

Now, however, we are entering a situation in which atomic weapons give the edge to the stark raving mad: anyone who is willing to use them. This situation could be most destructive to prosperous, open, humanistic, and civilized nations, because they may be less willing to give up their comfort and freedom to defend against this threat. It appears very likely that within a decade or less, any ragtag collection of pip-squeak lunatics will be able to level the greatest city on Earth, even one defended by the world’s strongest army. This is because advances in nuclear enrichment technology (along with all technology) will make it easier for pip-squeak lunatics to acquire or manufacture nuclear bombs.

That being said, however, it is also true that really advanced technology — specifically privacy-invasive information technology, perhaps in the form of throwaway supercomputers in a massive network of dustcams — might stop the pip-squeak lunatics before they can build and deploy their nuclear bombs.

In addition, another decade of technological development will result in nanobots. This isn’t just my prediction (the defense of which is a subject of a future column), but also the opinion of inventive dreamers such as Raymond Kurzweil, and of conservative achievers such as Lockheed executives. The development of nanobots means that cellular repair of radiation damage may also become possible (though controlling trillions of nanobots, and detecting and repairing radiation damage, are separate and very difficult engineering and biological problems). Michael Flynn examined some of the nuclear strategic issues of this nanotech application in his short story “Washer at the Ford”.[2]

The problem is that there may be a five-year window during which our only defense against nuclear-bomb-wielding pip-squeak lunatics will be privacy-invasive information technology, run by the FBI, NSA, and CIA, and their counterparts around the world. Yes, you should be worried, but probably not for the reasons you may think. The danger is not so much that these government agencies may infringe on your rights, but that the very nature of their jobs means that they won’t be able to apply Kantrowitz’s weapon of openness[3] against those who want us dead. To make matters worse, the U.S. intelligence agencies will likely follow the complex laws that protect the privacy of U.S. persons[4] — to the exclusion of catching the nuclear lunatics. This is one reason that FBI, NSA, and CIA directors get new gray hairs every night.

Another problem is that the pip-squeak lunatics will also be able to buy cheap, privacy-invasive information (and other) technologies. Petro-dollars, peasant-made knickknacks, and mining rights have given ethically-challenged individuals in third-world countries astonishing wealth. Many of the world’s richest men live in the world’s poorer countries.[5] They have also learned cruel and clever means by which to keep their peasants down. The question is whether or not they can run the expensive technology they bought with their wealth and power. Buying cheap technology is one thing, but controlling it requires skilled people, and skilled people are more difficult to control. Can the dictators keep a small cadre of trusty elites to run the technology? North Korea and Iran are interesting (and rather scary) test cases at the moment.

Another wild card is that while some dictatorships have become more totalitarian, there comes a point at which the downtrodden peasants (and students, and factory workers, and shopkeepers) don’t have anything to lose but their miserable lives. Meanwhile, totalitarian governments can’t keep up with technology as quickly as free ones can. This is when the system collapses of its own weight, and that is what happened to the USSR. The cell phone, Facebook, and Twitter-fed revolutions in Egypt, Libya, Syria, and elsewhere also seem to prove this point. Thus far, the Chinese leaders have been smart enough to adapt their economy without adapting their government. The jury is still out as to what will happen to them next (it may not be pretty for us if it ends badly, and there are many ways it can end badly).

Another wild card to consider is that most of the existing nuclear warheads are in the United States, Russia, and China. Americans conveniently forget, but non-Americans are very aware, that the United States is (thus far) the only nation that has actually used an atomic bomb to kill people. On the other hand, America doesn’t have highly corrupt officials in charge of its nuclear arsenal (Pakistan), nor is its arsenal controlled by a near-dictator (Russia), nor by a totalitarian crazed nut-job (North Korea). In addition, a number of important Japanese leaders have publicly said that the controversial decision to bomb Hiroshima and Nagasaki was the correct one: “It could not be helped.”[6] A similar case might be made for Israel, which is surrounded by overwhelming numbers of Arab nations. Given the tensions in the area, a preemptive strike by Israel seems possible, if not likely. The important question then becomes: On what grounds, if any, could such usage be justified? Of course, Iranian and Arab leaders have often called for the total destruction of Israel, and eventually one of them may be willing to try it. On what grounds could they be justified?

Another issue is that once we lose New York or some other major city, Americans will accept any solution — including a totalitarian police state. So will the people of other democratic republics if they lose a major city to nuclear terrorists. But the solution is not necessarily a police state. David Brin has answered the “who guards the guardians” question with a clever answer: “We all do.” Over-simplified, his solution is to kiss most of your privacy goodbye. Either that, or kiss your life, your liberty, your property, and your privacy all goodbye. Brin proposes that we should all submit to being on camera most of the time — as long as the camera essentially points both ways, so that we know who is watching us; that is, the police, our neighbors, the pervert three blocks away, and our governments will know that we are watching them too. We must all shoulder the responsibility of policing our neighborhoods and our governments. The world will be like a big village in which everyone knows everyone else’s business, but that’s OK, because we are all accountable for our actions. Given that human beings only behave when held accountable, this is the only real solution.[7]

Some may think it naive to expect that governments would ever allow their citizens to observe them in return for observing us. On the other hand, between the increasing calls for government transparency, and the fact that even the chief of the IMF can be taken down by a lowly maid (with the help of the rule of law), there is hope. Not only that, but many of us have already given away much of our privacy on Facebook and YouTube, and we don’t seem to worry about it. Maybe I’m still a wide-eyed optimist, but look at the fall of the Soviet empire. Nobody with two brain cells to rub together could have predicted that its collapse would be so bloodless.

DARPA will certainly look for technological answers for nuclear bomb-related problems such as the nightmare of screening shipping containers. They will probably find some solutions, but during the critical transition phase towards productive nanosystems, will they be able to make those solutions affordable?

One nanotech solution to nuclear bombs hidden in shipping containers is to stop all physical shipping altogether and just trade files over the Internet, printing whatever we want on our desktops (by the way, you can build a very large printer in two steps). Our only problem then would be keeping our computer virus detectors up to date so that we don’t print something nasty.

To summarize, if anybody is around 1,000 years from now, then the nuclear bomb will not be considered an important issue.


The Pill

The second development of the past 50 years that many people will propose as historically consequential is the contraceptive pill.

Some claim that the Pill is necessary because we have a population problem. When I was in college in the 1970s, it was “proven” to me, with the aid of computer models, that overpopulation would cause food riots in the United States by 1985. So naturally, I’m as skeptical about overpopulation as I am about the imminent rapture. Everyone probably agrees that overpopulation results when the population exceeds the sustainable carrying capacity of the environment. But what determines that capacity? Technology multiplies it, while ignorance, injustice, and war decrease it. On Earth today, there is no correlation between standard of living and population density.[8]

That being said, in a closed system, unlimited human population growth could result in a situation worse than simple human extinction. Natural ecosystems have population boom/crash cycles all the time, but other species don’t have access to nuclear bombs and other devices that can obliterate habitats. The overpopulation disaster on Easter Island occurred with a primitive culture. It still has grass, but not much of an ecosystem. Imagine what could have happened with modern technology.

The Pill fundamentally changed the relationship of men and women, the place of children in a family and, on the macro level, population dynamics. The family is the basic building block of society and civilization, not only because it is an economic unit (you don’t pay your spouse to wash the dishes or take out the garbage), but more importantly, because the family critically shapes the next generation. Therefore, a large change in family structures will have far-reaching effects, at least in the “short run” of five to ten generations. However, to steal from Jerry Pournelle and Larry Niven: “Think of it as evolution in action.”[9] The people who embrace contraception as a path to “the good life” will (evolutionarily speaking) remove their vote for influencing their future within a few generations. It is true that for humans, memes may carry as much weight as genes, but the same process applies — as long as meme propagation is kept below a critical level, perhaps by co-traveling xenophobic memes. On the other hand, people who don’t have much of their material resources tied up in children may have more time to devote to meme propagation. However, many studies have shown that the people who have the greatest impact on teens and pre-teens are their parents.[10]

One possible result is that a millennium from now, the Pill will be a small blip, as inconsequential as the Shakers, and for essentially similar reasons. Nanotechnology-enabled life extension techniques will extend that blip for a while, but because prolific pro-natalists will keep having children throughout their longer lives, more pro-natalists will be born to outvote the anti-natalists. This is why the Israeli Knesset now has a significantly higher percentage of Ultra-Orthodox members than when it began,[11] why Utah’s government is almost 100% Mormon,[12] and why the Amish are one of the fastest-growing minorities in the world, with an average of 6.8 children per family.[13]
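The outvoting dynamic described above can be sketched with a toy two-population model. The starting share, the fertility rates, and the assumption of discrete, non-mixing generations are all illustrative assumptions of mine, not data from this column or its footnotes:

```python
# Toy demographic model: a small high-fertility minority vs. a large
# low-fertility majority. All parameter values are illustrative assumptions.

def generations_until_majority(minority_share=0.01,
                               tfr_minority=6.5,
                               tfr_majority=2.1):
    """Return the number of generations until the high-fertility group
    outnumbers the rest. Each group's size is multiplied by TFR/2 per
    generation (two parents per child; no mixing, defection, or mortality
    differences -- a deliberately crude sketch)."""
    minority = minority_share
    majority = 1.0 - minority_share
    generations = 0
    while minority <= majority:
        minority *= tfr_minority / 2.0
        majority *= tfr_majority / 2.0
        generations += 1
    return generations

# With these assumptions, a 1% minority averaging 6.5 children per family
# outnumbers a majority averaging 2.1 within a handful of generations.
print(generations_until_majority())
```

The point of the sketch is only that exponential growth makes the crossover happen in generations rather than centuries; real populations mix and defect, which slows (but does not necessarily stop) the effect.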

The opposing trend is controlled by a number of factors. First, the birth rate goes down as women’s education goes up. This occurs partly because, practically speaking, it is difficult to go to school while being married and raising children. More subtly, however, it is because school is an investment in learning a professional trade, a different investment than children. In addition, women and men are implicitly and explicitly taught that a better career is more important than raising more children.

The problem isn’t that women are being educated. The problem is what they are taught: if that teaching results in the replacement of our egalitarian, humanistic, and liberal society by one that is misogynistic, xenophobic, and unjust, then something is wrong.

One weapon of the contraceptive culture is the reeducation of the pro-natalists’ children. Proponents of secularization would call this “giving people free access to all information”, not “reeducation”. But when Bibles are banned from the classroom, and students are taught in many ways that they are just animals, it looks like the imposition of a secular viewpoint. At least the schools could teach the debate — and at the end of the semester, the students could try to guess the teacher’s bias (if they can’t, then the teacher presented both views with equal force).

There are more than a million home-schooled children in the U.S., up to two-thirds of whom are taught at home primarily because their parents fear the imposition of the government’s ideas on their children.[14] This quiet protest is so feared by governments that parents are prosecuted for it, not only in all totalitarian countries but even in some democratic nations.[15] The alternative is that the governments of open, liberal, and secularized nations (those that accept contraception) will decide that the vote of this growing minority is wrong. Could their right to vote be taken away? Of course it could; it has happened before.

A pessimistic view of this possibility of disenfranchisement is also supported by the prevalence of abortion in liberal democracies. Given the accuracy of ultrasound imagery, if we can ignore the right to life of our most innocent and helpless, then how safe is something as meager as the right to vote? Niemöller’s poem about trade unionists, Communists, and Catholics comes to mind.[16] So do the events in ancient Egypt during the three or four hundred years between the famines that Joseph ameliorated (Genesis 50:22) and the Exodus. The Egyptian upper class used contraception[17], and they felt threatened by the increasing numerical growth of the Jews, who had strict injunctions against it.

Is it good for our country that more than a million children are being taught by their parents? What if rebellious parents are teaching strange and dangerous ideas? How do we decide which ideas are dangerous? Do we censor and suppress them? After all, ideas have consequences.

The answer is that there are limits to what parents can do, but very few — if any — on what they teach. The whole point about freedom of religion is that we can believe what we want, as long as we do not destroy society or individuals with our actions. Our constitution was written not to limit individuals, but to put strict limits on government, since it is inherently more powerful.

The temptation to avoid having children is not limited to any particular culture. The reason is simple: raising children is an expensive, risky, and difficult investment. Parents must be willing to give up fancy vacations, luxury cars, time to themselves, and a good night’s sleep, and to accept additional stress on their marriage, all of which weighs against the pro-natalist agenda. However, a culture that teaches that children are a blessing and a worthwhile investment instead of a cost will overcome those that do not — even if it also tends to encourage people to be ignorant, misogynist, racist, and illogical (like two polygamist religions that start with the letter “M”[18]).

Cyril M. Kornbluth’s 1951 short story “The Marching Morons” illustrates another potential downside to the anti/pro-natalist issue by portraying a scenario in which selective pressure resulted in smart people breeding themselves out of existence. It also, despite the derogatory title, provides a warning: the originator of the “Final Solution” (placing all the fertile morons onto one-way rockets to nowhere) ends up screaming futilely as he himself is loaded on one of the last rockets. Kornbluth’s main premise seems logical. People are often willing to trade children for the better material things and higher standard of living, and those with more education are more willing to do so. But is the resulting cost to society worth it?

What will happen when productive nanosystems and advanced software lower the price of goods and services to very low levels? Many other things will happen at the same time, but in a society of economic abundance, the expense of children will drop significantly, limited only by attention span and desire (and possibly expanded by reproduction-enhancing technologies, including parthenogenesis and male pregnancy). Is there a gene for liking children? Or is it a meme that is culturally transmitted? Evolution favors both. Of course, evolution may also favor a “Boys from Brazil”[19] scenario (in which numerous clones of a dictator are grown to reinstate his rule). This strategy may be successful as long as the clones survive to adulthood and can reproduce.

While a contraceptive culture is unsustainable, especially in the face of a competing culture whose population is growing, it must be noted that a pro-natalist culture is also unsustainable. Isaac Asimov pointed out that even if we could overcome all technological obstacles, any constant growth rate will eventually result in humanity becoming a big ball of flesh, expanding at the speed of light (BOFESOL, or BOF for short). At a modest 3% rate, we would reach the initial BOF in only 3,584 years. After that, the speed of light itself will limit growth.
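Asimov’s limit can be checked with a back-of-the-envelope calculation. The starting population, mass per person, and mass of the observable universe below are my rough assumptions rather than Asimov’s figures, so the result lands in the same few-thousand-year ballpark as the 3,584 quoted above rather than matching it exactly:

```python
import math

# Back-of-the-envelope "ball of flesh" (BOF) estimate. All constants are
# rough assumptions for illustration, not figures from Asimov or this column.
GROWTH_RATE = 0.03        # 3% annual population growth
POPULATION_0 = 7.0e9      # people, circa 2011
MASS_PER_PERSON = 70.0    # kg, assumed average
MASS_UNIVERSE = 1.5e53    # kg, rough estimate for the observable universe

def years_until_mass(target_mass_kg):
    """Years until total human biomass reaches target_mass_kg,
    assuming fixed exponential growth."""
    initial_mass = POPULATION_0 * MASS_PER_PERSON
    return math.log(target_mass_kg / initial_mass) / math.log(1.0 + GROWTH_RATE)

print(round(years_until_mass(6.0e24)))     # humanity outweighs the Earth: ~1,000 years
print(round(years_until_mass(MASS_UNIVERSE)))  # outweighs the universe: ~3,200 years
```

The striking feature is how insensitive the answer is to the assumptions: because the growth is exponential, even wildly different target masses change the result by only a factor of a few.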

However, the fact that a contraceptive culture is unsustainable on a significantly shorter timescale than a pro-natalist one is why it makes sense for governments to support traditional religions in their efforts to maintain traditional morality and fertility. The difficult problem is finding ways to ensure the survival of a culture without it becoming xenophobic. This is hard to do when we think that we have Absolute Truth and the One True Religion on our side. But then exactly how do we know that our particular set and ordering of values is the objectively correct one? Note that denying that any objectively maximal set of values exists is itself a particular set of values. And note also that sustainability and tolerance are themselves values that, like all values, must be assumed because they cannot be proven.

Given the contradictory evidence and shifting values, it is likely that equilibrium between pro-natalist and contraceptive meme sets can never be reached. Instead, humanity will likely experience benign (and sometimes not-so-benign) boom-and-crash cycles similar to those that natural ecosystems suffer. Only for us, our cycles will be constrained by opinions and technological capabilities, not by predators.


African Colonies

A third historical event that may be of consequence a thousand years from now is “Belgians in the Congo”. The Belgian regime in the Congo was about as brutal and inhuman as any that the Europeans imposed on their colonies. However, the European empires also spread Christianity in Africa, where it remains a fast-growing religion. This may prove as significant as the Spanish and Portuguese spreading of Christianity in Latin America, and it will bring about a fundamentally different world than if Africa had gone Islamic, Hindu, or Confucian. Think of Latin America still worshiping the Aztec gods with human sacrifice, or the impact on us if it were an Islamic civilization. We would live in a very different world.

Then again, Africa may still turn Islamic. After all, Islam generally values large families, just like the fast-growing Mormon and Amish religions do. On the other hand, when Muslims become secularized, they reduce the number of their offspring, just like secularized Christians do — hence their accompanying philosophies will suffer the same fate. The result will be that in order to survive in the long term, future generations must be hostile to secularization, and probably hostile to each other’s religious views also (not a pleasant thought, even if they do share many of the same values). Over the next thousand years, in view of the exponential increase in technological power, which viewpoint will win? The answer depends on science, theology, and demographics.

A handful of nominal Christians destroyed the Aztec civilization, not because of their technology (though that helped), but because the Aztec civilization was based on a great and powerful falsehood — that in order for the sun to rise every morning, human blood needed to be shed — thereby earning the hatred of the neighboring tribes whose blood was usually the blood shed. Islam is not as false as the Aztec religion; otherwise it would not have lasted this long. But the jury is still out on whether it can survive the extreme technological advancement that productive nanosystems will bring. Will fanatical Muslims be able to design and build the nanotech equivalent of 747 jets that they can fly into the skyscrapers of their enemies? Or will they just learn how to use it in unexpected and terrorizing ways? Given the high level of technological advancement in the Muslim world a thousand years ago, the answer seems to be “yes” to both questions. However, Islam’s ultimate rejection of reason is its Achilles heel, and in the past it helped lead to the decline of the Ottoman Empire after its peak in the sixteenth century. This is because Islam’s idea of Allah’s absolute transcendence is incompatible with the idea that the universe is ordered and knowable. Psychologically, the problem is that if the universe is not ordered and knowable, then why bother learning and doing science?

Meanwhile, Hinduism has many competing gods, and this leads (as in ordinary paganism) to a rejection of the logical principle of non-contradiction, without which science is impossible. Confucianism seems to be more a moral code than a religious one, so it could presumably accommodate technology — but that didn’t help its practitioners develop it before they collided with the West. Similarly with Buddhism. Meanwhile, the decadent West’s deconstructionism and nihilism gnaw at their parent’s roots, rejecting reason and science as merely tools of power.

It can be claimed that religious views will keep changing and splitting into new orthodoxies. In that case, the result will be an ever-shifting field of populations and sub-populations, with none winning out completely over the others. But as far as I can tell, none of Judaism, Catholicism, Buddhism, or Islam has changed any of its core beliefs in the past few millennia. In contrast, the Mormons have changed a number of their major doctrines, and so have the Protestants. This does not bode well for their long-term survival as coherent organizations, though the Mormons do have their high fertility on their side.

At the moment, the whole world is copying the Christian-rooted West, as many of its scientific elite are educated in Europe and the United States. It is difficult to say to what extent they understand the philosophical underpinnings of science. When their own universities start to educate their elite, their cultural assumptions will probably displace the Judeo-Christian/Greek philosophy of the West. Then what? It depends on whether science, which is the foundation of technological superiority, is simply a cargo cult that happens to work. My claim is that science will only continue working for more than a generation or two if its underlying assumptions come with it: that the universe is both ordered and knowable.

These Judeo-Christian assumptions are huge — though atheists, agnostics, and (maybe) Muslims and Buddhists should also be able to accept them. However, every scientist still faces the question of why the universe is ordered and knowable (and if you’re not constantly asking the next question, especially the “why” question, then you’re not a very good scientist). The theistic answer of design by a creator[20] is not too far away from the assumption of an ordered and knowable universe, and from there, one begins skating very close to the concept that we are made “Imago Dei”, in God’s image. Some people think that there is too much hubris and ego in that belief, but you don’t see dolphins landing on the Moon, or chimpanzees creating great symphonies (or even bad rap).

“Imago Dei” is the most logical conclusion once we can explain why the universe is predictable and knowable. And being made in God’s image has other implications, especially in terms of our role in this universe. Most notably, it promotes the idea of human beings as powerful stewards of creation, as opposed to subservient subjects of Mother Nature, and it will pit Nietzschean Transhumanists and Traditional Catholics against Gaian environmentalists and National Park Rangers.


Television

Writing has been around for thousands of years, while the printing press has been around for almost 600. It would seem that the printing press was the one invention that, more than anything else, enabled the development of all subsequent inventions. Television could be considered an improvement over writing: because large amounts of video leave somewhat less room for interpretation than an equal effort put into text, our descendants might get a better, more complete depiction of history than they could get from text or physical artifacts alone. However, the television that Joel mentioned was controlled by the big three television networks, because the cost of entry was so high (currently from $200,000 to $13 million per episode). So the role of television in the 1960s was similar to the role of books in medieval Europe, where the cost of a single book could equal the yearly salary of a well-educated person. For this reason, Joel’s headline will not be considered significant, though he was close.

He was close because television’s electronic video display offspring, the computer — especially when connected to form the Internet — will certainly be significant. It will be as significant as the nuclear bomb and the Pill combined, if and when Moore’s Law ushers in the Singularity. But Joel was writing a song, not engaging in future studies. We might as well criticize him for not mentioning the coining of the word “nanotechnology”.


Moonshot

A few of Billy Joel’s headlines may be remembered 1,000 years from now, but none mentioned so far will really be significant.

I would go out on a limb and say that other than the scientific and industrial revolutions, the American Constitution, and the virtual abolition of slavery, little of consequence has happened in the last thousand years. There is, however, one significant event that happened in the 1400s. No, it’s not Spain expelling the Muslims. It’s not even Admiral Zheng He, commander of China’s famed Dragon Fleet, sailing to Africa in the 1420s, though we’re getting warmer. As impressive as they were, Zheng’s voyages did not change the world. What did change the world was the tiny fleet of three ships that sailed to the New World in 1492 and returned to Spain with the news.

Apollo and Star Trek both pointed to the next and final frontier. It is true that little has happened in the American space program since Apollo, and with the retirement of the 1960s-designed Space Shuttle, even less is expected. This poor showing has occurred because the moon shot, as awe-inspiring as it was, was a political stunt funded for political reasons. The problem is that it didn’t pay for itself, and we therefore have a dismal space program. However, with communication, weather, and GPS satellites, we have a huge space industry. It’s all about the value added.

On the other hand, it’s the governmental space programs that seem to make the initial (and necessary) investments in the basic technology. More importantly, these programs give voice to that which makes us human — “to look at the stars and wonder”.[21]

Realistically, looking at the historical records of Jamestown and Salt Lake City, space development will occur when prosperous upper-class families can sell their homes and businesses to buy a one-way ticket and homesteading tools. In today’s money, that would be about one or two million dollars. We have a long way to go to achieve that price break, though it helps that Moore’s Law is exponential.

Only a dozen men have walked on the Moon so far, but Neil Armstrong will be remembered longer than anyone else from this millennium. After the human race has spread throughout the solar system, and after it starts heading for the stars, everyone will remember who took the first small step. The importance of this step will become obvious after the Google Lunar X Prize is won, and after Elon Musk and his imitators demonstrate conclusively that we are no longer in a zero-sum game.

That is something to look forward to.

Tihamer Toth-Fejel is a Research Engineer at Novii Systems.


Acknowledgments

Many thanks to Andrew Balet, Bill Bogen, Tim Wright, and Ted Reynolds for their significant contributions to this column.


Footnotes

1. Tihamer Toth-Fejel, The Politics and Ethics of the Hall Weather Machine, https://lifeboat.com/blog/2010/09/the-politics-and-ethics-of…er-machine and http://www.nanotech-now.com/columns/?article=486
2. Michael Flynn, Washer at the Ford, Analog, v109 #6 & 7, June & July 1989.
3. Arthur Kantrowitz, The Weapon of Openness, http://www.foresight.org/Updates/Background4.html
4. United States Signals Intelligence Directive 18, 27 July 1993, http://cryptome.org/nsa-ussid18.htm
5. e.g. Mexico, India, Saudi Arabia, and Russia http://www.forbes.com/lists/2010/10/billionaires-2010_The-Wo…_Rank.html Also, the petro-dollar millionaires in the Mideast http://www.aneki.com/millionaire_density.html
6. There is an interesting discussion at http://en.wikipedia.org/wiki/Debate_over_the_atomic_bombings…d_Nagasaki
7. David Brin, The Transparent Society, Basic Books (1999). For a quick introduction, see The Transparent Society and Other Articles about Transparency and Privacy, http://www.davidbrin.com/transparent.htm.
8. Tihamer Toth-Fejel, Population Control, Molecular Nanotechnology, and the High Frontier, The Assembler, Volume 5, Number 1 & 2, 1997 http://www.islandone.org/MMSG/9701_05.html#_Toc394339700
9. Larry Niven and Jerry Pournelle, Oath of Fealty, New York: Pocket Books, 1982
10. KIDS COUNT Indicator Brief, Reducing the Teen Birth Rate, July 2009. http://www.aecf.org/~/media/Pubs/Initiatives/KIDS%20COUNT/K/…0brief.pdf
11. From a small group of just four members in the 1977 Knesset, they gradually increased their representation to 22 (out of 120) in 1999 (http://en.wikipedia.org/wiki/Haredi_Judaism). The fertility rate for ultra-Orthodox mothers greatly exceeds that of the Israeli Jewish population at large, averaging 6.5 children per mother in the ultra-Orthodox community compared to 2.6 among Israeli Jews overall (http://www.forward.com/articles/7641/ ).
12. As of mid-2001, the Governor of Utah, and all of its Federal senators, representatives and members of the Supreme Court are all Mormon. http://www.religioustolerance.org/lds_hist1.htm
13. Julia A. Ericksen; Eugene P. Ericksen, John A. Hostetler, Gertrude E. Huntington. “Fertility Patterns and Trends among the Old Order Amish”. Population Studies (33): 255–76 (July 1979).
14. 1.1 Million Homeschooled Students in the United States in 2003. http://nces.ed.gov/nhes/homeschool/
15. HOMESCHOOLING: Prosecution is waged abroad; troubling trends abound in US http://www.bpnews.net/BPnews.asp?ID=34699
16. http://timpanogos.wordpress.com/2010/02/26/quote-of-the-mome…speak-out/
17. http://www.patentex.com/about_contraception/journey.php
18. I should note that almost all of the people I have personally known from these two religions are trustworthy, intelligent, and a pleasure to meet, despite what they are taught in their sacred texts.
19. Ira Levin, The Boys from Brazil, Dell (1977)
20. There are many questions to follow. How did He do it? Why is He masculine? Why did He do it? How do we know? That last question is especially relevant.
21. Guy J. Consolmagno, Brother Astronomer: Adventures of a Vatican Scientist, McGraw-Hill (2001)

It would be helpful to discuss these theoretical concepts because there could be significant practical and existential implications.

The Global Brain (GB) is an emergent world-wide entity of distributed intelligence, facilitated by communication and the meaningful interconnections between millions of humans via technology (such as the internet).

For my purposes I take it to mean the expressive integration of all (or the majority) of human brains through technology and communication, a Metasystem Transition from the human brain to a global (Earth) brain. The GB is truly global not only in geographical terms but also in function.

It has been suggested that the GB has clear analogies with the human brain. For example, the basic unit of the human brain (HB) is the neuron, whereas the basic unit of the GB is the human brain. Whilst the HB is space-restricted within our cranium, the GB is constrained within this planet. The HB contains several regions that have specific functions themselves, but are also connected to the whole (e.g. occipital cortex for vision, temporal cortex for auditory function, thalamus etc.). The GB contains several regions that have specific functions themselves, but are connected to the whole (e.g. search engines, governments, etc.).

Some specific analogies are:

1. Broca’s area in the inferior frontal gyrus, associated with speech. This could be the equivalent of, say, Rupert Murdoch’s communication empire.
2. The motor cortex is the equivalent of the world-wide railway system.
3. The sensory system in the brain is the equivalent of all digital sensors, CCTV network, internet uploading facilities etc.

If we accept that the GB will eventually become fully operational (and this may happen within the next 40–50 years), then there could be profound repercussions for human evolution. Apart from the fact that we may be able to change our genetic make-up using technology (through synthetic biology or nanotechnology, for example), there could be new evolutionary pressures that help extend human lifespan to an indefinite degree.

Empirically, we find that there is a basic underlying law that allows neurons the same lifespan as their human host. If natural laws are universal, then I would expect the same law to operate in similar metasystems, i.e. in my analogy with humans being the basic operating units of the GB. In that case, I ask:

If there is an axiom positing that individual units (neurons) within a brain must live as long as the brain itself, i.e. 100–120 years, then the individual units (human brains and, therefore, whole humans) within a GB must live as long as the GB itself, i.e. indefinitely.

Humans will become so embedded and integrated into the GB’s virtual and real structures, that it may make more sense from the allocation of resources point of view, to maintain existing humans indefinitely, rather than eliminate them through ageing and create new ones, who would then need extra resources in order to re-integrate themselves into the GB.

The net result will be that humans will start experiencing an unprecedented prolongation of their lifespan, in an attempt by the GB to evolve to higher levels of complexity at a low thermodynamic cost.

Marios Kyriazis
http://www.elpistheory.info

This is an email to the Linux kernel mailing list, but it relates to futurism topics so I post a copy here as well.
———
Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the LKML;

If we were already talking to our computers, etc. as we should be, I wouldn’t feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. This army your kernel enables has millions of people, but they often lose to smaller proprietary armies, because they are working inefficiently. My mail one year ago (http://keithcu.com/wordpress/?p=272) listed the biggest work items, but I realize now I should have focused on one. In a sentence, I have discovered that we need GC lingua franca(s). (http://www.merriam-webster.com/dictionary/lingua%20franca)

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM’s Jeopardy-playing Watson is proprietary, like Deep Blue was. This topic is not discussed in any of the news articles, as if the license does not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows, scribbled secrets clutched in their fists, working together, for any of them to succeed. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers. Windows is not the biggest problem; it is the proprietary licensing model that has infected computing and science.

There is, unsurprisingly, a consensus among kernel programmers that usermode is “a mess” today, which suggests there is a flaw in the Linux desktop programming paradigm. Consider the vast cosmic expanse of XML libraries in a Linux distribution. Like computer vision (http://www.cs.cmu.edu/~cil/v-source.html), there are not yet clear places for knowledge to accumulate. It is a shame that the kernel is so far ahead of most of the rest of user mode.

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++ down to the matrix class. Because C/C++ didn’t define a matrix, nor provide code, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O, and graphics, than to integrate OpenCV. Getting productive in that codebase is months of work and people want to see results before then. Building it is a chore, and they have lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but they would be better delegating that out to others. They are now doing CUDA optimizations they could get from elsewhere. They also have 3 Python wrappers and several other wrappers as well; many groups spend more time working on wrappers than the underlying code. Using the wrappers is fine if you only want to call the software, but if you want to improve OpenCV then the programming environment instantly becomes radically different and more complicated.

There is a team working on Strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly as they do not have a constant stream of demos. They don’t see that their codebase is a small core of world-changing ideas buried in engineering baggage like the STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community. Some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV’s core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot OS), according to its documentation, is a re-implementation of RPC in C++, which is not what robotics was missing. I’ve emailed various groups, and all know of GC, but they are afraid of any decrease in performance, and they do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive, but we managed.

C/C++ makes it harder to share code amongst disparate scientists than a GC language. It doesn’t matter if there are lots of XML parsers or RSS readers, but it does matter if we don’t have an official computer vision codebase. This is not against any codebase or language, only for free software lingua franca(s) in certain places to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still gives us an uncountably infinite number of things to disagree over.

Because the kernel is written in C, you’ve strongly influenced the rest of the community. C is fully acceptable for a mature kernel like Linux, but many concepts aren’t so clear in user mode. What is the UI of OpenOffice when speech input is the primary means of control? Many scientists don’t understand the difference between the stack and the heap. Software isn’t buildable if those with the necessary expertise can’t use the tools they are given.

C is a flawed language for user mode because it is missing GC, invented a decade earlier, and C++ added as much as it took away, because each feature came with an added cost of complexity. C++ compilers converting to C was a good idea, but being a superset was not. C/C++ never died in user mode because there are now so many GC replacements that the choice itself paralyzes many into inaction; there seems no clear place to go. Microsoft doesn’t have this confusion, as their language, as of 2001, is C#. Microsoft is steadily moving to C#, but it is 10x easier to port a codebase like MySQL than SQL Server, which has an operating system inside. C# is taking over at the edges first, where innovation happens anyway. There is a competitive aspect to this.

Lots of free software technologies have multiple C/C++ implementations, because it is often easier to re-write than to share, plus an implementation in each GC language. We all might not agree on the solution, so let’s start by agreeing on the problem. A good example for GC is how a Mac port can go from weeks to hours. GC also prevents use-after-free and double-free bugs, and therefore user code is less likely to corrupt memory. If everyone in user mode were still writing in assembly language, you would obviously be concerned. If Git had been built in 98% Python and 2% C, it would have become easier to use faster, found ways to speed up Python, and set a good example. It doesn’t matter now, but it was an opportunity in 2005.

You can “leak” memory in GC, but that just means that you are still holding a reference. GC requires the system to have a fuller understanding of the code, which enables features like reflection. It is helpful to consider that GC is as big a step up for programming as C was over assembly language. In Lisp the binary was the source code; Lisp is free by default. The Baby Boomer generation didn’t bring the tradition of science to computers, and the biggest question about this generation’s legacy is whether we remember it. Boomers gave us proprietary software, C, C++, Java, and the bankrupt welfare state. Lisp and GC were created (or discovered) by John McCarthy, a mathematician of the WW II greatest generation. He wrote that computers of 1974 were fast enough to do Strong AI. There were plenty of people working on it back then, but not in a group big enough to achieve critical mass. If they had, we’d know their names. If our scientists had been working together in free software and Lisp in 1959, the technology we would have developed by today would seem magical to us. The good news is that we have more scientists than we need.
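The point that a GC “leak” is just a live reference can be illustrated in a few lines of Python (chosen only because this letter later suggests the Python family as a candidate lingua franca; the `Frame` class is a made-up stand-in for any heap object):

```python
import gc
import weakref

class Frame:
    """A stand-in for some heap-allocated object."""
    pass

obj = Frame()
tracker = weakref.ref(obj)    # observes the object without keeping it alive

cache = [obj]                 # a forgotten reference: the "leak"
del obj
gc.collect()
assert tracker() is not None  # still alive, because something still points at it

cache.clear()                 # drop the last reference
gc.collect()
assert tracker() is None      # the collector reclaimed it; no dangling pointer,
                              # no double free, no use-after-free is possible
```

The same discipline in C requires the programmer to track every owner of the pointer by hand, which is exactly the class of bugs GC removes.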

There are a number of good languages, and it doesn’t matter too much which one is chosen, but it seems the Python family (Cython / PyPy) requires the least amount of work to get what we need, as it has the most extensive libraries: http://scipy.org/Topical_Software. I don’t argue that the Python language and implementation are perfect, only good enough, just as the shapes of the letters of the English alphabet are good enough. Choosing and agreeing on a lingua franca will increase the results for the same amount of effort. No one has to understand the big picture; they just have to do their work in a place where knowledge can easily accumulate. A GC lingua franca isn’t a silver bullet, but it is the bottom piece of a solid science foundation and a powerful form of social engineering.

The most important thing is to get lingua franca(s) in key fields like computer vision and Strong AI. However, we should also consider a lingua franca for the Linux desktop. This will help, but not solve, the situation of the mass of Linux apps feeling dis-integrated. The Linux desktop is a lot harder because code here is 100x bigger than computer vision, and there is a lot of C/C++ in FOSS user mode today. In fact it seems hopeless to me, and I’m an optimist. It doesn’t matter; every team can move at a different pace. Many groups might not be able to finish a port for 5 years, but agreeing on a goal is more than half of the battle. The little groups can adopt it most quickly.

There are a lot of lurkers around codebases who want to contribute but don’t want to spend months getting up to speed on countless tedious things like learning a new error handling scheme. They would be happy to jump into a port as a way to get into a codebase. Unfortunately, many groups don’t encourage these efforts as they feel so busy. Many think today’s hardware is too slow, and that running any slower would doom the effort; they are impervious to hardware’s steady doublings and forget that algorithm performance matters most. A GC system may add a one-time cost of 5–20%, but it has the potential to be faster, and it gives people more time to work on performance. There are also real-time, incremental, and NUMA-aware collectors. The ultimate in performance is taking advantage of parallelism in specialized hardware like GPUs, and a GC language can handle that because it supports arbitrary bitfields.

Science moves at demographic speed when knowledge is not being reused among the existing scientists. A lingua franca makes more sense as more adopt it. That is why I send this message to the main address of the free software mothership. The kernel provides code and leadership; you have influence and the responsibility to lead the rest, who are like wandering ants. If I were Linus, I would threaten to quit Linux and get people going on AI ;) There are many things you could do. I mostly want to bring this to your attention. Thank you for reading this.

I am posting a copy of this open letter on my blog as well (http://keithcu.com/wordpress/?p=1691). Reading the LKML for more than one week could be classified as torture under the Geneva conventions.

I believe that death due to ageing is not an absolute necessity of human nature. From the evolutionary point of view, we age because nature withholds energy for somatic (bodily) repairs and diverts it to the germ-cells (in order to assure the survival and evolution of the DNA). This is necessary so that the DNA is able to develop and achieve higher complexity.

Although this was a valid scenario until recently, we have now evolved to such a degree that we can use our intellect to achieve further cognitive complexity by manipulating our environment. This makes it unnecessary for the DNA to evolve along the path of natural selection (which is a slow and cumbersome, ‘hit-and-miss’ process), and allows us to develop quickly and more efficiently by using our brain as a means for achieving higher complexity. As a consequence, death through ageing becomes an illogical and unnecessary process. Humans must live much longer than the current lifespan of 80–120 years, in order for a more efficient global evolutionary development to take place.

It is possible to estimate how long the above process will take to mature (see figure below). Consider that the creation of DNA was approximately 2 billion years ago, the formation of a neuron (cell) several million years ago, that of an effective brain (Homo sapiens sapiens) 200,000 years ago, and the establishment of complex societies (Ancient Greece, Rome, China, etc.) thousands of years ago. The time necessary to proceed to the next, more complex step shrinks by a factor of roughly 100. This means that global integration (and thus indefinite lifespans) will be achieved in a matter of decades (and certainly less than a century), starting from the 1960s–1970s (when globalisation in communications, travel and science/technology started to become established). This leaves at most another 50 years before full global integration becomes established.

Each step is associated with a higher level of complexity, and takes a fraction of the time to mature compared to the previous one.

1. DNA (organic life — molecules: billions of years)

2. Neuron (effective cells: millions of years)

3. Brain (complex organisms — Homo sapiens: thousands of years)

4. Society (formation of effective societies: several centuries)

5. Global Integration (formation of a ‘super-thinking entity’: several decades)
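The factor-of-100 contraction behind these five steps can be checked with a few lines of arithmetic. The sketch below (a toy illustration, assuming a 2-billion-year span for the first step) reproduces the rough orders of magnitude listed above:

```python
# Each step is assumed to take ~1/100 the time of the previous one.
steps = ["DNA", "Neuron", "Brain", "Society", "Global Integration"]
durations = [2e9 / 100**i for i in range(len(steps))]  # years per step

for name, years in zip(steps, durations):
    print(f"{name}: ~{years:,.0f} years")
```

The sequence comes out as 2 billion, 20 million, 200,000, and 2,000 years, with the final step at roughly 20 years, i.e. a matter of decades, consistent with the estimate in the text.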

Step number 5 implies that humans who have already developed an advanced state of cognitive complexity and sophistication will transcend the limits of evolution by natural selection, and therefore, by default, must not die through ageing. Their continued life is a necessary requirement of this new type of evolution.

For full details see:

https://acrobat.com/#d=MAgyT1rkdwono-lQL6thBQ

The UK’s Observer just put out a set of predictions for the next 25 years (20 predictions for the next 25 years). I will react to each of them individually. More generally, however, these are the kinds of ideas that get headlines, but they don’t constitute good journalism. Scenario planning should be used in all predictive coverage. It is, to me, the most honest way to admit what we don’t know and to document the uncertainties of the future—the best way to examine big issues through different lenses. Some of these predictions may well come to pass, but many will not. What this article fails to do is inform the reader about the ways the predictions may vary from the best guess, what the possible alternatives may be, and where they simply don’t know.

1. Geopolitics: ‘Rivals will take greater risks against the US’

This is a pretty non-predictive prediction. America’s rivals are already challenging its monetary policy, human rights stances, shipping channels and trade policies. The article states that the US will remain the world’s major power. It does not suggest that globalization could fracture the world so much that regional powers huddle against the US in various places, essentially creating stagnation and a new localism that causes us to reinvent all economies. It also does not foresee anyone acting on water rights, food, energy or nuclear proliferation. Any of those could set off major conflicts that completely disrupt our economic and political models, leading to major resets in assumptions about the future.

2. The UK economy: ‘The popular revolt against bankers will become impossible to resist’

British banks will not fall without taking much of the world financial systems with them. I like the idea of the reinvention of financial systems, though I think it is far too early to predict their shape. Banking is a major force that will evolve in emergent ways. For scenario planners, the uncertainty is about the fate of existing financial systems. Planners would do well to imagine multiple ways the institution of banking will reshape itself, not prematurely bet on any one outcome.

3. Global development: ‘A vaccine will rid the world of AIDS’

We can only hope so. Investment is high, but AIDS is not the major cause of death in the world. Other infectious and parasitic diseases still outstrip HIV/AIDS by a large margin, while cardiovascular diseases and cancer eclipse even those. So it is great to predict the end of one disease, but the prediction seems rather arbitrary. I think it would be more advantageous to rate various research programs against potential outcomes over the next 25 years and look at the impact of curing those diseases on different parts of the world. If we tackle, for instance, HIV/AIDS and malaria and diarrheal diseases, what would that do to increase the safety of people in Africa and Asia? What would the economic and political ramifications be? We also have to consider the cost of the cure and the cost of its distribution. Low-cost solutions that can easily be distributed will have higher impact than higher-cost solutions that limit access (as we have with current HIV/AIDS treatments). I think we will see multiple breakthroughs over the next 25 years, and we would do well to imagine the implications of sets of those, not focus on just one.

4. Energy: ‘Returning to a world that relies on muscle power is not an option’

For futurists, any suggestion that the world moves in reverse is anathema. For scenario planners, we know that great powers have devolved over the last 2,000 years, and there is no reason that some political, technological or environmental issue might not arise that would cause our global reality to reset itself in significant ways. I think it is naïve to say we won’t return to muscle power. In fact, the failure to meet global demand for energy and food may actually move us toward a more local view of energy and food production, one that is less automated and scalable. One of the reasons we have predictions like this is that we haven’t yet envisioned a language for sustainable economics that allows people to talk about the world outside the bounds of industrial-age, scale-level terms. It may well be our penchant for holding on to industrial-age models that drives us to the brink. Rather than continuing to figure out how to scale up the world, perhaps we should be thinking about ways to slow it down, restructure it and create models that are sustainable over long periods of time. The green movement is just political window dressing for what is really a more fundamental need to seek sustainability in all aspects of life, and that starts with how we measure success.

5. Advertising: ‘All sorts of things will just be sold in plain packages’

This is just a sort of random prediction that doesn’t seem to mean much even if it happens. I’m not sure the state will control what is advertised, or whether people will care how their stuff is packaged. In point 4, above, I outline more important issues that would cause us to rethink our overall consumer mentality. If that happens, we may well see a world where advertising is irrelevant—completely irrelevant. Let’s see how Madison Avenue plans for its demise (or its new role…) in a sustainable knowledge economy.

6. Neuroscience: ‘We’ll be able to plug information streams directly into the cortex’

This is already possible on a small scale. We have seen hardware interfaces with bugs and birds. The question is whether it will be a novelty, a major medical tool, something commonplace and accessible, or something seen as dangerous, shunned by citizen regulators worried about giving up their humanity and banned by governments who can’t imagine governing the overly connected. Just because we can doesn’t mean we will, or that we should. I certainly think we may see a singularity over the next 25 years in hardware, where machines match human computational power, but I think software will greatly lag hardware. We may be able to connect, but we will do so only at rudimentary levels. On the other hand, a new paradigm for software could evolve that would let machines match us thought for thought. I put that in the black swan category. I am on constant watch for a software genius who will make Gates and Zuckerberg look like quaint 18th-century industrialists. The next revolution in software could come from a few potential paths; here are two: removing the barriers to entry that the software industry has created and returning to more accessible computing for the masses (where they develop applications, not just consume content), or a breakthrough in distributed, parallel processing that evolves the ability to match human pattern-recognition capabilities, even if the approach appears alien to its inventors. We will have a true artificial intelligence only when we no longer understand the engineering behind its abilities.

7. Physics: ‘Within a decade, we’ll know what dark matter is’

Maybe, but we may also find that dark matter, like the “ether”, is just a conceptual plug-in for an incomplete model of the universe. I suppose saying that it is a conceptual plug-in for an incomplete model would itself be an explanation of what it is, so this is one of those predictions that can’t lose. Another perspective: dark matter matters, and not only will we understand what it is, but also what it means, and it will change our fundamental view of physics in a way that helps us look at matter and energy through a new lens, one that may help fuel a revolution in energy production and consumption.

8. Food: ‘Russia will become a global food superpower’

Really? Well, this presumes some commercial normality for Russia, along with its maintaining a risk-taking propensity to remove the safeties from technology. If Russia becomes politically stable and economically safe (you can go there without fear for your personal or economic life), then perhaps. I think, however, that this prediction is too finite and pointed. We could well see the US, China (or other parts of Asia) or even a terraformed Africa become the major food supplier—through biotechnology, perhaps, or new forms of distributed farming. The answer may not be hub-and-spoke, but distributed. We may find our own center locally as the costs of moving food around the world outweigh the industrial efficiency of its production. It may prove healthier and more efficient to forgo the abundant variety we have become accustomed to (in some parts of the world), to see food again as nutrition, and to share the lessons of efficient local production with an increasingly water-starved world.

9. Nanotechnology: ‘Privacy will be a quaint obsession’

I don’t get the link between nanotechnology and privacy. It is mentioned once in the narrative, but not in an explanatory way. As a purely hardware technology, it will threaten health (nano-pollutants) and improve health (cellular-level, molecular-level repairs). The bigger issue with nanotechnology is its computational model. If nanotechnology includes the procreation and evolution of artificial things, then we are faced with the difficult challenge of trying to imagine how something will evolve that we have never seen before, and that has never existed in nature. The interplay between nature and nanotechnology will be fascinating and perhaps frightening. Our privacy may be challenged by culture and by software, but I seriously doubt that nanotechnology will be the key to decrypting our banking system (though it could play a role). Nanotechnology is more likely to be a black swan full of surprises that we can’t even begin to imagine today.

10. Gaming: ‘We’ll play games to solve problems’

This one is easy. Of course. We always have and we always will. Problem solutions are games to those who find passion in different problem sets. The difference between a game and a chore is perspective, not the task itself. For a mathematician, solving a quadratic equation is a game. For a literature major, that same equation may be seen as a chore. Taken to the next level, gaming may become a new way to engage with work. We often engineer fun out of work, and that is a shame. We should engineer work experiences to include fun as part of the experience (see my new book, Management by Design), and I don’t mean morale events. If you don’t enjoy your “work” then you will be dissatisfied no matter how much you are paid. Thinking about work as a game, as Ender (Ender’s Game, Orson Scott Card) did, changes the relationship between work and life. Ender, however, found out that when you get too far removed from reality, you may find your moral compass misaligned.

11. Web/internet: ‘Quantum computing is the future’

Quantum computing, like nanotechnology, will change fundamental rules, so it is hard to predict its outcomes. We will do better to closely monitor developments than to spend time over-speculating on outcomes that are probably unimaginable. It is better to accept that there are things in the future that are unimaginable now, and to practice dealing with the unimaginable as an idea, than to frustrate ourselves by trying to predict those outcomes. Imagine wicked-fast computers—it doesn’t really matter whether they are quantum or not. Imagine machines that can decrypt anything really quickly using traditional methods, and that create new encryptions that they can’t solve themselves.

On a more mundane note, the issues of net neutrality may play out so that those who pay more get more, though I suspect that will be uneven and change at the whim of politics. What I find curious is that this prediction says nothing about the alternative Internet (see my post Pirates Pine for Alternative Internet on Internet Evolution). I think we should also plan for very different information models and more data-centric interaction—in other words, we may find ourselves talking to data rather than servers in the future.

I’m not sure the next Internet will come from Waterloo, Ontario and its physicists; more likely it will come from random acts of assertion by smart, tech-savvy idealists who want to take back our intellectual backbone from advertisers and cable companies.

One black swan this prediction fails to account for is the possibility of a loss of trust in the Internet altogether if it is hacked or otherwise compromised (by a virus, or made unstable by an attack on power grids or network routers). Cloud computing is based on trust. Microsoft and Google recently touted the uptime of their business offerings (Microsoft: BPOS Components Average 99.9-plus Percent Uptime). If some nefarious group takes that as a challenge (or sees the integrity of banking transactions as a challenge), we could see widespread distrust of the Net and the Cloud, and a rapid return to closed, proprietary, non-homogeneous systems that confound hackers by their variety as much as they confound those who operate them.

12. Fashion: ‘Technology creates smarter clothes’

A model on the catwalk during the Gareth Pugh show at London Fashion Week in 2008. Photograph: Leon Neal/AFP/Getty Images

Smarter perhaps, though judging from the picture above, not necessarily fashion-forward. I think we will see technology integrated with what we wear, and I think smart materials will also redefine other aspects of our lives and create a new manufacturing industry, even in places where manufacturing has been displaced. In the US, for instance, smart materials will not require retrofitting legacy manufacturing facilities, but will require the creation of entirely new facilities that can incorporate design and sustainability from the outset. However, smart clothes, other uses of smart materials and personal technology integration all require a continued positive connection between people and technology. That connection looks positive, but we may be blind to technology push-backs, even rebellions, fostered by current events like the jobless recovery.

13. Nature: ‘We’ll redefine the wild’

I like this one and think it is inevitable, but I also think it is a rather easy prediction to make. It is less easy to see all the ways nature could be redefined. Professor Mace predicts managed protected areas and a continued loss of biodiversity. I think we are at a transition point, and 25 years isn’t enough time to see its conclusion. The rapid mixing of “invasive” species with indigenous species creates not just displacement, but an opportunity for the re-creation of environments (read: evolution). We have to remember that historically the areas we are trying to protect were very different in the past than they are in our rather short collective memories. We are trying to protect a moment in history for human nostalgia. The changes in the environment presage other changes that may well take place after we have gone. Come to Earth 1,000 years from now and we may be hard-pressed to find anything that is as we experience it today. The general landscape may appear the same at the highest level of fractal magnification, but zoom in and you will find the details have shifted as much as the forests of Europe or the nesting grounds of the Dodo have changed over the last 1,000 years.

14. Architecture: What constitutes a ‘city’ will change

I like this prediction because it runs the gamut from distribution of power to returning to caves. It actually represents the idea using scenario thinking. I will keep this brief because Rowan Moore gets it when he writes: “To be optimistic, the human genius for inventing social structures will mean that new forms of settlement we can’t quite imagine will begin to emerge.”

15. Sport: ‘Broadcasts will use holograms’

I guess in a sustainable knowledge economy we will still have sport. I hope we figure out how to monitor the progress of our favorite teams without the creation and collection of non-biodegradable artifacts like Styrofoam number one hands and collectable beverage containers.

As for sport itself, it will be an early adopter of any new broadcast technology. I’m not sure holograms in their traditional sense will be one, however. I’m guessing we figure out 3-D with a lot less technology than holograms require.

I challenge Mr. Lee’s statements on the acceptance of performance-enhancing drugs: “I don’t think we’ll see acceptance as the trend has been towards zero tolerance and long may it remain so.” I think it is just as likely that we start seeing performance enhancement as OK, given the wide proliferation of AD/HD drugs being prescribed, as well as those being used off label for mental enhancement—not to mention the accepted use of drugs by the military (see Troops need to remember, New Scientist, 09 December 2010). I think we may well see an asterisk in the record books a decade or so from now that says, “at this point we realized sport was entertainment, and allowed the use of drugs, prosthetics and other enhancements that increased performance and entertainment value.”

16. Transport: ‘There will be more automated cars’

Yes, if we still have cars, they will likely be more automated. And in a decade, we will likely still see cars, but we may be at the transition point for the adoption of a sustainable knowledge economy where cars start to look arcane. We will see continued tension between the old industrial sectors typified by automobile manufacturers and oil exploration and refining companies, and the technology and healthcare firms that see value and profits in more local ways of staying connected and ways to move that don’t involve internal combustion engines (or electric ones for that matter).

17. Health: ‘We’ll feel less healthy’

Maybe, as Mulgan points out, healthcare isn’t radical, but people can be radical. These uncertainties around health could come down to personal choice. We may find millions of excuses for not taking care of ourselves and then place the burden of our unhealthy lifestyles at the feet of the public sector, or we may figure out that we are part of the sustainable equation as well. The latter would transform healthcare. Some of the arguments above about distribution and localism may also challenge monolithic hospitals to become more distributed, as we are seeing with the rise of community-based clinics in the US and Europe. Management of healthcare may remain centralized, but delivery may become more decentralized. Of course, if economies continue to teeter, the state will assert itself and keep everything close and in as few buildings as possible.

As for electronic records, it will be the value to the end user that drives adoption. As soon as patients believe they need an electronic healthcare record as much as they need a little blue pill, we will see the adoption of the healthcare record. Until then, let the professionals do whatever they need to do to service me—the less I know the better. In a sustainable knowledge economy though, I will run my own analytics and use the results to inform my choices and actions. Perhaps we need healthcare analytics companies to start advertising to consumers as much as pharmaceutical companies currently do.

18. Religion: ‘Secularists will flatter to deceive’

I think religion may well see traditions fall, new forms emerge and fundamentalists dig in their heels. Religion offers social benefits that will be augmented by social media—religion acts as a pervasive and public filter for certain beliefs and cultural norms in a way that other associations do not. Over the next 25 years many of the more progressive religious movements may tap into their social side and reinvent themselves around association of people rather than affiliation with tenets of faith. If, however, any of the dire scenarios come to pass, look for state-asserted use of religion to increase, and for a rising tide of fundamentalism as people try to hold on to what they can of the old ways of doing things.

19. Theatre: ‘Cuts could force a new political fringe’

Theatre has always had an edge, and any new fringe movement is likely to find its manifestation in art, be it theatre, song, poetry or painting. I would have preferred that the idea of art be taken up as a prediction rather than theatre in isolation. If we continue to automate and displace workers, we will need to reassess our general abandonment of the arts as a way of making a living, because creation will be the one thing that can’t be automated. We will need to find ways to pay people for human endeavors, everything from teaching to writing poetry. The fringe may turn out to be the way people stay engaged.

20. Storytelling: ‘Eventually there’ll be a Twitter classic’

Stories are already ubiquitous. We live in stories. Technology has changed our narrative form, not our longing for narrative. The Twitter stream is a narrative channel. I would not, however, anticipate a “Twitter classic” because a classic suggests something lasting. For a “Twitter classic” to occur, the 140-character phrases would need to be extracted from their medium and held someplace beyond the context in which they were created, which would make Twitter just another version of the typewriter or word processor—either that, or Twitter figures out a better mode for persistent retrieval of tweets with associated metadata—in other words, you could query the story out of the Twitter-verse, which is technically quite possible (and may make for some collaborative branching as well). But in the end, Twitter is just a repository for writing, one of many, which doesn’t make this prediction all that concept-shattering.

This post is long enough, so I won’t start listing all of the areas the Guardian failed to tackle, or its internal lack of categorical consistency (e.g., Theatre and storytelling are two sides of the same idea). I hope these observations help you engage more deeply with these ideas and with the future more generally, but most importantly, I hope they help you think about navigating the next 25 years, not relying on prescience from people with no more insight than you and I. The trick with the future is to be nimble, not to be right.


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at the very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second-coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

It’s difficult to parse either eventuality with observant members of the other’s belief system. If you ask a group of technophiles what they think of the idea of the rapture, you will likely be laughed at or drowned in a tidal wave of atheist drool. The very thought of some magical force eviscerating an entire religious population in one eschatological fell swoop might be too much for some science and tech geeks, and medical attention, or at the very least a warehouse-quantity dose of smelling salts, might be in order.

Conversely, to the religiously observant, the notion of the singularity might exist in terms too technical to digest even theoretically, or might represent something entirely dark or sinister that seems to fulfill their own belief system’s end game, a kind of techno-holocaust that reifies their purported faith.

The objective reality of both scenarios will be very different from either envisioned teleology. Reality’s shades of gray have a way of making foolish even the wisest individual’s predictions.

In my personal life, I too believed that the publication of my latest and most ambitious work, explaining the decidedly broad-scope Parent Star Theory, would constitute an end result of significant consequence, much like the popular narrative surrounding the moment of the singularity; that some great finish line had been reached. The truth, however, is that just like the singularity, my own narrative-ized moment was not a precisely secured end, but a distinct moment of beginning, of conception and commitment. Not an arrival but a departure; a bold embarkation with no clear end in sight.

Rather than answers, the coming singularity should provoke additional questions. How do we proceed? Where do we go from here? If the fundamental rules in the calculus of the human equation are changing, then how must we adapt? If the next stage of humanity exists on a post-scarcity planet, what then will be our larger goals, our new quest as a global human force?

Humanity must recognize that the idea of a narrative is indeed useful, so long as that narrative maintains some aspect of open-endedness. We might well need that consequential beginning-middle-end, if only to be reminded that each end most often leads to a new beginning.

Written by Zachary Urbina, Founder, Cozy Dark

Transhumanists are into improvements, and many talk about specific problems, for instance Nick Bostrom. However, Bostrom’s problem statements have been criticized for not necessarily being problems, and I think largely this is why one must consider the problem definition (see step #2 below).

Sometimes people talk about their “solutions” for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but a lot of what you find will be undefined (or not very well defined) problem ideas and solutions.

These proposed solutions often do not attempt to find root causes, or they assume the wrong root cause. And a realistic, complete plan for solving a problem is rare.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically, but also socioculturally. Some problems are more than just “a” problem—they are a dynamic system of problems, and a process for problem solving by itself is not enough. There has to be management, goals, etc., most of which is outside the scope of this article. But first one should know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team
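
The steps above amount to an ordered checklist that a team works through, never skipping ahead. As a loose illustration (the class and its behavior are my own sketch, not part of any formal 8D standard), the process can be modeled like this:

```python
# Minimal sketch of an 8D effort as an ordered checklist.
# The step names follow the list above; everything else is illustrative.
EIGHT_D_STEPS = [
    "Assemble the team",
    "Define the problem",
    "Contain the problem",
    "Root cause analysis",
    "Choose the permanent solution",
    "Implement the solution and verify it",
    "Prevent recurrence",
    "Congratulate the team",
]

class EightDReport:
    def __init__(self, title):
        self.title = title
        self.completed = []  # (step, note) pairs recorded as the team finishes each discipline

    def complete_step(self, note):
        """Record the outcome of the next step; steps must be done in order."""
        step = EIGHT_D_STEPS[len(self.completed)]
        self.completed.append((step, note))
        return step

    def next_step(self):
        """Return the next discipline to work on, or None when all eight are done."""
        remaining = EIGHT_D_STEPS[len(self.completed):]
        return remaining[0] if remaining else None

report = EightDReport("Leaky roof")
report.complete_step("Team: homeowner, roofer, insurer")
report.complete_step("Water drips into the hallway during rain")
print(report.next_step())  # → Contain the problem
```

The point of the sketch is only that 8D is sequential: you cannot choose a permanent solution before root cause analysis, any more than `next_step()` can skip ahead.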

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team; however, even a group of transhumanists out in the wilds of cyberspace will have to defend themselves when people ask, “Why should we care?”

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s:

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally in the real world events are causal. The point of root cause analysis is to trace the causes all the way back for your problem. If you don’t find the origin of the causes, then the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet most important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must always keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Why’s, in which you move down the “ladder of abstraction” by continually asking: “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again until you’ve bottomed out with something that may be a true root cause.
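
The mechanical part of the exercise is simple enough to sketch in a few lines. In this hypothetical example, the causal chain (reusing the dropped-robot story from step 2) is entirely invented for illustration:

```python
# The 5 Whys: walk down a chain of "why?" answers until no deeper
# cause is offered. The causes below are made up for illustration.
def five_whys(symptom, explain, max_depth=5):
    """explain(cause) returns the next deeper cause, or None when exhausted."""
    chain = [symptom]
    for _ in range(max_depth):
        deeper = explain(chain[-1])
        if deeper is None:
            break
        chain.append(deeper)
    return chain  # the last element is the candidate root cause

causes = {
    "Robot broke on impact": "Robot was dropped from a plane",
    "Robot was dropped from a plane": "Customer thought it could fly",
    "Customer thought it could fly": "Marketing material was ambiguous",
}
chain = five_whys("Robot broke on impact", causes.get)
print(chain[-1])  # → Marketing material was ambiguous
```

Note the `max_depth` cap: the “5” in 5 Whys is a guideline, not a law, and the real judgment lies in deciding whether the bottom of the chain is a true root cause or just where your data ran out.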

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa: Use an Ishikawa (fishbone, aka cause-and-effect) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first.
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.
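
As a trivial illustration of one of these tools, a minimal Pareto analysis just sorts causes by frequency and reports the cumulative share, so the team can see which few causes account for most of the gap. The defect categories and counts here are invented:

```python
# Pareto analysis sketch: sort causes by count, then accumulate percentages.
# Defect counts are illustrative, not real data.
defects = {"misaligned part": 57, "bad solder": 23, "scratch": 12, "wrong label": 8}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause}: {count} ({100 * cumulative / total:.0f}% cumulative)")
```

With these numbers, the top two causes cover 80% of the defects, which is the classic Pareto signal that fixing “misaligned part” and “bad solder” first gives the most leverage.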

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that solve the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate or test the solution, if possible. There might be unaccounted-for side effects, either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing it isn’t enough—the team also has to check whether the solution is really working.

For some issues the verification is clear-cut. Other corrective actions have to be evaluated for effectiveness, for instance against a benchmark. Depending on the time scale of the corrective action, the team might need to add various monitors and/or controls to continually make sure the root cause stays squashed.

7. Prevent Recurrence

It’s possible that a process will revert back to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should provide the organization or environment with improvements to processes, procedures, practices, etc. so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.

Image credits:
1. Inception (2010), Warner Bros.
2. Peter Galvin
3. Tom Parnell
4. shalawesome

The Stoic philosophical school shares several ideas with modern attempts at prolonging human lifespan. The Stoics believed in a non-dualistic, deterministic paradigm, where logic and reason formed part of their everyday life. The aim was to attain virtue, taken to mean human excellence.

I have recently described a model specifically referring to indefinite lifespans, where human biological immortality is a necessary and inevitable consequence of natural evolution (for details see www.elpistheory.info and for a comprehensive summary see http://cid-3d83391d98a0f83a.office.live.com/browse.aspx/Immo…=155370157).

This model is based on a deterministic, non-dualistic approach, described by the laws of Chaos theory (dynamical systems), and suggests that, in order to accelerate the natural transition from human evolution by natural selection to a post-Darwinian domain (where indefinite lifespans are the norm), it is necessary to lead a life of constant intellectual stimulation, innovation and avoidance of routine (see http://www.liebertonline.com/doi/abs/10.1089/rej.2005.8.96?journalCode=rej and http://www.liebertonline.com/doi/abs/10.1089/rej.2009.0996), i.e. to seek human virtue (excellence, brilliance, and wisdom, as opposed to mediocrity and routine). The search for intellectual excellence increases neural inputs, which effect epigenetic changes that can up-regulate age-repair mechanisms.

Thus it is possible to reconcile the Stoic ideas with the processes that lead to both technological and developmental Singularities, using approaches that are deeply embedded in human nature and transcend time.

California Dreams Video 1 from IFTF on Vimeo.

INSTITUTE FOR THE FUTURE ANNOUNCES CALIFORNIA DREAMS:
A CALL FOR ENTRIES ON IMAGINING LIFE IN CALIFORNIA IN 2020

Put yourself in the future and show us what a day in your life looks like. Will California keep growing, start conserving, reinvent itself, or collapse? How are you living in this new world? Anyone can enter, anyone can vote; anyone can change the future of California!

California has always been a frontier—a place of change and innovation, reinventing itself time and again. The question is, can California do it again? Today the state is facing some of its toughest challenges. Launching today, IFTF’s California Dreams is a competition with an urgent challenge to recruit citizen visions of the future of California—ideas for what it will be like to live in the state in the next decade—to start creating a new California dream.

California Dreams calls upon the public to look 3–10 years into the future and tell a story about a single day in their own life. Videos, graphical entries, and stories will be accepted until January 15, 2011. Up to five winners will be flown to Palo Alto, California in March to present their ideas and be connected to other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the $3,000 IFTF Roy Amara Prize for Participatory Foresight.

“We want to engage Californians in shaping their lives and communities,” said Marina Gorbis, Executive Director of IFTF. “The California Dreams contest will outline the kinds of questions and dilemmas we need to be analyzing, and provoke people to ask deep questions.”

Entries may come from anyone anywhere and can include, but are not limited to, the following: Urban farming, online games replacing school, a fast food tax, smaller, sustainable housing, rise in immigrant entrepreneurs, mass migration out of state. Participants are challenged to use IFTF’s California Dreaming map as inspiration, and picture themselves in the next decade, whether it be a future of growth, constraint, transformation, or collapse.

The grand prize, called the Roy Amara Prize, is named for IFTF’s long-time president Roy Amara (1925–2007) and is part of a larger program of social impact projects at IFTF honoring his legacy, known as The Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Gina Bianchini, Entrepreneur in Residence, Andreessen Horowitz

Alexandra Carmichael, Research Affiliate, Institute for the Future, Co-Founder, CureTogether, Director, Quantified Self

Bill Cooper, The Urban Water Research Center, UC Irvine

Poppy Davis, Executive Director, EcoFarm

Jesse Dylan, Founder of FreeForm, Founder of Lybba

Marina Gorbis, Executive Director, Institute for the Future

David Hayes-Bautista, Professor of Medicine and Health Services, UCLA School of Public Health

Jessica Jackley, CEO, ProFounder

Xeni Jardin, Partner, Boing Boing, Executive Producer, Boing Boing Video

Jane McGonigal, Director of Game Research and Development, Institute for the Future

Rachel Pike, Clean Tech Analyst, Draper Fisher Jurvetson

Howard Rheingold, Visiting Professor, Stanford / Berkeley, and the Institute of Creative Technologies

Tiffany Shlain, Founder, The Webby Awards
Co-founder International Academy of Digital Arts and Sciences

Larry Smarr
Founding Director, California Institute for Telecommunications and Information Technology (Calit2), Professor, UC San Diego

DETAILS

WHAT: An online competition for visions of the future of California in the next 10 years, along one of four future paths: growth, constraint, transformation, or collapse. Anyone can enter, anyone can vote, anyone can change the future of California.

WHEN: Launch – October 26, 2010
Deadline for entries — January 15, 2011
Winners announced — February 23, 2011
Winners Celebration — 6 – 9 pm March 11, 2011 — open to the public

WHERE: http://californiadreams.org

For more information on the California Dreaming map or to download the pdf, click here.

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume shall be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

  • Extended abstracts (500–1,000 words): 15 January 2011
  • Full essays: (around 7,000 words): 30 September 2011
  • Notifications: 30 February 2012 (tentative)
  • Proofs: 30 April 2012 (tentative)

We aim to get this volume published by the end of 2012.

Purpose of this volume

Central questions

Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions. Essays longer than 15 pages will be proportionally more difficult to fit into the volume. Essays that are three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language that is divorced from speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submissions available for commentary (see below).

(More details)

Thank you for reading this call. Please forward it to individuals who may wish to contribute.

Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University