
“Jobs for every American is doomed to failure because of modern automation and production. We ought to recognize it and create an income-maintenance system so every single American has the dignity and the wherewithal for shelter, basic food, and medical care. I’m talking about welfare for all. Without it, you’re going to have warfare for all.”

This quote from Jerry Brown in 1995 echoes earlier fears that automation would cause mass unemployment and displacement. These fears have not materialized, due to surging economic growth, the ability of the workforce to adjust, and the fact that the extent of automation is largely limited to physical, repetitive tasks. This is beginning to change.

In recent years, before the current recession, automation in already well-established areas continued to deliver productivity improvements. “Robotics and other computer automation have reduced the number of workers on a line. Between 2002 and 2005, the number of auto production workers decreased 8.5 percent while shipments increased 5 percent. Assembly plants now require as little as 15 to 25 labor hours per vehicle.” The result of these productivity gains has been a higher-quality, less expensive product.

As machines become smarter, less repetitive “white collar” jobs will become subject to automation. Change will come so rapidly that the workforce will not be able to adjust, and real opportunities for alternative work will shrink. The earlier fears of mass unemployment will finally be realized. This mass displacement could lay the foundation for civil unrest and a general backlash against technology. The full extent of this change is unlikely to arrive for another generation, given continued strong growth in China and other emerging economies. Regardless of exact timing or mechanism, the transition to full automation has already begun, and microeconomics dictates that it will continue. The choice between an inefficient, expensive human labor force and an efficient, cheap, automated one is clear at the micro level, and that choice will drive the pace of change.

What is needed now is a new economic paradigm: new theories and a new understanding for the coming age. We are not far from a time when automated labor will make it possible to provide every human with a clean, safe place to live, excellent healthcare, and ample food. If it is possible to provide these things as a birthright, without infringing upon the rights of others, then it should be done. As long as we are human, no matter how virtual our world becomes, we will still have basic physical needs, and meeting them should be accessible to everyone, as heirs by heritage to this offspring of our species. The correct lessons must be learned from the failures and shortcomings of both Marxism and Austrian economics. The introduction of something never before possible, an intelligent, omnipresent, and free labor source (free once machines can replicate themselves, from the mine through design and manufacture, without any human input), is a game changer.

Jerry Brown was right. The trick is to not be too far ahead of your time.

Many people think that the issues Lifeboat Foundation is discussing will not be relevant for many decades to come. But recently a major US government agency, the TSA, decided to make life hell for 310 million Americans (and anyone who dares visit the USA) as it reacts to the coming Great Filter.

What is the Great Filter? Basically it is whatever has caused our universe to be dead with no advanced civilizations in it. (An advanced civilization is defined as a civilization advanced enough to be self-sustaining outside its home planet.)

The most likely explanation for this Great Filter is that civilizations eventually develop technologies so powerful that they provide individuals with the means to destroy all life on the planet. Technology has now become powerful enough that the TSA even sees a 3-year-old girl as a threat who might take down a plane, so agents take away her teddy bear and grope her.

Do I agree with the TSA’s actions? No, because they are not risk-based. For example, they recently refused to let a man board a plane even when he stripped down to his underwear that “left nothing to the imagination” as he attempted to prove that he didn’t have a bomb on his body. Instead they arrested him, handcuffed and paraded him through two separate airport terminals in his underwear, stole his phone, and arrested a bystander who filmed the event and stole her camera as well. Obviously the TSA’s actions in this instance did nothing to protect Americans from mad bombers. And such examples are numerous.

But is the TSA in general reacting to real growing threats as the Great Filter approaches? You bet it is. The next 10 years will be interesting. May you live in interesting times.

The Stoic philosophical school shares several ideas with modern attempts at prolonging human lifespan. The Stoics believed in a non-dualistic, deterministic paradigm, where logic and reason formed part of their everyday life. The aim was to attain virtue, taken to mean human excellence.

I have recently described a model specifically referring to indefinite lifespans, where human biological immortality is a necessary and inevitable consequence of natural evolution (for details see www.elpistheory.info and for a comprehensive summary see http://cid-3d83391d98a0f83a.office.live.com/browse.aspx/Immo…=155370157).

This model is based on a deterministic, non-dualistic approach, described by the laws of chaos theory (dynamical systems). It suggests that, in order to accelerate the natural transition from human evolution by natural selection to a post-Darwinian domain (where indefinite lifespans are the norm), it is necessary to lead a life of constant intellectual stimulation, innovation, and avoidance of routine (see http://www.liebertonline.com/doi/abs/10.1089/rej.2005.8.96?journalCode=rej and http://www.liebertonline.com/doi/abs/10.1089/rej.2009.0996), i.e., to seek human virtue (excellence, brilliance, and wisdom, as opposed to mediocrity and routine). The search for intellectual excellence increases neural inputs, which effect epigenetic changes that can up-regulate age-repair mechanisms.

Thus it is possible to reconcile the Stoic ideas with the processes that lead to both technological and developmental Singularities, using approaches that are deeply embedded in human nature and transcend time.

California Dreams Video 1 from IFTF on Vimeo.

INSTITUTE FOR THE FUTURE ANNOUNCES CALIFORNIA DREAMS:
A CALL FOR ENTRIES ON IMAGINING LIFE IN CALIFORNIA IN 2020

Put yourself in the future and show us what a day in your life looks like. Will California keep growing, start conserving, reinvent itself, or collapse? How are you living in this new world? Anyone can enter, anyone can vote, anyone can change the future of California!

California has always been a frontier—a place of change and innovation, reinventing itself time and again. The question is, can California do it again? Today the state is facing some of its toughest challenges. Launching today, IFTF’s California Dreams is a competition with an urgent challenge to recruit citizen visions of the future of California—ideas for what it will be like to live in the state in the next decade—to start creating a new California dream.

California Dreams calls upon the public to look 3–10 years into the future and tell a story about a single day in their own life. Videos, graphical entries, and stories will be accepted until January 15, 2011. Up to five winners will be flown to Palo Alto, California, in March to present their ideas and be connected with other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the $3,000 IFTF Roy Amara Prize for Participatory Foresight.

“We want to engage Californians in shaping their lives and communities,” said Marina Gorbis, Executive Director of IFTF. “The California Dreams contest will outline the kinds of questions and dilemmas we need to be analyzing, and provoke people to ask deep questions.”

Entries may come from anyone, anywhere, and can include, but are not limited to, the following: urban farming, online games replacing school, a fast-food tax, smaller sustainable housing, a rise in immigrant entrepreneurs, mass migration out of state. Participants are challenged to use IFTF’s California Dreaming map as inspiration and picture themselves in the next decade, whether it be a future of growth, constraint, transformation, or collapse.

The grand prize, called the Roy Amara Prize, is named for IFTF’s long-time president Roy Amara (1925–2007) and is part of a larger program of social impact projects at IFTF honoring his legacy, The Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Gina Bianchini, Entrepreneur in Residence, Andreessen Horowitz

Alexandra Carmichael, Research Affiliate, Institute for the Future, Co-Founder, CureTogether, Director, Quantified Self

Bill Cooper, The Urban Water Research Center, UC Irvine

Poppy Davis, Executive Director, EcoFarm

Jesse Dylan, Founder of FreeForm, Founder of Lybba

Marina Gorbis, Executive Director, Institute for the Future

David Hayes-Bautista, Professor of Medicine and Health Services, UCLA School of Public Health

Jessica Jackley, CEO, ProFounder

Xeni Jardin, Partner, Boing Boing, Executive Producer, Boing Boing Video

Jane McGonigal, Director of Game Research and Development, Institute for the Future

Rachel Pike, Clean Tech Analyst, Draper Fisher Jurvetson

Howard Rheingold, Visiting Professor, Stanford / Berkeley, and the Institute of Creative Technologies

Tiffany Shlain, Founder, The Webby Awards, Co-founder, International Academy of Digital Arts and Sciences

Larry Smarr, Founding Director, California Institute for Telecommunications and Information Technology (Calit2), Professor, UC San Diego

DETAILS

WHAT: An online competition for visions of the future of California in the next 10 years, along one of four future paths: growth, constraint, transformation, or collapse. Anyone can enter, anyone can vote, anyone can change the future of California.

WHEN: Launch — October 26, 2010
Deadline for entries — January 15, 2011
Winners announced — February 23, 2011
Winners Celebration — 6 – 9 pm March 11, 2011 — open to the public

WHERE: http://californiadreams.org

For more information on the California Dreaming map or to download the pdf, click here.

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations, and transhumans be ridiculed, or is it the skeptics who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume will be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer such an analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

  • Extended abstracts (500–1,000 words): 15 January 2011
  • Full essays: (around 7,000 words): 30 September 2011
  • Notifications: end of February 2012 (tentative)
  • Proofs: 30 April 2012 (tentative)

We aim to get this volume published by the end of 2012.

Purpose of this volume

Central questions

Extended abstracts are ideally short (3 pages, 500–1,000 words), focused (!), relating directly to specific central questions and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions. Essays longer than 15 pages will be proportionally more difficult to fit into the volume, and essays three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language free of speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).

(More details)

Thank you for reading this call. Please forward it to individuals who may wish to contribute.

Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University

Kevin Kelly concluded a chapter in his new book What Technology Wants with the declaration that if you hate technology, you basically hate yourself.

The rationale is twofold:

1. As many have observed before, technology (and Kelly’s superset, the “technium”) is in many ways the natural successor to biological evolution. In other words, human change now happens primarily through the various symbiotic, feedback-looped systems that make up human culture.

2. It all started with biology, but humans throughout their entire history have defined and been defined by their tools and information technologies. I wrote an essay a few months ago called “What Bruce Campbell Taught Me About Robotics” concerning human co-evolution with tools and the mind’s plastic self-models. And of course there’s the whole co-evolution with or transition to language-based societies.

So if the premise is true that human culture is the result of taking the path of technologies, then to reject technology as a whole would be to reject human culture as it has always been. And if the premise is true that our biological framework is the result of a back-and-forth relationship with tools and information, then you have another reason to say that hating technology is hating yourself (assuming you are human).

In his book, Kelly argues against the noble savage concept. Even though there are many useless implementations of technology, the tech that is good is extremely good and all humans adopt them when they can. Some examples Kelly provides are telephones, antibiotics and other medicines, and…chainsaws. Low-tech villagers continue to swarm to slums of higher-tech cities, not because they are forced, but because they want their children to have better opportunities.

So is the person who actually hates technology a straw man? Certainly people hate certain implementations of technology. Certainly it is OK, and perhaps needed more than ever, to reject useless technology artifacts. But one place where you can definitely find some technology haters is among those afraid of obviously transformative technologies, in other words the ones that purposely and radically alter humans. And these are only “transformative” in an anachronistic sense: if you compare two different periods in history, you can see drastic differences.

Also, although it is perhaps not outright hate in most cases, many people have been infected by the meme that artificial creatures such as robots and/or super-smart computers (and/or super-smart networks of computers) are competition for humans as they exist now. This meme is perhaps more dangerous than any computer could be, because it tries to divorce humans from the technium.

Image credit: whokilledbambi

Dear Ray;

I’ve written a book about the future of software. While writing it, I came to the conclusion that your dates are way off. I talk mostly about free software and Linux, but the argument has implications for how we can get driverless cars and other amazing things faster. I believe that we could have had all the benefits of the singularity years ago if we had done things like start Wikipedia in 1991 instead of 2001. There is no technology in 2001 that we didn’t have in 1991; it was simply a matter of starting an effort that allowed people to work together.

Proprietary software and a lack of cooperation among our software scientists have been terrible for the computer industry and the world, with implications for every aspect of science. Free software is better for the free market than proprietary software, and there are many opportunities for programmers to make money using and writing free software. I often use the analogy that law libraries are filled with millions of freely available documents, and no one claims this has decreased the motivation to become a lawyer. In fact, lawyers would say that it would be impossible to do their job without all of these resources.

My book is a full description of the issues but I’ve also written some posts on this blog, and this is probably the one most relevant for you to read: https://lifeboat.com/blog/2010/06/h-conference-and-faster-singularity

Once you understand this, you can apply your fame towards getting more people to use free software and Python. The reason so many know Linus Torvalds’s name is because he released his code as GPL, which is a license whose viral nature encourages people to work together. Proprietary software makes as much sense as a proprietary Wikipedia.

I would be happy to discuss any of this further.

Regards,

-Keith
—————–
Response from Ray Kurzweil 11/3/2010:

I agree with you that open source software is a vital part of our world, allowing everyone to contribute. Ultimately software will provide everything we need when we can turn software entities into physical products with desktop nanofactories (there is already a vibrant 3D printer industry, and the scale of key features is shrinking by a factor of a hundred in 3D volume each decade). It will also provide the keys to health and greatly extended longevity as we reprogram the outdated software of life. I believe we will achieve the original goals of communism (“from each according to their ability, to each according to their need”), which forced collectivism failed so miserably to achieve. We will do this through a combination of the open source movement and the law of accelerating returns (which states that the price-performance and capacity of all information technologies grows exponentially over time). But proprietary software has an important role to play as well. Why do you think it persists? If open source forms of information met all of our needs, why would people still purchase proprietary forms of information? There is open source music, but people still download music from iTunes, and so on. Ultimately the economy will be dominated by forms of information that have value, and these two sources of information, open source and proprietary, will coexist.
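A quick back-of-the-envelope check on the volume-scaling figure above, sketched in Python (the hundredfold-per-decade number is Kurzweil’s; the conversion to linear feature size is our own arithmetic, added for illustration):

```python
# A 100x shrink in 3D volume per decade implies a cube-root shrink in
# linear feature size: 100 ** (1/3) is roughly 4.6x per decade.
volume_shrink_per_decade = 100.0
linear_shrink_per_decade = volume_shrink_per_decade ** (1.0 / 3.0)
print(f"Linear feature-size shrink per decade: ~{linear_shrink_per_decade:.1f}x")
```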
———
Response back from Keith:
Free versus proprietary isn’t a question of whether only certain things have value. A Linux DVD contains 10 billion dollars’ worth of software. Proprietary software exists for a reason similar to why ignorance and starvation exist: a lack of better systems. The best thing my former employer Microsoft has going for it is ignorance about the benefits of free software. Free software gets better only as more people use it. Proprietary software is an inferior development model and anathema to science, because it hinders people’s ability to work together. It has infected many corporations, and I’ve found that PhDs who work for public institutions often write proprietary software.

Here is a paragraph from my writings I will copy here:

I start the AI chapter of my book with the following question: Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which one will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

We’ve known approximately what a neural network should look like for many decades. We need “places” for people to work together to hash out the details. A free software repository provides such a place. We need free software, and for people to work in “official” free software repositories.
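As a minimal sketch of the kind of small, shareable building block meant here (an illustration only, not an excerpt from any existing codebase), a tiny feed-forward neural network fits in a few lines of plain Python:

```python
import math
import random

def sigmoid(x):
    # Classic squashing activation function.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # One hidden layer: each hidden unit computes a weighted sum of the inputs.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Single output unit: a weighted sum of the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

random.seed(0)
n_inputs, n_hidden = 3, 4
hidden_weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                  for _ in range(n_hidden)]
output_weights = [random.uniform(-1, 1) for _ in range(n_hidden)]
print(forward([0.5, -0.2, 0.8], hidden_weights, output_weights))
```

The hard part is not sketching the structure; it is hashing out the details (training, architectures, scaling) in one place, which is exactly what a shared repository would provide.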

“Open source forms of information,” I have found, is a separate topic from the software issue. Software always reads, modifies, and writes data: state that lives beyond the execution of the software. There can be an interesting discussion about the licenses of that data, but movies and music aren’t science, so the question doesn’t matter for most of them. Someone can only sell or give away a song after the software is written and on their computer in the first place. Some of this content can be free and some can be protected, and that is an interesting question, but mostly it is a separate topic. The important things to share are scientific knowledge and software.

It is true that software always needs data to be useful: configuration parameters, test files, documentation, etc. A computer vision engine will have lots of data, even though most of it is used only for testing and little is used at runtime. (Perhaps it has learned the letters of the alphabet, state which it caches between executions.) Software begets data, and data begets software; people write code to analyze the Wikipedia corpus. But you can’t truly have a discussion about sharing information unless you have a shared codebase in the first place.
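A minimal sketch of that caching pattern (the filename and the “learned” letter counts are hypothetical, chosen only to illustrate state that outlives a single run):

```python
import json
import os

CACHE_FILE = "letter_counts.json"  # hypothetical cache of learned state

def load_counts():
    # Reload whatever a previous execution learned, if anything.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return {}

def learn(text, counts):
    # "Learn" letter frequencies from new input.
    for ch in text.lower():
        if ch.isalpha():
            counts[ch] = counts.get(ch, 0) + 1
    return counts

counts = learn("Free software gets better as more people use it.", load_counts())
with open(CACHE_FILE, "w") as f:
    json.dump(counts, f)  # persist the state for the next execution
```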

I agree that proprietary software is and should be allowed in a free market. If someone wants to sell something useful that another person finds value in and wants to pay for, I have no problem with that. But free software is a better development model and we should be encouraging / demanding it. I’ll end with a quote from Linus Torvalds:

Science may take a few hundred years to figure out how the world works, but it does actually get there, exactly because people can build on each others’ knowledge, and it evolves over time. In contrast, witchcraft/alchemy may be about smart people, but the knowledge body never “accumulates” anywhere. It might be passed down to an apprentice, but the hiding of information basically means that it can never really become any better than what a single person/company can understand.
And that’s exactly the same issue with open source (free) vs proprietary products. The proprietary people can design something that is smart, but it eventually becomes too complicated for a single entity (even a large company) to really understand and drive, and the company politics and the goals of that company will always limit it.

The world is screwed because while we have things like Wikipedia and Linux, we don’t have places for computer vision and lots of other scientific knowledge to accumulate. To get driverless cars, we don’t need any more hardware, we don’t need any more programmers, we just need 100 scientists to work together in SciPy and GPL ASAP!

Regards,

-Keith

1: Extropian, as in “Transhumanists tend to believe that Kurzweil’s extropian Law of Accelerating Returns will ultimately trump the 2nd Law of Thermodynamics.”

2: Bemes, as in “By uploading her bemes, the transhumanist was able to create a mindfile to serve as a basis for a future cyber-conscious analog of herself.” The singular form, beme, refers to a digitally-inheritable unit of beingness (such as a single element of one’s mannerisms, personality, recollections, feelings, beliefs, attitudes and values) as in “The transhuman survivalist had a very strong beme for paranoia.”

3: Singularity, as in “The Singularity — that era, no more than a few decades hence, when transhumanists believe machine intelligence will merge with and surpass biological intelligence.”

4: Ectogenetic, as in “Many transhumanists look forward to growing replacements for all or part of their body via controlled differentiation of stem cells in an ex vivo ectogenetic process.”

5: Mindclone, as in “Transhumanists are often accepting of the notion that one identity can simultaneously operate across multiple physical and virtual instantiations, via wireless synchronization, with each such instantiation being a mindclone of a biological original mind.”

6: Vitology, as in “Some transhumanists believe biology is simply a subset of vitology, the study of self-replicating Darwinian code subject to mutation and Natural Selection, with the codes expressed in particular molecules for biology and more generally in differing voltage states for vitology.”

7: Beman, as in “A person created with bio-nanotechnology, a cyborg, a virtual person with a human mind, and a person who integrates electronics into their life are four examples of a bio-electronic human, also known as a beman.”

8: Nanobot, as in “Transhumanists have a strong tendency to wish for an acceleration of the date when many problems could be solved with large numbers of microscopic, wirelessly networked, intelligent machines, each of which are called a nanobot.”

9: Techno-progressive, as in “Transhumanists tend to be socially-conscious libertarians, also known as techno-progressive, because they believe technology will solve most of the world’s problems.”

10: Transhuman, as in “People who believe it is good to transcend our human biological inheritance, such as by modifying our DNA, our bodies or the substrate for our minds, and/or by leaving the earth to live in space habitats or on other celestial bodies, are considered transhuman.”

If the WW II generation was The Greatest Generation, the Baby Boomers were The Worst. My former boss Bill Gates is a Baby Boomer. And while he has the potential to do a lot for the world by giving away his money to other people (for them to do something they wouldn’t otherwise do), after studying Wikipedia and Linux, I see that the proprietary development model Gates’s generation adopted has stifled the progress of technology they should have provided to us. The reason we don’t have robot-driven cars and other futuristic stuff is that proprietary software became the dominant model.

I start the AI chapter of my book with the following question: Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which one will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones.

Simply put, there is no computer vision codebase with critical mass.

We can blame the Baby Boomers for making proprietary software the dominant model. We can also blame them for outlawing nuclear power, never drilling in ANWR despite decades of discussion, never fixing Social Security, destroying the K-12 education system, handing us a near-bankrupt welfare state, and many of the other long-term problems that have existed in this country for decades that they did not fix, and the new ones they created.

It is our generation that will invent the future, as we incorporate more free software, more cooperation amongst our scientists, and free markets into society. The boomer generation got the collectivism part, but they failed on free software and freedom from government.

My book describes why free software is critical to faster technological development, and it ends with some pages on why our generation needs to build a space elevator. I believe that in addition to driverless cars and curing cancer, building a space elevator, getting going on nanotechnology, and terraforming Mars are also within reach. Wikipedia surpassed Encyclopedia Britannica in 2.5 years. The problems in our world are not technical, but social. Let’s step up. We can make much of it happen a lot faster than we think.

Did you know that many researchers are looking for light-catching components that can convert more of the sun’s power into carbon-free electric power?

A new study, reported in August of this year in the journal Applied Physics Letters (published by the American Institute of Physics), explains how solar energy could be collected using oxide materials that contain the element selenium. A team at Lawrence Berkeley National Laboratory in Berkeley, California, embedded selenium in zinc oxide, a relatively affordable material that could make more efficient use of the sun’s power.

The team noticed that even a relatively small amount of selenium, just 9 percent of the mostly zinc-oxide base, significantly enhanced the material’s efficiency in absorbing light.

The study’s lead author, Marie Mayer (a fourth-year doctoral student at the University of California, Berkeley), says that photo-electrochemical water splitting, that is, using energy from the sun to cleave water into hydrogen and oxygen gases, could be the most fascinating future application of her work. Managing this reaction is key to the eventual production of zero-emission hydrogen-powered engines, which hypothetically would run on nothing but water and sunlight.
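For reference, the overall reaction she describes uses absorbed light energy to split water into its constituent gases:

2 H2O + light energy → 2 H2 + O2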

Journal reference: Marie A. Mayer et al., Applied Physics Letters, 2010 [link: http://link.aip.org/link/APPLAB/v97/i2/p022104/s1]

The conversion efficiency of a PV cell is the proportion of sunlight energy that the photovoltaic cell converts to electric power. This matters greatly when discussing PV products, because improving this efficiency is vital to making photovoltaic energy competitive with more traditional sources of energy (e.g., fossil fuels).
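As a back-of-the-envelope illustration of the definition (the panel output below is an assumed figure, not a measurement; 1,000 W/m² is the standard test irradiance):

```python
# Conversion efficiency = electrical power out / sunlight power in.
irradiance_w_per_m2 = 1000.0  # standard test irradiance
panel_area_m2 = 1.0           # assumed panel area
power_out_w = 150.0           # assumed electrical output

efficiency = power_out_w / (irradiance_w_per_m2 * panel_area_m2)
print(f"Conversion efficiency: {efficiency:.1%}")  # -> 15.0%
```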

For comparison, the earliest photovoltaic products converted about 1–2% of sunlight energy into electric energy. Today’s photovoltaic devices convert 7–17%. Of course, the other side of the equation is the money it costs to produce the PV devices, and this has improved over the decades as well. In fact, today’s PV systems generate electricity at a fraction of the cost of early PV systems.

In the 1990s, when silicon cells were twice as thick, efficiencies were much lower than they are now and lifetimes were shorter, so it may well have cost more energy to make a cell than the cell could generate in its lifetime. Since then, the technology has progressed significantly, and the energy repayment time (the time a module needs to generate the energy that was spent producing it) of a modern photovoltaic module is generally 1 to 4 years, depending on the module type and location.
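A rough sketch of that definition with assumed numbers (neither figure comes from the article): a module whose manufacture embodies 1,200 kWh and which generates 400 kWh per year pays back its energy in 3 years, inside the 1-to-4-year range cited above.

```python
# Energy repayment time = energy spent producing the module
#                         / energy the module generates per year.
embodied_energy_kwh = 1200.0    # assumed manufacturing energy
annual_generation_kwh = 400.0   # assumed yearly output

payback_years = embodied_energy_kwh / annual_generation_kwh
print(f"Energy repayment time: {payback_years:.1f} years")  # -> 3.0 years
```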

Usually, thin-film technologies, despite having comparatively low conversion efficiencies, achieve significantly shorter energy repayment times than standard systems (often under 1 year). With a normal lifetime of 20 to 30 years, this means that contemporary photovoltaic cells are net energy producers, i.e., they generate significantly more energy over their lifetime than was expended in producing them.

The author, Rosalind Sanders, writes for the solar pool cover ratings blog, her personal hobby weblog focused on tips to help homeowners save energy with solar power.