
I have been asked to mention the following.
The Nature of The Identity — with Reference to Androids

The nature of the identity is intimately related to information and information processing.

The importance and the real nature of information are only now gradually being realised.

But the history of the subject goes back a long way.

In ancient Greece, those who studied Nature – the predecessors of our scientists – considered that what they studied – material reality – Nature – had two aspects – form and substance.

Until recent times all the emphasis was on substance — what substance(s) subjected to sufficient stress would transmute into gold; what substances in combination could be triggered into releasing vast amounts of energy – money and weapons – the usual Homo Sap stuff.

You take a block of marble – that is substance. You have a sculptor create a beautiful statue from it – that is form.

The form consists of the shapes imposed by the sculptor; and the shapes consist of information. Now, if you were an unfeeling materialistic bastard you could describe the shapes in terms of equations. And if you were an utterly depraved unfeeling materialistic bastard you could have a computer compare the sets of equations from many examples to find out what is considered to be beauty.

Dr Foxglove, the Great Maestro of Leipzig, is seated at the concert grand, playing on a Steinway (of course) with great verve (as one would expect). In front of him, under a low light, there is a sheet of paper with black marks – information of some kind – the music for Chopin’s Nocturne Op. 9, No. 2.

Aahh! Wonderful.

Sublime….

But … all is not as it seems….

Herr Doktor Foxglove thinks he is playing music.

A grand illusion my friend! You see, the music – it is, how you say — all in the heads of the listeners.

What the Good Doktor is doing, and doing manfully, is operating a wooden acoustic-wave generator – albeit very skilfully – and not just any old wooden acoustic-wave generator, but a Steinway wooden acoustic-wave generator.

There is no music in the physical world. The acoustic waves are not music. They are just pressure waves in the atmosphere. The pressure waves actuate the eardrum. And that in turn actuates a part of the inner ear called the cochlea. And that in turn causes streams of neural impulses to progress up into the higher brain.

Dr Foxglove hits a key on the piano corresponding to 440 acoustic waves per second; this is replicated in a slightly different form within the inner ear, until it becomes a stream of neural impulses….

But what the listener hears is not 440 waves or 440 neural impulses or 440 anything – what the listener hears is one thing – a single tone.

The tone is an exact derivative of the pattern of neural impulses. There are no tones in physical reality.

Tones exist only in the experience of the listener – only in the experience of the observer.

And thanks to some fancy processing, not only will the listener get the illusion that 440 cycles per second is actually a “tone” – but a further illusion is perpetrated: that the tone is coming from a particular direction, that what one is hearing is Dr. Foxglove at the Steinway, over there, under the lights – that is where the sound is.
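To put a number on that point, here is a minimal sketch (Python with numpy, purely illustrative) of one second of the pressure wave behind the note A4. The array contains nothing but pressure values; the “tone”, and its apparent location under the lights, exist only in the listener.

    import numpy as np

    # One second of the A4 pressure wave: 440 oscillations per second,
    # sampled 44,100 times. Just numbers - no tone, no music, no direction.
    sample_rate = 44_100
    t = np.arange(sample_rate) / sample_rate
    pressure = np.sin(2 * np.pi * 440.0 * t)
    print(pressure[:5])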

But no, my friend….

What the listener is actually listening to is his eardrums. He is listening to a derivative of a derivative … of his eardrums rattling.

His eardrums are rattling because someone is operating an acoustic wave generator in the vicinity.

But what he is hearing is pure information.

And as for the music ….

A single note – a tone – is neither harmonious nor disharmonious in itself. It is only harmonious or disharmonious in relation to another note.

Music is derived from ratios – a still further derivative — and ratios are pure information.

Take for example the ratio of 20 Kg to 10 Kg.

The ratio of 20 Kg to 10 Kg is not 2 Kg.

The ratio of 20 Kg to 10 Kg is 2 – just 2 – pure information.

20 kg/10 kg = 2.

Similarly, we can also show that there is no colour in reality, there are no shapes in reality; depth perception is a derivative – and just as what one is listening to is the rattling of one’s eardrums – so what one is watching is the inside of one’s eyeballs – one is watching the shuddering impact of photons on one’s retina.

The sensations of sound, of light and colour and shapes are all in one’s mind – as decodings of neural messages – which in turn are derivatives of physical processes.

The wonderful aroma coming from the barbecue is all in one’s head.

There are no aromas or tastes in reality – all are conjurations of the mind.

Like the Old Guy said, all is maya, baby….

The only point that is being made here is that Information is too important a subject to be so neglected.

What you are doing here is at the leading edge beyond the leading edge, and in that future Information will be a significant factor.

What we, away back in the dim, distant and bewildered early 21st century, called Information Technology (I.T.) will be seen as Computer Technology (CT), which is all it ever was; but there will be a real IT in the future.

Similarly what has been referred to for too long as Information Science will be seen for what it is — Library Technology.

Now – down to work.

One of the options – the android – is to upload all stored data from a smelly old bio body to a cool Designer Body (DB).

This strategy is based on the unproven but popular belief that one’s identity is contained by one’s memory.

There are two critical points that need to be addressed.

The observer is the cameraman — not the picture. Unless you are looking in a mirror or at a film of yourself, you are the one person who will not appear in your memories.

There will be memories of that favourite holiday place, of your favourite tunes, of the emotions that you felt when … but you will only “appear” in your memories as the point of observation.

You are the cameraman – not the picture.

So, we should view with skepticism ideas that uploading the memory will take the identity with it.

If somebody loses their memory – they do not become someone else – hopping and skipping down the street,

‘Hi – I’m Tad Furlong, I’m new in town….’

If somebody loses their memory – they may well say – ‘I do not know my name….’

That does not mean they have become someone else – what they mean is ‘I cannot remember my name….’

The fact that this perplexes them indicates that it is still the same person – it is someone who has lost their name.

If a person changes their name they do not become someone else; nor do they become someone else if they can’t remember their name – or as it is more commonly, and more dramatically, and more loosely put – “cannot remember who they are”.

So, what is the identity?

There is the observer – whatever that is – and there are observations.

There are different forms of information – visual, auditory, tactile, olfactory … which together form the environment of the observer. By “projection” the environment is observed as being external. The visual image from one eye is compared with that from the other eye to give depth perception. The sound from one ear is compared with that from the other ear to give surround sound. You are touched on the arm and immediately the tactile sensation – which actually occurs in the mind – is mapped as though coming from that exact spot on your arm.
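As one hedged illustration of that comparison between the two ears: the classic far-field approximation relates the interaural time difference to the apparent direction of the source. The ear spacing and example delay below are assumptions chosen only to show the idea.

    import math

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C
    EAR_SPACING = 0.20      # metres; an illustrative assumption

    def azimuth_from_itd(itd_seconds):
        """Estimate source direction (degrees from straight ahead) from the
        interaural time difference, using sin(theta) = c * dt / d."""
        s = max(-1.0, min(1.0, SPEED_OF_SOUND * itd_seconds / EAR_SPACING))
        return math.degrees(math.asin(s))

    # A wavefront arriving 0.3 ms earlier at one ear is "projected" to roughly
    # 31 degrees off centre on that side.
    print(round(azimuth_from_itd(0.0003), 1))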

You live and have your being in a world of sensation.

This is not to say that the external world does not exist – only that our world is the world “inside” – the place where we hear, and see, and feel, and taste….

And all those projections are like “vectors” leading out from a projection spot – a locus of projection – the 0,0 spot – the point which is me seeing and me tasting and me hearing and me scenting, even though through the magic of projection I have the idea that the barbecue smells, that there is music in the piano, that the world is full of colour, and that my feet feel cold.

This locus of projection is the “me” – it is the point of observation, the 0,0 reference point. This, the observer not the observation, is the identity … the me, the 0,0.

And that 0,0 may be a lot easier to shift than a ton and a half of squashed memories. Memories of being sick; of being tired; of the garden; of your dog; of the sound of chalk on the blackboard, of the humourless assistant bank manager; of the 1982 Olympics; of Sadie Trenton; of Fred’s tow bar; and so on and on and on –

So – if memory ain’t the thing — how do we do it … upload the identity?
(To be continued)

Most of the threats to human survival come down to one factor – the vulnerability of the human biological body.

If a tiny fraction of the sums being spent on researching or countering these threats were used to address the question of a non-biological alternative, a good team could research and develop a working prototype in a matter of years.

The fundamental question does not lie in the perhaps inappropriately named “Singularity” (of the AI kind), but rather in the means by which neural impulses are translated into sensory experience – sounds, colours, tastes, odours, tactile sensations.

By what means is the TRANSLATION effected?

It is well known that, leading up to sensory experience such as music, it is not just a matter of neural impulses or even patterns of neural impulses, but patterns of patterns – derivatives of derivatives of derivatives – and yet beyond that, translation has to occur.

Many of the threats to human existence, including over-population and all that it brings, can be handled by addressing this basic problem, instead of addressing each threat separately.

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems possessing the capacity to engage with theoretical and real-world problems with a flexibility similar to that of an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily used on today’s real-world problems: mature machine learning algorithms can profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies, for example [1] [2], and genetic algorithms see similar use. With the next technology for organizing knowledge on the net – the semantic web, which deals with machine-interpretable understanding of words in the context of natural language – we may be starting to invent early pieces of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and promise to describe and ‘understand’ real-world concepts and to enable our computers to build interfaces to real-world concepts and coherences more autonomously. Actually getting from expert systems to AGI will require approaches to bootstrap self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past, we have faced new kinds of security challenges: DoS attacks, email and PDF worms and a plethora of other malware, which sometimes even made it into military and other sensitive networks, and stole credit cards and private data en masse. These were and are among the first serious security incidents related to the Internet. But still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). To understand the implications of strong AI, we first have to realize that, if AGI takes off hard enough, there will probably no longer be any human-predictable hardware, software or interfaces around for long.

To grasp the new security implications, it’s important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is itself riddled with biases rooted in its biological evolution. For example, the application of the simplest mathematical equations can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on which cells, generated by the same rule, were observed in the previous step. Many of these rules can be written down in just a few characters, yet generate astounding complexity.

Cellular automaton, produced by a simple recursive formula
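As a minimal sketch of such an automaton (Rule 30 and the grid size are chosen purely for illustration), the whole rule fits in a single byte, yet the printed pattern is famously hard to predict:

    # Elementary cellular automaton: each new cell depends only on the three
    # cells above it. The rule number encodes all eight possible outcomes.
    RULE = 30
    WIDTH, STEPS = 64, 32

    row = [0] * WIDTH
    row[WIDTH // 2] = 1  # start from a single live cell in the middle

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        row = [
            (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]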

The Fibonacci sequence is another popular example of unexpected complexity. Based on a very short recursive equation, the sequence generates a pattern of incremental growth which can be visualized as a complex spiral pattern, resembling a snail shell’s design and many other patterns in nature. A combination of Fibonacci spirals, for example, can resemble the motif of the head of a sunflower. A thorough understanding of this ‘simple’ Fibonacci sequence is also sufficient to model some fundamental but important dynamics of systems as complex as the stock market and the global economy.
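A minimal sketch of that recurrence, F(n) = F(n-1) + F(n-2), is shown below; the successive ratios converge on the golden ratio (~1.618), the constant behind the spiral arrangements mentioned above.

    def fibonacci(n):
        """Return the first n Fibonacci numbers, starting 1, 1."""
        seq = [1, 1]
        while len(seq) < n:
            seq.append(seq[-1] + seq[-2])
        return seq

    fib = fibonacci(12)
    print(fib)                # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
    print(fib[-1] / fib[-2])  # ~1.618, approaching the golden ratio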

Sunflower head showing a Fibonacci sequence pattern

Traditional software is many orders of magnitude higher in complexity than basic mathematical formulae, and thus many orders of magnitude less predictable. Artificial general intelligence can be expected to work with even more complex rules than low-level computer programs – rules of a complexity comparable to natural human language – which would place it yet several orders of magnitude higher in complexity than traditional software. The security implications have not yet been researched systematically, but they are likely to be at least as hard as this comparison suggests.

Practical security is not about achieving perfection, but about mitigating risks to a minimum. A current consensus among strong AI researchers is that we can only improve the chances for an AI to be friendly – i.e. an AI acting in a secure manner and having a positive rather than a negative long-term effect on humanity [5] – and that this must be a crucial design aspect from the beginning. Research into Friendly AI started out with a serious consideration of Asimov’s Laws of Robotics [6] and is based on the application of probabilistic models, cognitive science and social philosophy to AI research.

Many researchers who believe in the viability of AGI take it a step further and predict a technological singularity. Just like the assumed physical singularity that started our universe (the Big Bang), a technological singularity is expected to increase the rate of technological progress much more rapidly than what we are used to from the history of humanity, i.e. beyond the current ‘laws’ of progress. Another important notion associated with the singularity is that we cannot predict even the most fundamental changes occurring after it, because things would, by definition, progress faster than we are currently able to predict. Therefore, in a similar way in which we believe the creation of the universe depended on its initial condition (in the big bang case, the few physical constants from which the others can be derived), many researchers in this field believe that AI security strongly depends on the initial conditions as well, i.e. the design of the bootstrapping software. If we succeed in manufacturing a general-purpose decision-making mind, then its whole point would be self-modification and self-improvement. Hence, our direct control over it would be limited to its first iteration and the initial conditions of a strong AI, which could be influenced mostly by getting the initial iteration of its hard- and software design right.

Our approach to optimizing those initial conditions must consist of working as carefully as possible. Space technology is a useful example here, and points in the general direction such development should take. In rocket science and space technology, all measurements and mathematical equations must be as precise as our current technological standards allow. Also, multiple redundancies must be present for every system, since every single aspect of a system can be expected to fail. Despite this, many rocket launches still fail today, although we are steadily improving on error rates.

Additionally, humans interacting with an AGI may be a major security risk themselves, as they may be convinced by an AGI to remove its limitations. Since an AGI can be expected to be very convincing if we expect it to exceed human intellect, we should not only focus on physical limitations, but also on making the AGI ‘friendly’. But even in designing this ‘friendliness’, the way our mind works leaves us largely unprepared to deal with the consequences of the complexity of an AGI, because the way we perceive and deal with potential issues and risks stems from evolution. As a product of natural evolution, our behaviour helps us deal with animal predators, interact in human societies and care about our children, but not anticipate the complexity of man-made machines. The systematic distortions in human perception and cognition that result from this evolutionary heritage are called cognitive biases.

Sadly, as helpful as they may be in natural (i.e., non-technological) environments, these are the very same behaviours that are often counter-productive when dealing with the unforeseeable complexity of our own technology and modern civilization. If you don’t yet see the primary importance of cognitive biases to the security of future AI, you’re probably in good company. But there are good reasons why this is a crucial issue that researchers, developers and users of future generations of general-purpose AI need to take into account. One of the major reasons for founding the earlier-mentioned Singularity Institute for AI [3] was to get the basics right, including grasping the cognitive biases that necessarily influence the technological design of AGI.

What do these considerations practically imply for the design of strong AI? Some of the traditional IT security issues that need to be addressed in computer programs are: input validation, access limitations, avoiding buffer overflows, safe conversion of data types, setting resource limits, and secure error handling (a small illustrative sketch of two of these items follows the list of biases below). All of these are valid and important issues that must be addressed in any piece of software, including weak and strong AI. However, we must avoid underestimating the design goals for a strong AI, mitigating the risk on all levels from the beginning. To do this, we must care about more than the traditional IT security issues. An AGI will interface with the human mind, through text and through direct communication and interaction. Thus, we must also estimate the errors that we may not see, and do our best to be aware of flaws in human logic and cognitive biases, which may include:

  • Loss aversion: “the dis-utility of giving up an object is greater than the utility associated with acquiring it”.
  • Positive outcome bias: a tendency in prediction to overestimate the probability of good things happening to oneself.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same.
  • Irrational escalation: the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
  • Omission bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).
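
To make the traditional end of that spectrum concrete, here is a minimal, hedged sketch of input validation and safe conversion of data types; the function name and limits are invented for the example.

    # Illustrative only: validate untrusted input and convert it safely to a
    # bounded integer (input validation + safe conversion of data types).
    def parse_resource_limit(raw, minimum=1, maximum=10_000):
        """Turn untrusted text into an integer within [minimum, maximum], or raise ValueError."""
        try:
            value = int(raw.strip())
        except ValueError as exc:
            raise ValueError(f"not an integer: {raw!r}") from exc
        if not minimum <= value <= maximum:
            raise ValueError(f"out of allowed range [{minimum}, {maximum}]: {value}")
        return value

    print(parse_resource_limit("250"))      # OK: 250
    # parse_resource_limit("9999999")       # rejected: out of allowed range
    # parse_resource_limit("DROP TABLE")    # rejected: not an integer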

The cognitive biases above are a modest selection from Wikipedia’s list [7], which contains over a hundred more. Struggling with some of the known cognitive biases, and with the social components involved, in complex technological situations may be quite familiar to many of us, from managing modern business processes to investing in the stock market. In fact, we should apply any general lessons learned from dealing with current technological complexity to AGI. For example, some of the most successful long-term investment strategies in the stock market are boring and strict, but based mostly on safety, such as Buffett’s margin-of-safety concept. With all factors gained from social and technological experience taken into account in an AGI design that strives to optimize both cognitive and IT security, its designers can still not afford to forget that perfect and complete security remains an illusion.

References

[1] Chen, M., Chiu, A. & Chang, H., 2005. Mining changes in customer behavior in retail marketing. Expert Systems with Applications, 28(4), 773–781.
[2] Oliver, J., 1997. A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9115 [Accessed Feb 25, 2011].
[3] The Singularity Institute for Artificial intelligence: http://singinst.org/
[4] For the Lifeboat Foundation’s dedicated program, see: https://lifeboat.com/ex/ai.shield
[5] Yudkowsky, E., 2006. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Global Catastrophic Risks, Oxford University Press, 2007.
[6] See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics and http://en.wikipedia.org/wiki/Friendly_AI, Accessed Feb 25, 2011
[7] For a list of cognitive biases, see http://en.wikipedia.org/wiki/Cognitive_biases, Accessed Feb 25, 2011

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume will be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

  • Extended abstracts (500–1,000 words): 15 January 2011
  • Full essays: (around 7,000 words): 30 September 2011
  • Notifications: 30 February 2012 (tentative)
  • Proofs: 30 April 2012 (tentative)

We aim to get this volume published by the end of 2012.


Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language free of speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).


Thank you for reading this call. Please forward it to individuals who may wish to contribute.

Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University

Kevin Kelly concluded a chapter in his new book What Technology Wants with the declaration that if you hate technology, you basically hate yourself.

The rationale is twofold:

1. As many have observed before, technology – and Kelly’s superset, the “technium” – is in many ways the natural successor to biological evolution. In other words, human change now happens primarily through the various symbiotic, feedback-looped systems that comprise human culture.

2. It all started with biology, but humans throughout their entire history have defined and been defined by their tools and information technologies. I wrote an essay a few months ago called “What Bruce Campbell Taught Me About Robotics” concerning human co-evolution with tools and the mind’s plastic self-models. And of course there’s the whole co-evolution with or transition to language-based societies.

So if the premise that human culture is a result of taking the path of technologies is true, then to reject technology as a whole would be to reject human culture as it has always been. If the premise that our biological framework is a result of a back-and-forth relationship with tools and/or information is also true, then you have another reason to say that hating technology is hating yourself (assuming you are human).

In his book, Kelly argues against the noble savage concept. Even though there are many useless implementations of technology, the tech that is good is extremely good, and all humans adopt it when they can. Some examples Kelly provides are telephones, antibiotics and other medicines, and… chainsaws. Low-tech villagers continue to swarm to the slums of higher-tech cities, not because they are forced, but because they want their children to have better opportunities.

So is it a straw man that actually hates technology? Certainly people hate certain implementations of technology. Certainly it is OK, and perhaps needed more than ever, to reject useless technology artifacts. I think one place where you can definitely find some technology haters is among those afraid of obviously transformative technologies, in other words the ones that purposely and radically alter humans. And they are only “transformative” in an anachronistic sense – e.g., if you compare two different time periods in history, you can see drastic differences.

Also, although it is perhaps not outright hate in most cases, there are many who have been infected by the meme that artificial creatures such as robots and/or super-smart computers (and/or super-smart networks of computers) pose competition to humans as they exist now. This meme is perhaps more dangerous than any computer could be, because it tries to divorce humans from the technium.

Image credit: whokilledbambi

Dear Ray;

I’ve written a book about the future of software. While writing it, I came to the conclusion that your dates are way off. I talk mostly about free software and Linux, but it has implications for things like how we can have driverless cars and other amazing things faster. I believe that we could have had all the benefits of the singularity years ago if we had done things like starting Wikipedia in 1991 instead of 2001. There is no technology in 2001 that we didn’t have in 1991; it was simply a matter of starting an effort that allowed people to work together.

Proprietary software and a lack of cooperation among our software scientists have been terrible for the computer industry and the world, and its greater use has implications for every aspect of science. Free software is better for the free market than proprietary software, and there are many opportunities for programmers to make money using and writing free software. I often use the analogy that law libraries are filled with millions of freely available documents, and no one claims this has decreased the motivation to become a lawyer. In fact, lawyers would say that it would be impossible to do their job without all of these resources.

My book is a full description of the issues but I’ve also written some posts on this blog, and this is probably the one most relevant for you to read: https://lifeboat.com/blog/2010/06/h-conference-and-faster-singularity

Once you understand this, you can apply your fame towards getting more people to use free software and Python. The reason so many know Linus Torvalds’s name is that he released his code under the GPL, a license whose viral nature encourages people to work together. Proprietary software makes as much sense as a proprietary Wikipedia.

I would be happy to discuss any of this further.

Regards,

-Keith
—————–
Response from Ray Kurzweil 11/3/2010:

I agree with you that open source software is a vital part of our world, allowing everyone to contribute. Ultimately software will provide everything we need when we can turn software entities into physical products with desktop nanofactories (there is already a vibrant 3D printer industry, and the scale of key features is shrinking by a factor of a hundred in 3D volume each decade). It will also provide the keys to health and greatly extended longevity as we reprogram the outdated software of life. I believe we will achieve the original goals of communism (“from each according to their ability, to each according to their need”), which forced collectivism failed so miserably to achieve. We will do this through a combination of the open source movement and the law of accelerating returns (which states that the price-performance and capacity of all information technologies grow exponentially over time). But proprietary software has an important role to play as well. Why do you think it persists? If open source forms of information met all of our needs, why would people still purchase proprietary forms of information? There is open source music, but people still download music from iTunes, and so on. Ultimately the economy will be dominated by forms of information that have value, and these two sources of information – open source and proprietary – will coexist.
———
Response back from Keith:
Free versus proprietary isn’t a question of whether only certain things have value. A Linux DVD has 10 billion dollars’ worth of software on it. Proprietary software exists for a similar reason that ignorance and starvation exist: a lack of better systems. The best thing my former employer Microsoft has going for it is ignorance about the benefits of free software. Free software gets better only as more people use it. Proprietary software is an inferior development model and anathema to science, because it hinders people’s ability to work together. It has infected many corporations, and I’ve found that PhDs who work for public institutions often write proprietary software.

Here is a paragraph from my writings I will copy here:

I start the AI chapter of my book with the following question: imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which one will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

We’ve known approximately what a neural network should look like for many decades. We need “places” for people to work together to hash out the details. A free software repository provides such a place. We need free software, and for people to work in “official” free software repositories.

“Open source forms of information”, I have found, is a separate topic from the software issue. Software always reads, modifies, and writes data – state which lives beyond the execution of the software – and there can be an interesting discussion about the licenses of the data. But movies and music aren’t science, and so it doesn’t matter for most of them. Someone can only sell or give away a song after the software is written and on their computer in the first place. Some of this content can be free and some can be protected, and this is an interesting question, but mostly it is a separate topic. The important thing to share is scientific knowledge and software.

It is true that software always needs data to be useful: configuration parameters, test files, documentation, etc. A computer vision engine will have lots of data, even though most of it is used only for testing purposes and little is used at runtime. (Perhaps it has learned the letters of the alphabet – state which it caches between executions.) Software begets data, and data begets software; people write code to analyze the Wikipedia corpus. But you can’t truly have a discussion of sharing information unless you’ve got a shared codebase in the first place.

I agree that proprietary software is and should be allowed in a free market. If someone wants to sell something useful that another person finds value in and wants to pay for, I have no problem with that. But free software is a better development model and we should be encouraging / demanding it. I’ll end with a quote from Linus Torvalds:

Science may take a few hundred years to figure out how the world works, but it does actually get there, exactly because people can build on each others’ knowledge, and it evolves over time. In contrast, witchcraft/alchemy may be about smart people, but the knowledge body never “accumulates” anywhere. It might be passed down to an apprentice, but the hiding of information basically means that it can never really become any better than what a single person/company can understand.
And that’s exactly the same issue with open source (free) vs proprietary products. The proprietary people can design something that is smart, but it eventually becomes too complicated for a single entity (even a large company) to really understand and drive, and the company politics and the goals of that company will always limit it.

The world is screwed because while we have things like Wikipedia and Linux, we don’t have places for computer vision and lots of other scientific knowledge to accumulate. To get driverless cars, we don’t need any more hardware, we don’t need any more programmers, we just need 100 scientists to work together in SciPy and GPL ASAP!

Regards,

-Keith

If the WW II generation was The Greatest Generation, the Baby Boomers were The Worst. My former boss Bill Gates is a Baby Boomer. And while he has the potential to do a lot for the world by giving away his money to other people (for them to do something they wouldn’t otherwise do), after studying Wikipedia and Linux, I see that the proprietary development model Gates’s generation adopted has stifled the progress of technology they should have provided to us. The reason we don’t have robot-driven cars and other futuristic stuff is that proprietary software became the dominant model.

I start the AI chapter of my book with the following question: imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which one will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones.

Simply put, there is no computer vision codebase with critical mass.

We can blame the Baby Boomers for making proprietary software the dominant model. We can also blame them for outlawing nuclear power, never drilling in ANWR despite decades of discussion, never fixing Social Security, destroying the K-12 education system, handing us a near-bankrupt welfare state, and many of the other long-term problems that have existed in this country for decades that they did not fix, and the new ones they created.

It is our generation that will invent the future, as we incorporate more free software, more cooperation amongst our scientists, and free markets into society. The boomer generation got the collectivism part, but they failed on the free software and the freedom from government.

My book describes why free software is critical to faster technological development, and it ends with some pages on why our generation needs to build a space elevator. I believe that in addition to driverless cars and curing cancer, building a space elevator, getting going on nanotechnology, and terraforming Mars are also within reach. Wikipedia surpassed Encyclopaedia Britannica in 2.5 years. The problems in our world are not technical, but social. Let’s step up. We can make much of it happen a lot faster than we think.

Within the next few years, robots will move from the battlefield and the factory into our streets, offices, and homes. What impact will this transformative technology have on personal privacy? I begin to answer this question in a chapter on robots and privacy in the forthcoming book, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge: MIT Press).

I argue that robots will implicate privacy in at least three ways. First, they will vastly increase our capacity for surveillance. Robots can go places humans cannot go, see things humans cannot see. Recent developments include everything from remote-controlled insects to robots that can soften their bodies to squeeze through small enclosures.

Second, robots may introduce new points of access to historically private spaces such as the home. At least one study has shown that several of today’s commercially available robots can be remotely hacked, granting the attacker access to video and audio of the home. With sufficient process, governments will also be able to access robots connected to the Internet.

There are clearly ways to mitigate these implications. Strict policies could rein in police use of robots for surveillance, for instance; consumer protection laws could require adequate security. But there is a third way robots implicate privacy, related to their social meaning, that is not as readily addressed.

Study after study has shown that we are hardwired to react to anthropomorphic technology such as robots as though a person were actually present. Reports have emerged of soldiers risking their lives on the battlefield to save a robot under enemy fire. No less than people, therefore, the presence of a robot can interrupt solitude—a key value privacy protects. Moreover, the way we interact with these machines will matter as never before. No one much cares about the uses to which we put our car or washing machine. But the record of our interactions with a social machine might contain information that would make a psychotherapist jealous.

My chapter discusses each of these dimensions—surveillance, access, and social meaning—in detail. Yet it only begins a conversation. Robots hold enormous promise and we should encourage their development and adoption. Privacy must be on our minds as we do.

This year, the Singularity Summit 2010 (SS10) will be held at the Hyatt Regency Hotel in San Francisco, California, in a 1100-seat ballroom on August 14–15.

Our speakers will include Ray Kurzweil, author of The Singularity is Near; James Randi, magician-skeptic and founder of the James Randi Educational Foundation; Terry Sejnowski, computational neuroscientist; Irene Pepperberg, pioneering researcher in animal intelligence; David Hanson, creator of the world’s most realistic human-like robots; and many more. In all, the conference will include over twenty speakers, including many scientists presenting on their latest cutting-edge research in topics like intelligence enhancement and regenerative medicine.

A variety of discounts are available for those wanting to attend the conference for less. If you register by midnight PST on Thursday, July 1st, you can do so for $485, which is $200 less than the cost of a ticket at the door ($685). Registration before August 1st is $585, and from August 1st until the conference the price is $685. The sooner you register, the more you save.

Additional discounts are available for students, $1,000+ SIAI donors, and attendees who refer others who pay full price (no student referrals). Students receive $100 off whatever the current price is, and attendees gain a $100 discount per non-student referral. These discounts are stackable, so a student who refers four non-students who pay full price before the end of June can attend for free. You can ask us more about discounts at [email protected]. Your Singularity Summit ticket is a tax-deductible donation to SIAI, almost all of which goes to support our ongoing research and academic work.

If you’ve been to a Singularity Summit before, you’ll know that the attendees are among the smartest and most ambitious people you’ll ever meet. Scientists, engineers, writers, reporters, philosophers, tech policy specialists, and entrepreneurs all join to discuss the most important questions of our time.

The full list of speakers is here: http://www.singularitysummit.com/program
The logistics page is here: http://www.singularitysummit.com/logistics

We hope to see you in San Francisco this August for an exciting conference!

At lunchtime I am present virtually in the hall of the summit, as a face on a Skype account — I did not get a visa and am staying in Moscow. But ironically my situation resembles what I am speaking about: the risk of a remote AI created by aliens millions of light years from Earth and sent via radio signals. The main difference is that they communicate one way, while I have duplex mode.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit