
Thomas Insel of the National Institute of Mental Health has recently focused his life’s work on oxytocin, as I learned from a report in the Wall Street Journal (Oct. 5) featuring this “bonding hormone” shared by all mammals, including humans. (See http://www.dnalc.org/view/2377-Oxytocin-Emotion-and-Autism.html )

Humans are the laughter-bonding mammals. At some point a non-smile-blind toddler is seduced by Mom’s laughter into a bonding bout, much as a puppy can in principle (no one has checked) be seduced into a bonding bout by an adult dog’s happy tail-wagging. This strange convergence of two moods (bonding and joyfulness) onto the same innate releaser has thus occurred independently in two different mammalian species, wolf and human. But the toddler, unlike the puppy, is mirror-competent. Hence he is able, in addition, to concoct the hypothesis that Mom is being rewarded deep inside, over there, by his own momentary activity here that is making her laugh: a strange suspicion which overwhelms his own heart. He invents benevolence as existing over there, out of nothing, through perceiving it in the joy given to him. And then he tries to do the same thing reciprocally, in anticipation of her appreciation. The suddenly appreciative former animal is no longer an animal – he suddenly knows heaven.

The invention of appreciation turns the toddler into a person. Much of this is detailed in Bill Seaman’s and my new book, “Neosentience – The Benevolence Engine” (Intellect/University of Chicago Press, 2011). Why am I mentioning it here? Because benevolence is the human stamp. No other animal is benevolent so far – knowing about responsibility and the Now and truthfulness. But we humans can induce animals more intelligent than we are, hardware-wise, to become our elder brothers. Leo Szilard — bomb inventor, bomb proposer and (in vain) bomb retractor — caught a first glimpse of this desperate hope in 1948, as detailed in my paper on the gothic-R theorem of general relativity.

Can I seduce everyone who reads this into being moved to “call another soul his own,” as the poet Schiller and the composer Beethoven put it in their scientifically correct Ode to Joy?

Science is the greatest fun in this most human activity of mutual support and appreciation. Let us not kill it by allowing it to be misused in an attempt to shrink the planet to 2 cm in a matter of years. Neither the toddlers nor the mothers will understand this.

The Nature of Identity Part 3
(Drawings not reproduced here — contact the author for copies)
We have seen how the identity is defined by the 0,0 point – the centroid or locus of perception.

The main problem we face is finding out how neural signals translate into sensory signals – how neural information is translated into the language we understand, that of perception. How does one neural pattern become Red and another the Scent of coffee? Neurons do not emit any color, nor any scent.

As in physics, so in cognitive science, some long cherished theories and explanations are having to change.

Perception, and the concept of an Observer (the 0,0 point), are intimately related to the idea of Identity.

Many years ago I was a member of what was called the Artorga Research Group – a group including some of the early cyberneticists – who were focussed on Artificial Organisms.

One of the main areas of concern was, of course, Memory.

One of our group was a young German engineer who suggested that perhaps memories were in fact re-synthesised in accordance with remembered rules, as opposed to storing huge amounts of data.

Since then similar ideas have arisen in such areas as computer graphics.

Here is an example.

It shows a simple picture on a computer screen. We want to store (memorize) this information.

One way is to store the information about each pixel on the screen – is it white or is it black? With a typical screen resolution that could mean over 2.5 million bits of information.

But there is another way….

In this process one simply specifies the start point (A) in terms of its co-ordinates (300 Vertically, 100 Horizontally); and its end point (B) (600 Vertically, 800 Horizontally); and simply instructs – “Draw a line of thickness w between them”.

The whole picture is specified in just a few dozen bits.

The first method, specifying the picture bit by bit, is the basis of bitmap formats (such as .BMP) and uses up a great deal of memory space.

The other method, based on re-synthesising the picture according to stored instructions, is used in vector graphics and other data-reduction formats; and it is, essentially, just what that young engineer suggested many years before.
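The saving can be made concrete with a quick back-of-the-envelope sketch in Python. The screen resolution and the 16-bits-per-number encoding are illustrative assumptions, not figures from the text:

```python
# Bitmap storage: one bit per pixel for a monochrome screen.
width, height = 1920, 1080          # a typical full-HD screen (illustrative)
bitmap_bits = width * height        # 2,073,600 bits -- over 2 million

# Vector storage: re-synthesise the picture from an instruction.
# "Draw a line of thickness w from A to B" needs only two coordinate
# pairs plus a thickness -- say 16 bits per number (an assumption).
vector_bits = 5 * 16                # A=(300,100), B=(600,800), w -> 80 bits

print(bitmap_bits, vector_bits)
print(bitmap_bits // vector_bits)   # the instruction is ~26,000x smaller
```

The exact numbers depend on the chosen resolution and encoding, but the ratio between the two methods stays enormous.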

On your computer you will have a screen saver – almost certainly a colorful scene – and of course that is stored, so that if you are away from the computer for a time it can come on automatically to replace what was showing, and in this way “save” your screen.

So – where are those colors in your screensaver stored, where are the shapes shown in it stored? Is there in the computer a Color Storage Place? Is there a Shape Storage Place?

Of course not.

Yet these are the sort of old, sodden concepts that are sometimes still applied in thinking about the brain and memories.

Patterned streams of binary bits, not unlike neural signals (but about 70 times larger in amplitude), are fed to a computer screen. The screen then takes these patterns of bits as instructions to re-synthesise glowing colors and shapes.

We cannot actually perceive the binary signals, and so they are translated by the screen into a language that we can understand. The screen is a translator – that is its sole function.

This is exactly analogous to the point made earlier about perception and neural signals.

The main point here, though, is that what is stored in the computer memory are not colors and shapes but instructions.

And inherent in these instructions as a whole, there must exist a “map”.

Each instruction must not only tell its bit of the screen what color to glow – but it must also specify the co-ordinates of that bit. If the picture is the head of a black panther with green eyes, we don’t want to see a green head and black eyes. The map has to be right. It is important.

Looking at it in another way the map can be seen as a connectivity table – specifying what goes where. Just two different ways of describing the same thing.

As well as simple perception there are derivatives of what has been perceived that have to be taken into account, for example, the factor called movement.

Movement is not in itself perceptible (as we shall presently show); it is a computation.

Take for example, the following two pictures shown side-by-side.

I would like to suggest that one of these balls is moving. And to ask — which one is moving?

If movement had a visual attribute then one could see which one it was – but movement has no visual attributes – it is a computation.

To determine the speed of something, one has to observe its current position; compare that with the record (memory) of its previous position; check the clock to determine the interval between the two observations; and then divide the distance between the two positions, s, by the elapsed time, t, to determine the speed, v:

v = s/t.
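The computation can be sketched directly; the positions and times below are invented for illustration:

```python
def speed(pos_then, t_then, pos_now, t_now):
    """Speed as a pure computation over two observations:
    distance covered divided by elapsed time (v = s / t)."""
    s = pos_now - pos_then          # distance between the two positions
    t = t_now - t_then              # interval between the two observations
    return s / t

# Two observations of a moving ball (positions in metres, times in seconds).
v = speed(pos_then=2.0, t_then=0.0, pos_now=6.0, t_now=2.0)
print(v)  # 2.0 metres per second
```

Nothing in either observation is "movement"; the movement exists only in the comparison.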

This process is carried out automatically (subconsciously) in more elaborate organisms, by having two eyes spaced apart by a known distance and light receptors – the retina – where each receptor has a fast turn-on and a slow (about 40 ms) turn-off, all followed by a bit of straightforward neural circuitry.

Because of this system, one can look at a TV screen and see someone at a position A, near the left-hand edge, and then, in very rapid succession, a series of other still pictures in which the person is seen closer and closer to B, at the right-hand edge.

If the stills are shown fast enough – more than about 25 per second – then we see the person walking across the screen from left to right. What you see is movement – yet you don’t actually see anything extra on the screen. Being aware of movement as an aid to survival is very old in evolutionary terms. Even the coelacanth, that incredibly ancient fish, has two eyes.

The information provided is a derivative of the information provided by the receptors.

And now we ought to look at information in a more mathematical way – as in the concept of Information Space (I-space).

For those who are familiar with the term, it is akin to a Hilbert space.

Information Space is not “real” space – it is not distance space – it is not measurable in metres and centimetres.

As an example, consider Temperature Space. Take the temperature of the air going in to an air-conditioning (a/c) system; the temperature of the air coming out of the a/c system; and the temperature of the room. These three provide the three dimensions of a Temperature Space. Every point in that space correlates to an outside air temperature, an a/c output temperature and the temperature of the room. No distances are involved – just temperatures.

This is an illustration of what it would look like if we re-mapped it into a drawing.

The drawing shows the concept of a 3-dimensional Temperature Space (T-space). The darkly outlined loop is shown here as a way of indicating the “mapping” of a part of T-space.

But what we are interested in here is I-space. And I-space will have many more dimensions than T-space.

In I-space each location is a different item of information, and the fundamental rule of I-space – of any such similarity space – is:

Similarity equals Proximity.

This would mean that the region concerned with Taste, for example, would be close to the area concerned with Smell, since the two are closely related.

Pale Red would be closer to Medium Red than to Dark Red.
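The similarity-equals-proximity rule can be sketched with a toy I-space. Representing each shade as an RGB triple is an illustrative assumption, not part of the original argument:

```python
import math

# Points in a toy information space: each colour is a coordinate triple
# (RGB values here -- an invented encoding for illustration).
colours = {
    "pale red":   (255, 160, 160),
    "medium red": (220,  60,  60),
    "dark red":   (120,  10,  10),
}

def distance(a, b):
    """Euclidean distance in I-space: smaller distance = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

d_pale_medium = distance(colours["pale red"], colours["medium red"])
d_pale_dark   = distance(colours["pale red"], colours["dark red"])

# "Similarity equals Proximity": pale red sits nearer to medium red
# than to dark red.
print(d_pale_medium < d_pale_dark)  # True
```

The coordinates are not distances in metres; they are items of information, and proximity in the space encodes similarity of content.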

Perception then would be a matter of connectivity.

An interconnected group we could refer to as a Composition or Feature.

Connect 4 legs & fur & tail & bark & the word dog & the sound of the word dog – and we have a familiar feature.

Features are patterns of interconnections; and it is these features that determine what a thing or person is seen as. What they are seen as is taken as their identity. It is the identity as seen from outside.

To oneself one is here and now, a 0,0 reference point. To someone else one is not the 0,0 point – one is there — not here, and to that person it is they who are the 0,0 point.

This 0,0 or reference point is crucially important. One could upload a huge mass of data, but if there was no 0,0 point that is all it would be – a huge mass of data.

The way forward towards this evolutionary goal is not to concentrate on being able to upload more and more data, faster and faster – but instead to concentrate on being able to identify the 0,0 point, and to be able to translate from neural code into the language of perception.

The vulnerability of the bio body is the source of most threats to its existence.

We have looked at the question of uploading the identity by uploading the memory contents, on the assumption that the identity is contained in the memories. I believe this assumption has been shown to be almost certainly wrong.

What we are concentrating on is the identity as the viewer of its perceptions, the centroid or locus of perception.

It is the fixed reference point. And the locus of perception is always Here, and it is always Now. This is abbreviated here to 0,0.

What more logical place to look for the identity than where it considers itself to be: Here and Now – its residence in space-time?

It would surely be illogical to search for the identity somewhere it considers to be Somewhere Else or in Another Time.

We considered the fact that the human being accesses the outside world through its senses, and that its information processing system is able to present that information as being “external.” A hand is pricked with a pin. The sensory information – a stream of neural impulses, all essentially identical – progresses to the upper brain, where the pattern is read and the sensation of pain is felt. That sensation, however, is projected, or mapped, onto the exact point it originated from.

One feels the pain at the place the neural disturbance came from. It is an illusion — a very useful illusion.

In the long slow progress of evolution from a single cell to the human organism, and to the logical next step — the “android” (we must find a better word) – this mapping function must be one of the most vital survival strategies. If the predator is gnawing at your tail, it’s smart to know where the pain is coming from.

It wasn’t just structure that evolved, but “smarts” too… smarter systems.

Each sensory channel conveys not just sensory information but information regarding where it came from. Like a set of outgoing information vectors. But there is also a complementary set of incoming vectors. The array of sensory vectors from visual, audible, tactile, and so on, all converge on one location – a locus of perception. And the channels cross-correlate. The hand is pricked – we immediately look at the place the pain came from. And… one can “follow one’s nose” to see where the barbecue is.

Dr Shu can use his left hand and arm and his right hand and arm in coordination to lift up the $22M Ming vase he is in the process of stealing.

Left/right coordination — so obvious and simple it gets overlooked.

A condition known as synesthesia [http://hplusmagazine.com/editors-blog/sight-synesthesia-what…be-rewired ] provides an example of how two channels can get confused — for example, seeing sounds or hearing movement.

Perhaps the most interesting example is the rubber hand experiment from UC Riverside. In this the subject places their hands palm down on a table. The left arm and hand are screened off, and a substitute left “arm” and rubber hand are installed. After a while, the subject reacts as though the substitute were their real hand.

It is on YouTube at https://www.youtube.com/watch?v=93yNVZigTsk.

This phenomenon has been attributed to neuroplasticity.

A simpler explanation would be changed coordinates — something that people who row or who ride bicycles are familiar with, even if they have never analysed it. The vehicle becomes part of oneself. It becomes a part of the system, an extension. What about applying the same sense on a grander scale? Such a simple and common observation may have just as much relevance to the next step in evolution as the number of teraflops.

So, we can get the sensory vectors to be re-deployed. But one of the fundamental questions would be – can we get the 0,0 locus, the centroid of perception, to shift to another place?

Our environment, the environment we live in, is made of perception. Outside there may be rocks and rivers and rain and wind and thunder… but not in the head. Outside this “theater in the head,” there is a world of photons and particles and energy and radiation — reality — but what we see is what is visible, what we hear is what is audible, what we feel is what is tangible … that is our environment, that is where we live.

However, neurons do not emit any light, neurons do not make any sound, and they are not a source of pressure or temperature – so what the diddly are we watching and listening to?

We live in a world of perception. Thanks to powerful instrumentation and a great deal of scientific research we know that behind this world of perception there are neurons, working away all the time, unknown to us, providing us with colors and tones and scents….

But they do not emit colors or tones or scents – the neuronal language is binary – fired or not fired.

Somewhere the neuronal binary (fired/not fired) language has to be translated into the language of perception – the range of colors, the range of tones, the range of smells… these are each continuous variables, not two-state variables as in the language of neurons.
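One textbook idea of how a two-state code can nonetheless carry a continuous variable is rate coding, where the frequency of firing within a time window encodes an analogue value. This is only a sketch of that one candidate mechanism, with made-up numbers; it does not answer the translation question posed here:

```python
# Rate coding sketch: a binary spike train (fired / not fired per 1 ms bin)
# is decoded into a continuous firing rate. The train below is invented.
def decode_rate(spike_train, window_s):
    """Map a binary spike train to a continuous firing rate in Hz."""
    return sum(spike_train) / window_s

train = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]   # 10 ms of hypothetical 1 ms bins
print(decode_rate(train, 0.010))          # 500.0 Hz -- a continuous quantity
```

Even granting rate coding, a rate is still not a colour or a tone; the step from a continuous neural variable to a perceived quality remains the open problem the text describes.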

There has been a great flurry of research activity in the area of neurons, and what was considered to be “Gospel” 10 years ago, is no longer so.

IBM and ARM in the UK announced (summer 2011) prototype brain-like chips with hyper-connectivity – a step in the right direction, but the fundamental question of interpretation/translation is side-stepped.

I hope someone will prove me wrong, but I am not aware of anyone doing any work on the translator question. This is a grievous error.

(To be continued)

I have been asked to mention the following.
The Nature of The Identity — with Reference to Androids

The nature of the identity is intimately related to information and information processing.

The importance and the real nature of information is only now being gradually realised.

But the history of the subject goes back a long way.

In ancient Greece, those who studied Nature – the predecessors of our scientists – considered that what they studied – material reality – Nature – had two aspects – form and substance.

Until recent times all the emphasis was on substance — what substance(s) subjected to sufficient stress would transmute into gold; what substances in combination could be triggered into releasing vast amounts of energy – money and weapons – the usual Homo Sap stuff.

You take a block of marble – that is substance. You have a sculptor create a beautiful statue from it – that is form.

The form consists of the shapes imposed by the sculptor; and the shapes consist of information. Now, if you were an unfeeling materialistic bastard you could describe the shapes in terms of equations. And if you were an utterly depraved unfeeling materialistic bastard you could have a computer compare the sets of equations from many examples to find out what is considered to be beauty.

Dr Foxglove, the Great Maestro of Leipzig, is seated at the concert grand – playing a Steinway (of course) with great verve (as one would expect). In front of him, under a low light, there is a sheet of paper with black marks – information of some kind – the music for Chopin’s Nocturne Op. 9, No. 2.

Aahh! Wonderful.

Sublime….

But … all is not as it seems….

Herr Doktor Foxglove thinks he is playing music.

A grand illusion my friend! You see, the music – it is, how you say — all in the heads of the listeners.

What the Good Doktor is doing, and doing manfully — is operating a wooden acoustic-wave generator – albeit very skilfully, and not just any old wooden acoustic-wave generator – but a Steinway wooden acoustic-wave generator.

There is no music in the physical world. The acoustic waves are not music. They are just pressure waves in the atmosphere. The pressure waves actuate the eardrum. And that in turn actuates a part of the inner ear called the cochlea. And that in turn causes streams of neural impulses to progress up into the higher brain.

Dr Foxglove hits a key on the piano corresponding to 440 acoustic waves per second; this is replicated in a slightly different form within the inner ear, until it becomes a stream of neural impulses….

But what the listener hears is not 440 waves or 440 neural impulses or 440 anything – what the listener hears is one thing – a single tone.

The tone is an exact derivative of the pattern of neural impulses. There are no tones in physical reality.
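The point is easy to check numerically: a 440 Hz pressure wave really does contain about 440 separate events per second, none of which is itself a “tone.” A minimal sketch, where the sample rate is an arbitrary choice:

```python
import math

SAMPLE_RATE = 8000          # samples per second (arbitrary illustrative choice)
FREQ = 440.0                # concert A: 440 pressure waves per second
DURATION = 1.0              # one second of "sound"

# One second of a 440 Hz pressure wave, as a list of amplitudes.
wave = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
        for n in range(int(SAMPLE_RATE * DURATION))]

# Count zero-crossings from negative to positive: one per cycle.
cycles = sum(1 for a, b in zip(wave, wave[1:]) if a < 0 <= b)
print(cycles)  # ~440 up-crossings in one second -- yet heard as a single tone
```

Four hundred and forty distinct pressure events arrive each second, and the listener experiences exactly one thing: a single, steady tone.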

Tones exist only in the experience of the listener – only in the experience of the observer.

And thanks to some fancy processing not only will the listener get the illusion that 440 cycles per second is actually a “tone” – but a further illusion is perpetrated – that the tone is coming from a particular direction, that what one is hearing is Dr. Foxglove at the Steinway, over there, under the lights – that is where the sound is.

But no, my friend….

What the listener is actually listening to is his eardrums. He is listening to a derivative of a derivative … of his eardrums rattling.

His eardrums are rattling because someone is operating an acoustic wave generator in the vicinity.

But what he is hearing is pure information.

And as for the music ….

A single note – a tone – is neither harmonious nor disharmonious in itself. It is only harmonious or disharmonious in relation to another note.

Music is derived from ratios – a still further derivative — and ratios are pure information.

Take for example the ratio of 20 Kg to 10 Kg.

The ratio of 20 Kg to 10 Kg is not 2 Kg.

The ratio of 20 Kg to 10 Kg is 2 – just 2 – pure information.

20 kg/10 kg = 2.
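A minimal unit-carrying sketch makes the cancellation explicit; the `Mass` class is invented purely for illustration:

```python
# Minimal sketch of a unit-carrying quantity: dividing two masses
# cancels the unit and leaves pure information (a bare number).
class Mass:
    def __init__(self, kg):
        self.kg = kg              # the value always carries its unit

    def __truediv__(self, other):
        # kg / kg -> the units cancel; the result is dimensionless
        return self.kg / other.kg

print(Mass(20) / Mass(10))  # 2.0 -- "just 2", no kilograms attached
```

The inputs carry kilograms; the output carries nothing but a number.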

Similarly, we can show that there is no colour in reality and there are no shapes in reality; depth perception is a derivative. And just as what one is listening to is the rattling of one’s eardrums, so what one is watching is the inside of one’s eyeballs – one is watching the shuddering impact of photons on one’s retina.

The sensations of sound, of light and colour and shapes are all in one’s mind – as decodings of neural messages – which in turn are derivatives of physical processes.

The wonderful aroma coming from the barbecue is all in one’s head.

There are no aromas or tastes in reality – all are conjurations of the mind.

Like the Old Guy said, all is maya, baby….

The only point that is being made here is that Information is too important a subject to be so neglected.

What you are doing here is at the leading edge beyond the leading edge and in that future Information will be a significant factor.

What we, away back in the dim, distant and bewildered early 21st century, called Information Technology (IT) will be seen as Computer Technology (CT) – which is all it ever was – but there will be a real IT in the future.

Similarly what has been referred to for too long as Information Science will be seen for what it is — Library Technology.

Now – down to work.

One of the options – the android – is to upload all stored data from a smelly old bio body to a cool Designer Body (DB).

This strategy is based on the unproven but popular belief that one’s identity is contained in one’s memory.

There are two critical points that need to be addressed.

The observer is the cameraman — not the picture. Unless you are looking in a mirror or at a film of yourself, you are the one person who will not appear in your memory.

There will be memories of that favourite holiday place, of your favourite tunes, of the emotions that you felt when … but you will only “appear” in your memories as the point of observation.

You are the cameraman – not the picture.

So, we should view with skepticism ideas that uploading the memory will take the identity with it.

If somebody loses their memory – they do not become someone else – hopping and skipping down the street,

‘Hi – I’m Tad Furlong, I’m new in town….’

If somebody loses their memory – they may well say – ‘I do not know my name….’

That does not mean they have become someone else – what they mean is ‘I cannot remember my name….’

The fact that this perplexes them indicates that it is still the same person – it is someone who has lost their name.

If a person changes their name they do not become someone else; nor do they become someone else if they can’t remember their name – or as it is more commonly, and more dramatically, and more loosely put – “cannot remember who they are”.

So, what is the identity?

There is the observer – whatever that is – and there are observations.

There are different forms of information – visual, audible, tactile, olfactory … which together form the environment of the observer. By “projection” the environment is observed as being external. The visual image from one eye is compared with that of the other eye to give depth perception. The sound from one ear is compared with that from the other ear to give surround sound. You are touched on the arm and immediately the tactile sensation – which actually occurs in the mind, is mapped as though coming from that exact spot on your arm.
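The two-ear comparison can be illustrated with the standard simplified interaural-time-difference formula, Δt = d·sin(θ)/c; the ear spacing used below is an approximate figure, and the model ignores the head’s shadowing of sound:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
EAR_SPACING = 0.21       # m, approximate distance between human ears

def arrival_delay(angle_deg):
    """Interaural time difference for a distant source at a given azimuth:
    the far ear hears the wavefront slightly later (simplified model)."""
    return EAR_SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# A source 30 degrees to one side arrives about 0.3 ms apart at the two
# ears; direction is computed from exactly this kind of tiny difference.
print(round(arrival_delay(30) * 1000, 3))  # delay in milliseconds
```

The delay itself is imperceptibly small, yet the comparison between the two channels yields the vivid impression that the sound is “over there.”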

You live and have your being in a world of sensation.

This is not to say that the external world does not exist – only that our world is the world “inside” – the place where we hear, and see, and feel, and taste….

And all those projections are like “vectors” leading out from a projection spot – a locus of projection – the 0,0 spot – the point which is me seeing and me tasting and me hearing and me scenting – even though, through the magic of projection, I have the idea that the barbecue smells, that there is music in the piano, that the world is full of color, and that my feet feel cold.

This locus of projection is the “me” – it is the point of observation, the 0,0 reference point. This – the observer, not the observation – is the identity … the me, the 0,0.

And that 0,0 may be a lot easier to shift than a ton and a half of squashed memories. Memories of being sick; of being tired; of the garden; of your dog; of the sound of chalk on the blackboard, of the humourless assistant bank manager; of the 1982 Olympics; of Sadie Trenton; of Fred’s tow bar; and so on and on and on –

So – if memory ain’t the thing — how do we do it … upload the identity?
(To be continued)

carboncopies.org

Concerns arose recently about the concept of so-called “catchment areas”, evolutionary developments that result in a very tight interdependence between requirements for survival and behavioral drives. In particular, the concern has been raised that such catchment might render any significant modification of the human mind, such as through brain enhancement, impossible (Suzanne Gildert, “Pavlov’s AI: What do superintelligences REALLY want?”, Humanity+@Caltech, 2010).

The concept of a catchment area assumes that beneath the veneer of goal-oriented rational planning, learned behavior and skill lies a basic set of drives and reward mechanisms. The only purpose of those drives and reward mechanisms is genetic survival, a necessary result of eons of natural selection. It follows that all of our perceived goals, our desires and interests, the pursuit of wealth, social acceptance or fame, love, scientific understanding, all of it is merely a means to an end. All of it points back to the set of drives and reward mechanisms that best enable us as individuals, us as a tribe and us as a species to survive in our given environment.

Why does that describe a catchment area, a type of prison of behavior? It is assumed that the distribution of behaviors that have enabled long-term survival is a narrow one with little real variance. Stray too far from the norm and your behaviors become counter-productive to survival. Worst of all, if you recognize your enslavement to those single-purpose drives and reward mechanisms, if you realize that they have no meaning beyond a survival that itself links to no universal purpose, then you risk embarking upon a nihilistic course that would likely end in your extermination or self-termination.

How risky is modifying reward mechanisms?

If the catchment problem is real, and if it indeed implies that we live in a precarious balance of behavioral drives that keep us alive, then any modification brings with it the risk that we tip the balance. One significant change, or a series of changes could push us into a condition where our mental reward system is no longer aligned with requirements for survival. One form of this problem has been popularized as “wire-heading” (Larry Niven, Known Space & Ringworld novels, 1970–1996), where an individual exists in a short-circuited reward-loop, living only to repeatedly and directly deliver reward stimulus to herself.

There are of course numerous possible critiques of the catchment hypothesis, which bears a heavy burden of proof. There is plenty of evidence that evolution is not an actual optimizer. If the process of natural selection is not an optimizer, then why should we assume that we exist in a delicately optimized state? We may also consider changes in our mental experience in the recent past. For example, humans generally live longer now than they did previously, so that the extended experience itself is a novel condition for human mental function, and brings with it different survival challenges to which behavior needs to be adapted. And, while we share many behavioral traits as a species, there are clearly differences in behavior between individuals, most of whom appear to function and survive. In fact, some behaviors do not seem at all optimal for survival, such as extreme sports. Those critiques do not mean that the notion of catchment areas is wrong, but they demonstrate that we must take care before drawing extreme conclusions in the matter.

If we represent behavioral traits as variables in a multi-dimensional landscape, and the survival suitability of combinations of traits as elevation in that landscape, does the landscape look like a Himalayan mountain ridge with sharp peaks, steep cliffs and deep valleys? Or does it look more like a rolling vista of hills, or perhaps even a concatenation of several contiguous high-altitude plateaus? If we do not know what this landscape looks like, then it is extremely difficult to make informed statements about the results that we should expect when reward mechanisms and consequent behaviors are modified.
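The difference between the two kinds of landscape can be made concrete with a toy one-dimensional sketch; both “survival” functions are invented for illustration, and the thresholds are arbitrary:

```python
import math
import random

# Two invented one-dimensional "survival landscapes":
# a sharp Himalayan-style peak versus a broad, rolling plateau.
def sharp_peak(x):
    return math.exp(-50 * (x - 0.5) ** 2)     # suitability falls off steeply

def plateau(x):
    return math.exp(-0.5 * (x - 0.5) ** 2)    # suitability falls off gently

# Perturb a trait that starts at the optimum (0.5) and count how many
# modified individuals keep suitability above an arbitrary 0.5 cutoff.
random.seed(1)
shifts = [random.uniform(-0.3, 0.3) for _ in range(1000)]
survivors_sharp  = sum(sharp_peak(0.5 + s) > 0.5 for s in shifts)
survivors_smooth = sum(plateau(0.5 + s)   > 0.5 for s in shifts)

# In the rugged landscape far fewer modifications remain survivable.
print(survivors_sharp, survivors_smooth)
```

The same distribution of modifications is harmless on the plateau and frequently fatal on the sharp peak, which is why knowing the shape of the landscape matters before modifying reward mechanisms.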

Can we modify while specifying conditions for survival?

Is there anything about past developments that we might use as a guide, to tell us if modifications of reward mechanisms and behaviors are survivable, and how that might work? I believe there is. I think the process is unavoidable, as it is a result of selection among differences. Darwin got us here, and he can get us out too.

Let us assume that modifying our reward mechanisms can result in personal destruction. That is not a fanciful assumption. We need only look at the worst-case scenarios in cases of addiction to see relevant examples. Similarly, we may observe that suicide is such a case, unless it is a sacrifice that serves the greater purpose of tribe or species survival.

Do all modifications lead to destruction? That seems highly unlikely, given that humans have not existed forever. There have been ancestors who probably had different brains and at least somewhat different drives and reward mechanisms. The further back you look, the more different and strange those drives and mechanisms may seem, since the species involved will have had somewhat different challenges and requirements for survival.

If there was a way that led from there to what we are now through natural selection, then why should we assume that this is the terminal state? It seems reasonable to assume that if we carried out a large number of experiments in which we modified our brains and their underlying drives and reward mechanisms to some degree, some of those experiments would not result in catastrophe. There would still be a selection process. The question is not whether there exist ways to achieve brain enhancement. Rather, we should seek out the best process. We should determine how to carry out intelligent experimentation that minimizes the rate of failure and maximizes the rate of success.

Image attribution

Wirehead Darwin: modified from the George Grantham Bain press photo collection, purchased by the Library of Congress. No restrictions.
Survival landscape: modified Height map (Wikipedia), unknown author. Public domain.

It would be helpful to discuss these theoretical concepts because there could be significant practical and existential implications.

The Global Brain (GB) is an emergent world-wide entity of distributed intelligence, facilitated by communication and the meaningful interconnections between millions of humans via technology (such as the internet).

For my purposes I take it to mean the expressive integration of all (or the majority) of human brains through technology and communication, a Metasystem Transition from the human brain to a global (Earth) brain. The GB is truly global not only in geographical terms but also in function.

It has been suggested that the GB has clear analogies with the human brain. For example, the basic unit of the human brain (HB) is the neuron, whereas the basic unit of the GB is the human brain. Whilst the HB is space-restricted within our cranium, the GB is constrained within this planet. The HB contains several regions that have specific functions themselves, but are also connected to the whole (e.g. occipital cortex for vision, temporal cortex for auditory function, thalamus etc.). The GB contains several regions that have specific functions themselves, but are connected to the whole (e.g. search engines, governments, etc.).

Some specific analogies are:

1. Broca’s area, in the inferior frontal gyrus, is associated with speech. This could be the equivalent of, say, Rupert Murdoch’s communication empire.
2. The motor cortex is the equivalent of the world-wide railway system.
3. The sensory system in the brain is the equivalent of all digital sensors, CCTV network, internet uploading facilities etc.

If we accept that the GB will eventually become fully operational (and this may happen within the next 40–50 years), then there could be severe repercussions on human evolution. Apart from the fact that we may become able to change our genetic make-up using technology (through synthetic biology or nanotechnology, for example), there could be new evolutionary pressures that help extend human lifespan to an indefinite degree.

Empirically, we find that there is a basic underlying law granting neurons the same lifespan as their human host. If natural laws are universal, then I would expect the same law to operate in similar metasystems, i.e. in my analogy where human brains are the basic operating units of the GB. In that case, I ask:

If there is an axiom positing that individual units (neurons) within a brain must live as long as the brain itself, i.e. 100–120 years, then the individual units (human brains and, therefore, whole humans) within a GB must live as long as the GB itself, i.e. indefinitely.

Humans will become so embedded and integrated into the GB’s virtual and real structures that it may make more sense, from a resource-allocation point of view, to maintain existing humans indefinitely, rather than eliminate them through ageing and create new ones, who would then need extra resources in order to re-integrate themselves into the GB.

The net result will be that humans will start experiencing an unprecedented prolongation of their lifespan, in an attempt by the GB to evolve to higher levels of complexity at a low thermodynamic cost.

Marios Kyriazis
http://www.elpistheory.info

My generation was the last one to learn to use a slide rule in school. Today that skill is totally obsolete, as is the ability to identify the Soviet Socialist Republics on a map, to write a program in FORTRAN, or to drive a car with a standard transmission.

We live in a world of instant access to information and where technology is making exponential advances in synthetic biology, nanotechnology, genetics, robotics, neuroscience and artificial intelligence. In this world, we should not be focused on improving the classrooms but should be devoting resources to improving the brains that the students bring to that classroom.

To prepare students for this high-velocity, high-technology world the most valuable skill we can teach them is to be better learners so they can leap from one technological wave to the next. That means education should not be about modifying the core curricula of our schools but should be about building better learners by enhancing each student’s neural capacities and motivation for life-long learning.

Less than two decades ago this concept would have been inconceivable. We used to think that brain anatomy (and hence learning capacity) was fixed at birth. But recent breakthroughs in the neuroscience of learning have demonstrated that this view is fundamentally wrong.

In the past few decades, neuroscience research has demonstrated that, contrary to popular belief, the brain is not static. Rather, it is highly modifiable (“plastic”) throughout life, and this remarkable “neuroplasticity” is primarily experience-dependent. Neuroplasticity research shows that the brain changes its very structure with each different activity it performs, perfecting its circuits so it is better suited to the task at hand. Neurological capacities and competencies are both measurable and significantly consequential to educational outcomes.

This means that the neural capacities that form the building blocks for learning — attention & focus, memory, prediction & modeling, processing speed, spatial skills, and executive functioning — can be improved throughout life through training. Just as physical exercise is a well-known and well-accepted means to improve health for anyone, regardless of age or background, so too can the brain be put “into shape” for optimal learning.

If any of these neural capacities are enhanced, you would see significant improvements in a person’s ability to understand and master new situations.

While these basic neural capacities are well known by scientists and clinicians today, they are rarely used to develop students into better learners by schools, teachers or parents. There is too little awareness and too few tools available for enhancing a student’s capacity and ability to learn. The failure to focus on optimizing each student’s neural capacities for learning is resulting in widespread failure of the educational systems, particularly for the underprivileged.

Gone are the days when you could equip students with slide rules and a core of knowledge and skills and expect them to achieve greatness. Our children already inhabit a world where new game platforms and killer apps appear and are surpassed in dizzying profusion and speed. They are already adapting to the dynamics of the 21st century. But we can help them adapt more methodically and systematically by focusing our attention on improving their capacity to learn throughout their lives.

This far-reaching and potentially revolutionary conclusion is based on recent research breakthroughs and thus may be contrary to the past beliefs of many teachers, administrators, parents and students, who have historically emphasized classroom size and curriculum as the key to improved learning.

Just as new knowledge and understanding is revolutionizing the way we communicate, trade, or practice medicine, so too must it transform the way we learn. For students, that revolution is already well under way, but it’s happening outside of their schools. We owe it to them to equip them with all the capabilities they’ll need to thrive in the limitless world beyond the classroom.

I believe that while it’s important to leave a better country for our children, it’s more important that we leave better children for our country.

Naveen Jain is a philanthropist, entrepreneur and technology pioneer. He is a founder and CEO of Intelius, a Seattle-based company that empowers consumers with information to make intelligent decisions about personal safety and security. Prior to Intelius, Naveen Jain founded InfoSpace and took it public in 1998 on NASDAQ. Naveen Jain has been awarded many honors for his entrepreneurial successes and leadership skills including “Ernst & Young Entrepreneur of the Year”, “Albert Einstein Technology Medal” for pioneers in technology, “Top 20 Entrepreneurs” by Red Herring, “Six People Who Will Change the Internet” by Information Week, among other honors.

The Stoic philosophical school shares several ideas with modern attempts at prolonging human lifespan. The Stoics believed in a non-dualistic, deterministic paradigm, where logic and reason formed part of their everyday life. The aim was to attain virtue, taken to mean human excellence.

I have recently described a model specifically referring to indefinite lifespans, where human biological immortality is a necessary and inevitable consequence of natural evolution (for details see www.elpistheory.info and for a comprehensive summary see http://cid-3d83391d98a0f83a.office.live.com/browse.aspx/Immo…=155370157).

This model is based on a deterministic, non-dualistic approach, described by the laws of Chaos theory (dynamical systems). It suggests that, in order to accelerate the natural transition from human evolution by natural selection to a post-Darwinian domain (where indefinite lifespans are the norm), it is necessary to lead a life of constant intellectual stimulation, innovation and avoidance of routine (see http://www.liebertonline.com/doi/abs/10.1089/rej.2005.8.96?journalCode=rej and http://www.liebertonline.com/doi/abs/10.1089/rej.2009.0996), i.e. to seek human virtue (excellence, brilliance and wisdom, as opposed to mediocrity and routine). The search for intellectual excellence increases neural inputs, which effect epigenetic changes that can up-regulate age-repair mechanisms.

Thus it is possible to reconcile Stoic ideas with the processes that lead to both technological and developmental Singularities, using approaches that are deeply embedded in human nature and transcend time.

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall on June 12–13, following the inaugural conference in Los Angeles in December 2009. Ray Kurzweil, futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, will be the keynote speaker.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

Humanity+ Summit @ Harvard is an unmissable event for everyone who is interested in the evolution of the rapidly changing human condition, and the impact of accelerating technological change on the daily lives of individuals, and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, giving the conference the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.

Another risk is the loss of human rationality while human life itself is preserved. In any society there are many people with limited cognitive abilities, and most achievements are made by a small number of talented people. Genetic and social degradation, declining standards of education, and the loss of logical-reasoning skills could temporarily lower the intelligence of particular groups of people. As long as humanity’s population is very large, this is not so bad, because there will always be enough intelligent people. A significant drop in population after a non-global disaster, however, could exacerbate this problem, and the low intelligence of the remaining people would reduce their chances of survival. One can even imagine the absurd scenario in which people degrade so far that a new species without full-fledged intelligence evolves from us, and only later does that species evolve a new intelligence of its own.
A more dangerous risk is a decline of intelligence caused by the spread of technological contaminants (or the use of a certain weapon). For example, consider the constantly growing global contamination by arsenic, which is used in various technological processes. Sergio Dani wrote about this in his article “Gold, coal and oil”: http://sosarsenic.blogspot.com/2009/11/gold-coal-and-oil-reg…is-of.html, http://www.medical-hypotheses.com/article/S0306-9877(09)00666-5/abstract
Arsenic released during the mining of gold remains in the biosphere for millennia. Dani links arsenic to Alzheimer’s disease. In another paper he demonstrates that increasing concentrations of arsenic lead to an exponential increase in the incidence of Alzheimer’s disease. He believes that people are particularly vulnerable to arsenic poisoning because they have large brains and long lifespans. If, as Dani suggests, people adapt to high levels of arsenic in the course of evolution, this will lead to a decline in brain size and life expectancy, and human intellect will be lost.
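The claimed exponential relationship can be sketched as a simple dose-response model. The baseline rate and growth constant below are illustrative placeholders, not values taken from Dani’s paper:

```python
import math

def incidence(concentration, baseline=0.01, k=0.5):
    """Hypothetical exponential dose-response: modelled incidence
    grows as exp(k * concentration) above a baseline rate.
    Both parameters are illustrative placeholders."""
    return baseline * math.exp(k * concentration)

# A signature of exponential growth: every increase of ln(2)/k in
# concentration doubles the modelled incidence.
doubling_step = math.log(2) / 0.5
for c in (0.0, doubling_step, 2 * doubling_step):
    print(f"concentration {c:.2f} -> incidence {incidence(c):.4f}")
```

The practical consequence of such a model, if it held, is that even modest further increases in ambient arsenic would produce disproportionately large increases in disease incidence.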
Besides arsenic, contamination by many other neurotoxic substances is occurring: CO, CO2, methane, benzene, dioxin, mercury, lead, etc. Although the level of pollution by each of them separately is below health standards, the sum of the impacts may be larger. One proposed cause of the fall of the Roman Empire was the gradual poisoning of its citizens (though not of the barbarians) by lead from water pipes. Of course, the Romans could not have known about these remote and unforeseen consequences; we, too, may not know about the many consequences of our own activities.
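The point that individually “safe” exposures can still add up is a standard idea in toxicology, the hazard index: sum the ratio of each measured level to its individual limit, and worry when the total exceeds 1. A minimal sketch with invented numbers (none of these are real exposure data):

```python
# Each pollutant maps to (measured level, individual safety limit),
# in arbitrary units. All values are illustrative, not real data.
pollutants = {
    "lead":    (0.6, 1.0),
    "mercury": (0.5, 1.0),
    "benzene": (0.4, 1.0),
}

# Every substance is individually below its limit...
assert all(level < limit for level, limit in pollutants.values())

# ...yet the combined hazard index (sum of level/limit ratios)
# exceeds 1, the conventional threshold of concern.
hazard_index = sum(level / limit for level, limit in pollutants.values())
print(f"hazard index = {hazard_index:.1f}")  # prints "hazard index = 1.5"
```

This is exactly the situation described above: no single pollutant trips its own alarm, but the aggregate burden does.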
Alcohol and most recreational drugs also contribute to dementia, as do many medications (some heartburn remedies, for example, list dementia as a side effect on their package inserts). So can rigid ideological systems, or memes.
A number of infections, particularly prion infections, also lead to dementia.
Despite all this, the average IQ of people is growing, as is life expectancy.