A response to McClelland and Plaut’s
comments in the Phys.org story:

Do brain cells need to be connected to have meaning?

Asim Roy
Department of Information Systems
Arizona State University
Tempe, Arizona, USA
www.lifeboat.com/ex/bios.asim.roy

Article reference:

Roy A. (2012). “A theory of the brain: localist representation is used widely in the brain.” Front. Psychology 3:551. doi: 10.3389/fpsyg.2012.00551

Original article: http://www.frontiersin.org/Journal/FullText.aspx?s=196&n…2012.00551

Comments by Plaut and McClelland: http://phys.org/news273783154.html

Note that most of the arguments of Plaut and McClelland are theoretical, whereas the localist theory I presented is very much grounded in four decades of evidence from neurophysiology. Note also that McClelland may have inadvertently subscribed to the localist representation idea with the following statement:

“Even here, the principles of distributed representation apply: the same place cell can represent very different places in different environments, for example, and two place cells that represent overlapping places in one environment can represent completely non-overlapping places in other environments.”

The notion that a place cell can “represent” one or more places in different environments is very much a localist idea. It implies that the place cell has meaning and interpretation. I start with responses to McClelland’s comments first. Please reference the Phys.org story to find these quotes from McClelland and Plaut and see the contexts.

1. McClelland – “what basis do I have for thinking that the representation I have for any concept – even a very familiar one – is associated with a single neuron, or even a set of neurons dedicated only to that concept?”

There are four decades of research in neurophysiology on receptive field cells in the sensory processing systems and on hippocampal place cells showing that single cells can encode a concept – from motion detection, color coding and line orientation detection to identifying a particular location in an environment. Neurophysiologists have also found category cells in the brains of humans and animals. See the next response, which has more details on category cells. The neurophysiological evidence that single cells encode concepts is substantial, starting as early as the retinal ganglion cells. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Thus there is ample basis to think that a single neuron can be dedicated to a concept, even at a very low level (e.g., a dot, a line or an edge).

2. McClelland – “Is each such class represented by a localist representation in the brain?”

Cells that represent categories have been found in human and animal brains. Fried et al. (1997) found some MTL (medial temporal lobe) neurons that respond selectively to gender and facial expression and Kreiman et al. (2000) found MTL neurons that respond to pictures of particular categories of objects, such as animals, faces and houses. Recordings of single-neuron activity in the monkey visual temporal cortex led to the discovery of neurons that respond selectively to certain categories of stimuli such as faces or objects (Logothetis and Sheinberg, 1996; Tanaka, 1996; Freedman and Miller, 2008).

I quote Freedman and Miller (2008): “These studies have revealed that the activity of single neurons, particularly those in the prefrontal and posterior parietal cortices (PPCs), can encode the category membership, or meaning, of visual stimuli that the monkeys had learned to group into arbitrary categories.”

Lin et al. (2007) report finding “nest cells” in mouse hippocampus that fire selectively when the mouse observes a nest or a bed, regardless of the location or environment.

Gothard et al. (2007) found single neurons in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor. They found one neuron that responded in particular to threatening monkey faces. Their general observation is (p. 1674): “These examples illustrate the remarkable selectivity of some neurons in the amygdala for broad categories of stimuli.”

Thus the evidence is substantial that category cells exist in the brain.

References:

  1. Fried, I., McDonald, K. & Wilson, C. (1997). Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron 18, 753–765.
  2. Kreiman, G., Koch, C. & Fried, I. (2000) Category-specific visual responses of single neurons in the human medial temporal lobe. Nat. Neurosci. 3, 946–953.
  3. Freedman DJ, Miller EK (2008) Neural mechanisms of visual categorization: insights from neurophysiology. Neurosci Biobehav Rev 32:311–329.
  4. Logothetis NK, Sheinberg DL (1996) Visual object recognition. Annu Rev Neurosci 19:577–621.
  5. Tanaka K (1996) Inferotemporal cortex and object vision. Annu Rev Neurosci 19:109–139.
  6. Lin, L. N., Chen, G. F., Kuang, H., Wang, D., & Tsien, J. Z. (2007). Neural encoding of the concept of nest in the mouse brain. Proceedings of the National Academy of Sciences of the United States of America, 104, 6066–6071.
  7. Gothard, K.M., Battaglia, F.P., Erickson, C.A., Spitler, K.M. & Amaral, D.G. (2007). Neural Responses to Facial Expression and Face Identity in the Monkey Amygdala. J. Neurophysiol. 97, 1671–1683.

3. McClelland – “Do I have a localist representation for each phase of every individual that I know?”

Obviously, more research is needed to answer these types of questions, but Saddam Hussein and Jennifer Aniston type cells may someday provide clues.

4. McClelland – “Let us discuss one such neuron – the neuron that fires substantially more when an individual sees either the Eiffel Tower or the Leaning Tower of Pisa than when he sees other objects. Does this neuron ‘have meaning and interpretation independent of other neurons’? It can have meaning for an external observer, who knows the results of the experiment – but exactly what meaning should we say it has?”

On one hand, this obviously brings into focus a lot of the work in neurophysiology. It could boil down to asking who is to interpret the activity of receptive fields, place and grid cells and so on, and whether such interpretation can be independent of other neurons. In neurophysiology, the interpretation of these cells (e.g., for motion detection, color coding, edge detection, place coding and so on) is being verified independently, with repeated experiments, in research labs throughout the world. So it is not the case that some researcher arbitrarily assigns meaning to cells and that such results cannot be replicated and verified. For many such cells, the assignment of meaning has been verified by different labs.

On the other hand, this probably is a question about whether that cell is a category cell and how to assign meaning to it. The interpretation of a cell that responds to pictures of the Eiffel Tower and the Leaning Tower of Pisa, but not to other landmarks, could be somewhat similar to a place cell that responds to a certain location, or it could be similar to a category cell. Similar cells have been found in the MTL region — a neuron firing to two different basketball players, a neuron firing to Luke Skywalker and Yoda, both characters of Star Wars, and another firing to a spider and a snake (but not to other animals) (Quian Quiroga & Kreiman, 2010a). Quian Quiroga and Kreiman (2010b, p. 298) had the following observation on these findings: “…. one could still argue that since the pictures the neurons fired to are related, they could be considered the same concept, in a high level abstract space: ‘the basketball players,’ ‘the landmarks,’ ‘the Jedi of Star Wars,’ and so on.”

If these are category cells, there is obviously the question of what other objects are included in each category. But it is clear that the cells have meaning, even though a category might include other items.

References:

  1. Quian Quiroga, R. & Kreiman, G. (2010a). Measuring sparseness in the brain: Comment on Bowers (2009). Psychological Review, 117, 1, 291–297.
  2. Quian Quiroga, R. & Kreiman, G. (2010b). Postscript: About Grandmother Cells and Jennifer Aniston Neurons. Psychological Review, 117, 1, 297–299.

5. McClelland – “In the context of these observations, the Cerf experiment considered by Roy may not be as impressive. A neuron can respond to one of four different things without really having a meaning and interpretation equivalent to any one of these items.”

The Cerf experiment is not impressive? What McClelland is really questioning is the existence of highly selective cells in the brains of humans and animals and the meaning and interpretation associated with those cells. This obviously has broader implications and raises questions about a whole range of neurophysiological studies and their findings. For example, are the “nest cells” of Lin et al. (2007) really category cells sending signals to the mouse brain that there is a nest nearby? Or should one really believe that Freedman and Miller (2008) found category cells in the monkey visual temporal cortex that identify certain categories of stimuli such as faces or objects? Or should one believe that Gothard et al. (2007) found category cells in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as the monkeys viewed them on a computer monitor? And how about that one neuron Gothard et al. (2007) found that responded in particular to threatening monkey faces? And does this question about the meaning and interpretation of highly selective cells also apply to the simple and complex receptive fields of the retinal ganglion cells and the primary visual cortex? Note that a Nobel Prize has already been awarded for the discovery of these highly selective cells.

The evidence for the existence of highly selective cells in the brains of humans and animals is substantial and irrefutable, although one can always ask, theoretically, “what else does it respond to?” Note that McClelland’s question contradicts his own view that place cells, which are highly selective cells, exist.

6. McClelland – “While we sometimes (Kumeran & McClelland, 2012 as in McClelland & Rumelhart, 1981) use localist units in our simulation models, it is not the neurons, but their interconnections with other neurons, that gives them meaning and interpretation….Again we come back to the patterns of interconnections as the seat of knowledge, the basis on which one or more neurons in the brain can have meaning and interpretation.”

“one or more neurons in the brain can have meaning and interpretation” – that sounds like localist representation, but obviously that’s not what is meant. Anyway, there’s no denying that there is knowledge embedded in the connections between the neurons, but that knowledge is integrated by the neurons to create additional knowledge. So the neurons have additional knowledge that does not exist in the connections. And single cell studies are focused on discovering the integrated knowledge that exists only in the neurons themselves. For example, the receptive field cells in the sensory processing systems and the hippocampal place cells show that some cells detect direction of motion, some code for color, some detect orientation of a line and some detect a particular location in an environment. And there are cells that code for certain categories of objects. That kind of knowledge is not easily available in the connections. In general, consolidated knowledge exists within the cells and that’s where the general focus has been of single cell studies.

7. Plaut – “Asim’s main argument is that what makes a neural representation localist is that the activation of a single neuron has meaning and interpretation on a stand-alone basis. This is about how scientists interpret neural activity. It differs from the standard argument on neural representation, which is about how the system actually works, not whether we as scientists can make sense of a single neuron. These are two separate questions.”

Doesn’t “how the system actually works” depend on our making “sense of a single neuron?” The representation theory has always been centered around single neurons, whether they have meaning on a stand-alone basis or not. So how does making “sense of a single neuron” become a separate question now? And how are these two separate questions addressed in the literature?

8. Plaut – “My problem is that his claim is a bit vacuous because he’s never very clear about what a coherent ‘meaning and interpretation’ has to be like…. but never lays out the constraints that this is meaning and interpretation, and this isn’t. Since we haven’t figured it out yet, what constitutes evidence against the claim? There’s no way to prove him wrong.”

In the article, I used the standard definition of localist units from cognitive science, which is a simple one: localist units have meaning and interpretation. There is no need to invent a new definition of localist representation. The standard definition is widely accepted by the cognitive science community, and I drew attention to that in the article with verbatim quotes from Plate, Thorpe and Elman. Here they are again.

  • Plate (2002): “Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”
  • Thorpe (1995, p. 550): “With a local representation, activity in individual units can be interpreted directly … with distributed coding individual units cannot be interpreted without knowing the state of other units in the network.”
  • Elman (1995, p. 210): “These representations are distributed, which typically has the consequence that interpretable information cannot be obtained by examining activity of single hidden units.”

The terms “meaning” and “interpretation” are bounded only by contrast with the alternative representation scheme, in which the “meaning” of a unit depends on other units. That is how the standard definition constrains them, and it has been that way for a long time.
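The contrast drawn in these definitions can be made concrete with a toy sketch. This is my own illustration, not from the article, and all concept names and activity patterns in it are invented for the example: in a localist scheme each unit is dedicated to one concept, so a single unit's activity is interpretable in isolation, while in a distributed scheme meaning is recoverable only from the whole pattern of activity.

```python
# Toy illustration of the Plate/Thorpe definitions (hypothetical data).

concepts = ["dog", "cat", "house"]

# Localist: one dedicated unit per concept (one-hot), so activity on a
# single unit identifies a concept by itself.
def interpret_localist_unit(unit_index):
    return concepts[unit_index]

# Distributed: overlapping patterns over the same three units. Unit 0 is
# active for both "dog" and "cat", so its activity alone is ambiguous;
# meaning depends on the state of the other units.
distributed = {"dog": [1, 1, 0], "cat": [1, 0, 1], "house": [0, 1, 1]}

def interpret_distributed(pattern):
    # Only the full activity pattern picks out a unique concept.
    return next(c for c, p in distributed.items() if p == pattern)

print(interpret_localist_unit(0))        # dog
print(interpret_distributed([1, 0, 1]))  # cat
# Unit 0 alone cannot distinguish "dog" from "cat":
print(sum(p[0] for p in distributed.values()))  # 2 concepts share unit 0
```

The point of the sketch is only that interpretability of a single unit, not the wiring, is what the standard definition turns on.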

Neither Plaut nor McClelland has questioned the fact that receptive fields in the sensory processing systems have meaning and interpretation. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Here is part of the Nobel Prize citation:

“Thus, they have been able to show how the various components of the retinal image are read out and interpreted by the cortical cells in respect to contrast, linear patterns and movement of the picture over the retina. The cells are arranged in columns, and the analysis takes place in a strictly ordered sequence from one nerve cell to another and every nerve cell is responsible for one particular detail in the picture pattern.”

Neither Plaut nor McClelland has questioned the fact that place cells have meaning and interpretation. McClelland, in fact, accepts that place cells indicate locations in an environment, which means he accepts that they have meaning and interpretation.

9. Plaut – “If you look at the hippocampal cells (the Jennifer Aniston neuron), the problem is that it’s been demonstrated that the very same cell can respond to something else that’s pretty different. For example, the same Jennifer Aniston cell responds to Lisa Kudrow, another actress on the TV show Friends with Aniston. Are we to believe that Lisa Kudrow and Jennifer Aniston are the same concept? Is this neuron a Friends TV show cell?”

I want to clarify three things here. First, localist cells are not necessarily grandmother cells. Grandmother cells are a special case of localist cells, and this has been made clear in the article. For example, in the primary visual cortex, there are simple and complex cells that are tuned to visual characteristics such as orientation, color, motion and shape. They are localist cells, but not grandmother cells.

Second, the analysis in the article of the interactive activation (IA) model of McClelland and Rumelhart (1981) shows that a localist unit can respond to more than one concept in the next higher level. For example, a letter unit can respond to many word units. And the simple and complex cells in the primary visual cortex will respond to many different objects.
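This many-to-many connectivity can be sketched in a few lines. The sketch below is a hypothetical toy, not the original IA simulation code, and the word list is invented: a single localist letter unit excites every word unit whose word contains that letter, so one interpretable unit participates in many higher-level concepts.

```python
# Hypothetical sketch of letter-to-word connectivity in an
# interactive-activation-style network (invented word list).

words = ["cat", "can", "car", "dog"]

def words_excited_by(letter):
    """Word units receiving excitation from one localist letter unit."""
    return [w for w in words if letter in w]

print(words_excited_by("c"))  # one letter unit feeds three word units
print(words_excited_by("d"))  # another feeds a single word unit
```

A localist letter unit thus responds in the context of many words without losing its stand-alone interpretation as that letter.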

Third, there are indeed category cells in the brain. Response No. 2 above to McClelland’s comments cites findings in neurophysiology on category cells. So the Jennifer Aniston/Lisa Kudrow cell could very well be a category cell, much like the one that fired to spiders and snakes (but not to other animals) and the one that fired for both the Eiffel Tower and the Tower of Pisa (but not to other landmarks). But category cells have meaning and interpretation too. The Jennifer Aniston/Lisa Kudrow cell could be a Friends TV show cell, as Plaut suggested, but it still has meaning and interpretation. However, note that Koch (2011, p. 18, 19) reports finding another Jennifer Aniston MTL cell that didn’t respond to Lisa Kudrow:

One hippocampal neuron responded only to photos of actress Jennifer Aniston but not to pictures of other blonde women or actresses; moreover, the cell fired in response to seven very different pictures of Jennifer Aniston.

References:

  1. Koch, C. (2011). Being John Malkovich. Scientific American Mind, March/April, 18–19.

10. Plaut – “Only a few experiments show the degree of selectivity and interpretability that he’s talking about…. In some regions of the medial temporal lobe and hippocampus, there seem to be fairly highly selective responses, but the notion that cells respond to one concept that is interpretable doesn’t hold up to the data.”

There are place cells in the hippocampus that identify locations in an environment. Locations are concepts. And McClelland admits place cells represent locations. There is also plenty of evidence for the existence of category cells in the brain (see Response No. 2 above to McClelland’s comments), and categories are, of course, concepts. And simple and complex receptive fields also represent concepts such as direction of motion, line orientation, edges, shapes, color and so on. There is thus an abundance of data in neurophysiology showing that “cells respond to one concept that is interpretable,” and that evidence is growing.

The existence of highly tuned and selective cells that have meaning and interpretation is now beyond doubt, given the volume of evidence from neurophysiology over the last four decades.

The historical context in which Brain Computer Interfaces (BCIs) emerged was addressed in a previous article, “To Interface the Future: Interacting More Intimately with Information” (Kraemer, 2011). This review addresses the methods that have formed current BCI knowledge, the directions in which it is heading, and its emerging risks and benefits. It also addresses why neural stem cells can help establish better BCI integration, the overall mapping of where various cognitive activities occur, and how a future BCI could potentially provide direct input to the brain instead of only receiving and processing information from it.

EEG Origins of Thought Pattern Recognition
Early BCI work to study cognition and memory involved implanting electrodes into rats’ hippocampi and recording EEG patterns in very specific circumstances while the rats explored a track, both awake and asleep (Foster & Wilson, 2006; Tran, 2012). Some of these patterns were later replayed by the rats in reverse chronological order, indicating retrieval of the memory both while awake and while asleep (Foster & Wilson, 2006). Dr. John Chapin showed that movement commands can be written to a rat’s brain so that the rat can be controlled remotely (Birhard, 1999; Chapin, 2008).

A few human paraplegics have volunteered for somewhat similar electrode implants into their brains, using the enhanced BrainGate2 hardware and software device as a primary data input device (UPI, 2012; Hochberg et al., 2012). Clinical trials of an implanted BCI are underway with the BrainGate2 Neural Interface System (BrainGate, 2012; Tran, 2012). Currently, the integration of the electrodes into the brain or peripheral nervous system can be somewhat slow and incomplete (Grill et al., 2001). Nevertheless, research to optimize the electro-stimulation patterns and voltage levels in the electrodes, to combine cell cultures and neurotrophic factors into the electrode, and to enhance “endogenous pattern generators” through rehabilitative exercises is likely to bring integration closer to full functional restoration in prostheses (Grill et al., 2001) and to improved functionality in other BCIs as well.

When integrating neuro-chips into the peripheral nervous system for artificial limbs, or even directly into the cerebral sensorimotor cortex as has been done for some military veterans, neural stem cells would likely help heal the damage at the site of the lost limb and speed up the rate at which the neuro-chip is integrated into the innervating tissue (Grill et al., 2001; Park, Teng, & Snyder, 2002). Neural stem cells are best known for their natural regenerative ability, which would also help re-establish the effectiveness of the damaged original neural connections (Grill et al., 2001).

Neurochemistry and Neurotransmitters to be Mapped via Genomics
Cognition is electrochemical, and thus the electrodes tell only part of the story. The chemicals are more clearly coded for by specific genes. Jaak Panksepp is breeding one line of rats that is particularly prone to joy and social interaction and another that tends toward sadness and more solitary behavior (Tran, 2012). He asserts that emotions emerged from genetic causes (Panksepp, 1992; Tran, 2012) and plans to sequence the genomes of members of both lines to determine the genomic causes of, or correlations with, these core dispositions (Tran, 2012). Such causes are quite likely to apply to humans, since similar or homologous genes are likely to be present in the human genome. Candidate chemicals like dopamine and serotonin may be confirmed genetically, new neurochemicals may be identified, or both. It is a promising long-term study, and large databases of human genomes accompanied by the medical history of each individual genome could yield similar discoveries. A private study of the medical and genomic records of the population of Iceland is underway and has in the last 10 years produced unique genetic diagnostic tests for increased risk of type 2 diabetes, breast cancer, prostate cancer, glaucoma, high cholesterol/hypertension and atrial fibrillation, as well as a personal genomic testing service for these genetic factors (deCODE, 2012; Weber, 2002). By breeding two lines of rats based on whether they display joyful behavior or not, the lines should likewise develop uniquely different genetic markers in their respective populations (Tran, 2012).

fMRI and fNIRS Studies to Map the Flow of Thoughts into a Connectome
Though EEG-based BCIs have been effective in translating movement intentionality of the cerebral motor cortex for neuroprostheses, or for moving a computer cursor or other directional or navigational device, they have not advanced the understanding of the underlying processes of other types or modes of cognition or experience (NPG, 2010; Wolpaw, 2010). The use of functional Magnetic Resonance Imaging (fMRI), functional Near-Infrared Spectroscopy (fNIRS) and sometimes Positron Emission Tomography (PET) scans for literally deeper insights into brain metabolism, and thus neural activity, has increased in order to determine the relationships or connections among regions of the brain, now known collectively as the connectome (Wolpaw, 2010).

Dr. Read Montague explained broadly how his team linked several fMRI centers around the world across the Internet so that various economic games could be played while the region-specific brain activity of all the participating players was recorded in real time at each step of the game (Montague, 2012). The publication on this fMRI experiment shows the interaction between baseline suspicion in the amygdala and the ongoing evaluation of the specific situation, which may increase or decrease that suspicion and which occurred in the parahippocampal gyrus (Bhatt et al., 2012). Since fMRI equipment is very large, immobile and expensive, it cannot be used in many situations (Solovey et al., 2012). Essentially as a substitute for fMRI, fNIRS was developed; it can be worn on the head and is far more convenient than the traditional full-body fMRI scanner, which requires a sedentary or prone position to work (Solovey et al., 2012).

In a study of people multitasking on a computer with a head-mounted fNIRS device called Brainput, the system automatically modified the behavior of two remotely controlled robots whenever it detected information overload in the brain of the human navigating both robots simultaneously over several differently designed terrains (Solovey et al., 2012).

Writing Electromagnetic Information to the Brain?
These two examples of the Human Connectome Project, led by the National Institutes of Health (NIH) in the US and also underway in other countries, show how early-stage the mapping of brain region interaction remains for higher cognitive functions beyond sensory motor interactions. Nevertheless, one Canadian neuroscientist has taken volunteers for an early example of writing electromagnetic input into the human brain to induce paranormal kinds of subjective experience, and has been doing so since 1987 (Cotton, 1996; Nickell, 2005; Persinger, 2012). Dr. Michael Persinger applies small electrical signals across the temporal lobes in an environment of partial audio-visual isolation to reduce neural distraction (Persinger, 2003). These microtesla magnetic fields, especially when applied to the right hemisphere of the temporal lobes, often induced a sense of an “other” presence, generally described as supernatural in origin by the volunteers (Persinger, 2003). This early example shows how input can be delivered directly to the brain as well as recorded from it.

Higher Resolution Recording of Neural Data
Electrodes for EEG and the electromagnets used in fMRI and fNIRS still record or send data at the macro level of entire regions or areas of the brain. Work on intracellular recording, such as the nanotube transistor, allows for better understanding at the level of individual neurons (Gao et al., 2012). Of course, when introducing micro-scale recording or transmitting equipment into the human brain, safety is a major issue. Some progress has been made: an ingestible microchip called the Raisin can transmit information gathered during its voyage through the digestive system (Kessel, 2009). Dr. Robert Freitas has designed many nanoscale devices, such as Respirocytes, Clottocytes and Microbivores, to replace or augment red blood cells, platelets and phagocytes respectively; these can in principle be fabricated and do appear to meet the miniaturization and propulsion requirements necessary to get into the bloodstream and arrive at the targeted system they are programmed to reach (Freitas, 1998; Freitas, 2000; Freitas, 2005; Freitas, 2006).

The primary obstacle is the tremendous gap between assembling at the microscopic level and at the molecular level. Dr. Richard Feynman described the crux of this struggle to bridge the divide between atoms in his now-famous talk of December 29, 1959, “There’s Plenty of Room at the Bottom” (Feynman, 1959). To encourage progress toward the ultimate goal of molecular manufacturing by enabling theoretical and experimental work, the Foresight Institute has awarded Feynman Prizes annually since 1997 for contributions to this field, called nanotechnology (Foresight, 2012).

The Current State of the Art and Science of Brain Computer Interfaces
Many neuroscientists think that cellular or even atomic level resolution is probably necessary to understand, and certainly to interface with, the brain at the level of conceptual thought, memory storage and retrieval (Ptolemy, 2009; Koene, 2010), but at this early stage of the Human Connectome Project such evaluations are quite preliminary. The convergence of noninvasive brain scanning technology with implantable devices in volunteer patients, supplemented with neural stem cells and neurotrophic factors to facilitate the melding of biological and artificial intelligence, will allow for many medical benefits, for paraplegics at first and later for others such as intelligence analysts, soldiers and civilians.

Some scientists and experts in Artificial Intelligence (AI), such as Ben Goertzel, Ray Kurzweil, Kevin Warwick, Stephen Hawking, Nick Bostrom, Peter Diamandis, Dean Kamen and Hugo de Garis, express the concern that AI software is on track to exceed human biological intelligence before the middle of the century (Bostrom, 2009; de Garis, 2009; Ptolemy, 2009). The need for fully functioning BCIs that integrate higher-order conceptual thinking, memory recall and imagination into cybernetic environments gains ever more urgency if we consider the existential risk to the long-term survival of the human species, or of the eventual natural descendants of that species. Such an intimate and fully integrated BCI would then act as a shield against the possible emergence of an AI independent of us as a life form, and thus a possible rival and intellectually superior threat to human heritage and dominance on this planet and its immediate solar-system vicinity.

References

Bhatt MA, Lohrenz TM, Camerer CF, Montague PR. (2012). Distinct contributions of the amygdala and parahippocampal gyrus to suspicion in a repeated bargaining game. Proc. Nat’l Acad. Sci. USA, 109(22):8728–8733. Retrieved October 15, 2012, from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3365181/pdf/pnas.201200738.pdf.

Birhard, K. (1999). The science of haptics gets in touch with prosthetics. The Lancet, 354(9172), 52–52. Retrieved from http://search.proquest.com/docview/199023500

Bostrom, N. (2009). When Will Computers Be Smarter Than Us? Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/superintelligence-humanity-oxford-opinions-contributors-artificial-intelligence-09-bostrom.html.

BrainGate. (2012). BrainGate — Clinical Trials. Retrieved October 15, 2012, from http://www.braingate2.org/clinicalTrials.asp.

Chapin, J. (2008). Robo Rat — The Brain/Machine Interface [Video]. Retrieved October 19, 2012, from https://www.youtube.com/watch?v=-EvOlJp5KIY.

Cotton, I. (1996). Dr. Persinger’s god machine. Free Inquiry, 17, 47–51. Retrieved from http://search.proquest.com/docview/230100330.

de Garis, H. (2009, June 22). The Coming Artilect War. Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/cosmist–terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html.

deCODE genetics. (2012). deCODE genetics – Products. Retrieved October 26, 2012, from http://www.decode.com/products.

Feynman, R. (1959, December 29). There’s Plenty of Room at the Bottom, An Invitation to Enter a New Field of Physics. Caltech Engineering and Science. 23(5)22–36. Retrieved October 17, 2012, from http://calteches.library.caltech.edu/47/2/1960Bottom.pdf.

Foresight Institute. (2012). FI sponsored prizes & awards. Retrieved October 17, 2012, from http://www.foresight.org/FI/fi_spons.html.

Foster, D. J., & Wilson, M. A. (2006). Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, 440(7084), 680–3. doi: 10.1038/nature04587.

Freitas, R. (1998). Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell, Artificial Cells, Blood Substitutes, and Immobil. Biotech.26(1998):411–430. Retrieved October 15, 2012, from http://www.foresight.org/Nanomedicine/Respirocytes.html.

Freitas, R. (2000, June 30). Clottocytes: Artificial Mechanical Platelets,” Foresight Update (41)9–11. Retrieved October 15, 2012, from http://www.imm.org/publications/reports/rep018.

Freitas, R. (2005. April). Microbivores: Artificial Mechanical Phagocytes using Digest and Discharge Protocol. J. Evol. Technol. (14)55–106. Retrieved October 15, 2012, from http://www.jetpress.org/volume14/freitas.pdf.

Freitas, R. (2006. September). Pharmacytes: An Ideal Vehicle for Targeted Drug Delivery. J. Nanosci. Nanotechnol. (6)2769–2775. Retrieved October 15, 2012, from http://www.nanomedicine.com/Papers/JNNPharm06.pdf.

Gao, R., Strehle, S., Tian, B., Cohen-Karni, T. Xie, P., Duan, X., Qing, Q., & Lieber, C.M. (2012). “Outside looking in: Nanotube transistor intracellular sensors” Nano Letters. 12(3329−3333). Retrieved September 7, 2012, from http://cmliris.harvard.edu/assets/NanoLet12-3329_RGao.pdf.

Grill, W., McDonald, J., Peckham, P., Heetderks, W., Kocsis, J., & Weinrich, M. (2001). At the interface: convergence of neural regeneration and neural prostheses for restoration of function. Journal Of Rehabilitation Research & Development, 38(6), 633–639.

Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., Donoghue, J. P. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372–5. Retrieved from http://search.proquest.com/docview/1017604144.

Kessel, A. (2009, June 8). Proteus Ingestible Microchip Hits Clinical Trials. Retrieved October 15, 2012, from http://singularityhub.com/2009/06/08/proteus–ingestible-microchip-hits-clinical-trials.

Koene, R.A. (2010). Whole Brain Emulation: Issues of scope and resolution, and the need for new methods of in-vivo recording. Presented at the Third Conference on Artificial General Intelligence (AGI2010). March, 2010. Lugano, Switzerland. Retrieved August 29, 2010, from http://rak.minduploading.org/publications/publications/koene…=0&d=1.

Kraemer, W. (2011, December). To Interface the Future: Interacting More Intimately with Information. Journal of Geoethical Nanotechnology. 6(2). Retrieved December 27, 2011, from http://www.terasemjournals.com/GNJournal/GN0602/kraemer.html.

Montague, R. (2012, June). What we’re learning from 5,000 brains. Retrieved October 15, 2012, from http://video.ted.com/talk/podcast/2012G/None/ReadMontague_2012G-480p.mp4.

Nature Publishing Group (NPG). (2010, December). A critical look at connectomics. Nature Neuroscience. p. 1441. doi:10.1038/nn1210-1441.

Nickell, J. (2005, September). Mystical experiences: Magnetic fields or suggestibility? The Skeptical Inquirer, 29, 14–15. Retrieved from http://search.proquest.com/docview/219355830

Panksepp, J. (1992). A Critical Role for “Affective Neuroscience” in Resolving What Is Basic About Basic Emotions. 99(3)554–560. Retrieved October 14, 2012, from http://www.communicationcache.com/uploads/1/0/8/8/10887248/a…otions.pdf.

Park, K. I., Teng, Y. D., & Snyder, E. Y. (2002). The injured brain interacts reciprocally with neural stem cells supported by scaffolds to reconstitute lost tissue. Nature Biotechnology, 20(11), 1111–7. doi: 10.1038/nbt751.

Persinger, M. (2003). The Sensed Presence Within Experimental Settings: Implications for the Male and Female Concept of Self. Journal of Psychology. (137)1.5–16. Retrieved October ‎October ‎14, ‎2012, from http://search.proquest.com/docview/213833884.

Persinger, M. (2012). Dr. Michael A. Persinger. Retrieved October 27, 2012, from http://142.51.14.12/Laurentian/Home/Departments/Behavioural+Neuroscience/People/Persinger.htm?Laurentian_Lang=en-CA

Ptolemy, R. (Producer & Director). (2009). Transcendent Man [Film]. Los Angeles: Ptolemaic Productions, Therapy Studios.

Solovey, E., Schermerhorn, P., Scheutz, M., Sassaroli, A., Fantini, S. & Jacob, R. (2012). Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input. Retrieved August 5, 2012, from http://web.mit.edu/erinsol/www/papers/Solovey.CHI.2012.Final.pdf.

Tran, F. (Director). (2012). Dream Life of Rats [Video]. Retrieved ?September ?21, ?2012, from http://www.hulu.com/watch/388493.

UPI. (2012, May 31). People with paralysis control robotic arms to reach and grasp using brain computer interface. UPI Space Daily. Retrieved from http://search.proquest.com/docview/1018542919

Weber, J. L. (2002). The iceland map. Nature Genetics, 31(3), 225–6. doi: http://dx.doi.org/10.1038/ng920

Wolpaw, J. (2010, November). Brain-computer interface research comes of age: traditional assumptions meet emerging realities. Journal of Motor Behavior. 42(6)351–353. Retrieved September 10, 2012, from http://www.tandfonline.com/doi/pdf/10.1080/00222895.2010.526471.

Greetings to the Lifeboat Foundation community and blog readers! I’m Reno J. Tibke, creator of Anthrobotic.com and new advisory board member. This is my inaugural post, and I’m honored to be here and grateful for the opportunity to contribute a somewhat… different voice to technology coverage and commentary. Thanks for reading.

This Here Battle Droid’s Gone Haywire
There’s a new semi-indy sci-fi web series up: DR0NE. After one episode, it’s looking pretty clear that the series is most likely going to explore shenanigans that invariably crop up when we start using semi-autonomous drones/robots to do some serious destruction & murdering. Episode 1 is pretty and well made, and stars 237, the android pictured above looking a lot like Abe Sapien’s battle exoskeleton. Active duty drones here in realityland are not yet humanoid, but now that militaries, law enforcement, the USDA, private companies, and even citizens are seriously ramping up drone usage by land, air, and sea, the subject is timely and watching this fiction is totally recommended.

(Update: DR0NE, Episode 2 now available)

It would be nice to hope for some originality, and while DR0NE is visually and means-of-productionally and distributionally novel, it’s looking like yet another angle on a psychology & set of issues that fiction has thoroughly drilled — like, for centuries.

Higher-Def Old Hat?
Okay, so the modern versions go like this: one day an android or otherwise humanlike machine is damaged or reprogrammed or traumatized or touched by Jesus or whatever, and it miraculously “wakes up,” or its neural network remembers a previous life, or what have you. Generally the machine becomes severely bi-polar about its place in the universe; while it often struggles with the guilt of all the murderdeathkilling it did at others’ behest, it simultaneously develops some serious self-preservation instinct and has little compunction about laying waste to its pursuers, i.e., former teammates & commanders who’d done the behesting.

Admittedly, DR0NE’s episode 2 has yet to be released, but it’s not too hard to see where this is going; the trailer shows 237 delivering some vegetablizing kung-fu to its human pursuers, and dude, come on — if a human is punched in the head hard enough to throw them across a room and into a wall or is uppercut into a spasticating backflip, they’re probably just going to embolize and die where they land. Clearly 237 already has the stereotypical post-revelatory per-the-plot justifiable body count.

Where have we seen this pattern before? Without Googling, from the top of one robot dork’s head, we’ve got: Archetype, Robocop, I, Robot (film), Iron Giant, Short Circuit, Blade Runner, Rossum’s Universal Robots, and going way, way, way back, the golem.

Show Me More Me
Seems we really, really dig on this kind of story. Continue reading “The Recurring Parable of the AWOL Android” | >

A secret agent travels to a secret underground desert base being used to develop space weapons to investigate a series of mysterious murders. The agent finds a secret transmitter was built into a supercomputer that controls the base and a stealth plane flying overhead is controlling the computer and causing the deaths. The agent does battle with two powerful robots in the climax of the story.

Gog is a great story worthy of a sci-fi action epic today- and was originally made in 1954. Why can’t they just remake these movies word for word and scene for scene with as few changes as possible? The terrible job done on so many remade sci-fi classics is really a mystery. How can such great special effects and actors be used to murder a perfect story that had already been told well once? Amazing.

In contrast to Gog we have the fairly recent movie Stealth, released in 2005, which has talent, special effects, and probably the worst story ever conceived. An artificially intelligent fighter plane going off the reservation? The rip-off of HAL from 2001 is so ridiculous.

Fantastic Voyage (1966) was a not-so-good story that succeeded in spite of stretching suspension of disbelief beyond the limit. It was a great movie and might succeed today if, instead of being miniaturized and injected into a human body, the craft were a submarine exploring a giant organism under the ice of a moon in the outer solar system. Just an idea.

And then there is one of the great sci-fi movies of all time if one can just forget the ending. The Abyss of 1989 was truly a great film in that aquanauts and submarines were portrayed in an almost believable way.

From wiki: The cast and crew endured over six months of grueling six-day, 70-hour weeks on an isolated set. At one point, Mary Elizabeth Mastrantonio had a physical and emotional breakdown on the set and on another occasion, Ed Harris burst into spontaneous sobbing while driving home. Cameron himself admitted, “I knew this was going to be a hard shoot, but even I had no idea just how hard. I don’t ever want to go through this again”

Again, The Abyss, like Fantastic Voyage, brings to mind those oceans under the icy surface of several moons in the outer solar system.

I recently watched Lockout with Guy Pearce and was as disappointed as I thought I would be. Great actors and expensive special effects just cannot make up for a bad story. When will they learn? It is sad to think they could have just remade Gog and had a hit.

The obvious futures represented by these different movies are worthy of consideration in that even in 1954 the technology to come was being portrayed accurately. In 2005 we got a box-office bomb whose waste of money parallels the military-industrial complex and its too-good-to-be-true wonder weapons that rarely work as advertised. In Fantastic Voyage and The Abyss we see scenarios that point to space missions to the sub-surface oceans of the outer planet moons.

And in Lockout we find a prison in space where the prisoners are the victims of cryogenic experimentation and are going insane as a result. Being an advocate of cryopreservation for deep space travel, I found the story line… extremely disappointing.

The precursor to manned space exploration of new worlds is typically unmanned exploration, and NASA has made phenomenal progress in recent years with remote-controlled rovers on the Martian surface: MER-A Spirit, MER-B Opportunity and now MSL Curiosity. However, for all our success with AI in such rovers — similar to, if not more advanced than, the AI technology we see around us in the automotive and aviation industries, such as operational real-time clear-air turbulence prediction in aviation — such AI typically aids the control systems, not mission-level decision making. NASA still controls the rover via detailed commands transmitted directly from Earth: typically 225 kbit/day of commands, at a data rate of 1–2 kbit/s, during a 15-minute transmit window, with the larger volumes of data collected by the rover returned via satellite relay. This one-way communication incorporates a delay averaging 12 or so light-minutes, and it becomes less and less practical the further away the rover is.

If, for example, we landed a similar rover on Titan in the future, I would expect the current method of step-by-step remote control to render the mission impractical — Saturn being typically at least 16 times more distant, depending on the time of year.
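To put rough numbers on that distance penalty, here is a minimal sketch of the one-way signal delay. The distances used are approximate average Earth-to-planet figures assumed for illustration, not values from any specific mission:

```python
# One-way light-time for a command link, a minimal sketch.
# The AU distances below are rough averages assumed for illustration.
AU_KM = 149_597_870.7    # kilometres per astronomical unit
C_KM_S = 299_792.458     # speed of light in km/s

def one_way_delay_minutes(distance_au: float) -> float:
    """Minutes for a radio signal to cover the given distance."""
    return distance_au * AU_KM / C_KM_S / 60.0

print(one_way_delay_minutes(1.5))   # Earth-Mars, roughly the 12-ish minutes cited
print(one_way_delay_minutes(9.5))   # Earth-Saturn: well over an hour each way
```

A round trip to Saturn and back thus eats more than two hours before the rover can even act on a command, which is the practical case for onboard autonomy.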

With the tasks of the science labs well determined in advance, it should be practical to develop AI engines to react to hazards, change the course of analysis depending on the data processed — and so on — the perfect playground for advanced AI programmes. The current Curiosity mission incorporates tasks such as:

1. Determine the mineralogical composition of the Martian surface and near-surface geological materials.
2. Attempt to detect chemical building blocks of life (bio-signatures).
3. Interpret the processes that have formed and modified rocks and soils.
4. Assess long-timescale (i.e., 4-billion-year) Martian atmospheric evolution processes.
5. Determine the present state, distribution, and cycling of water and carbon dioxide.
6. Characterize the broad spectrum of surface radiation, including galactic radiation, cosmic radiation, solar proton events and secondary neutrons.

All of these are very deterministic processes in terms of mapping results to action points, which could be the foundation for shaping such into an AI learning engine, so that such rovers can be entrusted with making their own mission-level decisions on the next phases of exploration based on such AI analyses.

Whilst the current explorations on Mars works quite well with the remote control strategy, it would show great foresight for NASA to engineer such unmanned rovers to operate in a more independent fashion with AI operating the mission-level control — learning to adapt to its environment as it explores the terrain, with only the return-link in use in the main — to relay back the analyzed data — and the low-bandwidth control-link reserved for maintenance and corrective action only. NASA has taken great strides in the last decade with unmanned missions. One can expect the next generation to be even more fascinating — and perhaps a trailblazer for advanced AI based technology.

AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega mind coming into existence within the next few decades. I am actually not intentionally trying to write anything bizarre- it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

I have just watched this video by Global Futures 2045.

This is my list of things I disagree with:

It starts with scary words about how every crisis comes faster and faster. However, this is untrue. Many countries have been running deficits for decades; the financial crisis is no surprise. The reason the US has such high energy costs goes back to government decisions made in the 1970s. And many things that used to be crises no longer happen, like the Black Plague. We have big problems, but we also have many resources, built up over the centuries, to help. Many of the challenges we face are political and social, not technical.

We will never fall into a new Dark Ages. The biggest problem is that we aren’t advancing as fast as we could and many are still starving, sick, etc. However, it has always been this way. The 20th century was very brutal! But we are advancing and it is mostly known threats like WMDs which could cause a disaster. In the main, the world is getting safer every day as we better understand it.

We aren’t going to build a new human. It is more like a Renaissance. Those who lost limbs will get increasingly better robotic ones, but they will still be humans. The best reason to build a robotic arm is to attach it to a human.

The video had a collectivist and authoritarian perspective when it said:

“The world’s community and leaders should encourage mankind instead of wasting resources on solving momentary problems.”

This sentence needs to be deconstructed:

1. Government acts via force. Government’s job is to maintain civil order, so having it also out there “encouraging” everyone to never waste resources is creepy. Do you want your policeman to also be your nanny? Here is a quote from C.S. Lewis:

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”

2. It is wrong to think government is the solution to our problems. Most of the problems that exist today like the Greek Debt Crisis, and the US housing crisis were caused by governments trying to do too much.

3. There is no such thing as the world’s leaders. There is the UN, which doesn’t act in a humanitarian crisis until after everyone is dead. In any case, we don’t need the governments to act. We built Wikipedia.

4. “Managing resources” is code for socialism. If their goal is to help with the development of new technologies, then the task of managing existing resources is totally unrelated. If your job is to build robots, then your job is not also to worry about whether the water and air are dirty. Any scientist who talks about managing resources is actually a politician. Here is a quote from Friedrich Hayek:

“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design. Before the obvious economic failure of Eastern European socialism, it was widely thought that a centrally planned economy would deliver not only “social justice” but also a more efficient use of economic resources. This notion appears eminently sensible at first glance. But it proves to overlook the fact that the totality of resources that one could employ in such a plan is simply not knowable to anybody, and therefore can hardly be centrally controlled.”

5. We should let individuals decide what to spend their resources on. People don’t only invest in momentary things. People build houses. In fact, if you are looking for an excuse to drink, being poor because you live in a country with 70% taxes is a good one.

The idea of tasking government with finding the solutions, doing all the futuristic research, and shoving new products down our throats is wrong and dangerous. We want individuals, and collections of them (corporations), to do it, because they will best put it to use in ways that actually improve our lives. Everything is voluntary, which encourages good customer relationships. The money will be directed toward the products people actually care about, instead of what some mastermind bureaucrat thinks we should spend money on. There are many historical examples of how government doesn’t innovate as well as the private sector: the French telephone system, Cuba, expensive corn-based ethanol, the International Space Station, healthcare. The free market is imperfect, but it leads to the fastest technological and social progress, for the reasons Friedrich Hayek has explained. A lot of government research today is wasted because it never gets put to use commercially. There are many things that can be done to make the private sector more vibrant, and many ways government can do a better job; all that evidence should be a warning not to use governments to endorse programs with the goal of social justice. NASA has done great things, but only because it existed in a modern society was that possible.

They come up with a nice list of things that humanity can do, but they haven’t listed that one of the most important first steps is more Linux. We aren’t going to get cool and smart robots, etc. without a lot of good free software first.

The video says:

“What we need is not just another technological revolution, but a new civilization paradigm, we need philosophy and ideology, new ethics, new culture, new psychology.”

It minimizes the technology aspect, yet it is the hard work of disparate scientists that will bring us the most benefits.

It is true that we need to refine our understandings of many things, but we are not starting over, just evolving. Anyone who thinks we need to start over doesn’t realize what we’ve already built and all the smart people who’ve come before. The basis of good morals from thousands of years ago still apply. It will just be extended to deal with new situations, like cloning. The general rules of math, science, and biology will remain. In many cases, we are going back to the past. The Linux and free software movement is simply returning computer software to the hundreds of years-old tradition of science. Sometimes the idea has already been discovered, but it isn’t widely used yet. It is a social problem, not a technical one.

The repeated use of the word “new”, etc. makes this video read like propaganda. Cults try to get people to reset their perspective into a new world, and convince them that only they have the answers. This video comes off as a sales pitch with them as the solution to our problems, ignoring that it will take millions. Their lists of technologies are random. Some of these problems we could have solved years ago, and some we can’t solve for decades, and they mix both examples. It seems they do not know what is coming next, given how disorganized they are. They also pick multiple related words and so repeat themselves. Repetition is used to create an emotional impact, another trait of propaganda.

The thing about innovation and the future is that it is surprising. Many futurists get things wrong. If these guys really had the answers, they’d have invented it and made money on it. And compared to some of the tasks, we are like cavemen.

Technology evolves in a stepwise fashion, and so looking at it as some clear end results on some day in the future is wrong.

For another example: the video makes it sound like going beyond Earth and then beyond the Solar System is a two-step process when in fact it is many steps, and the journey is the reward. If they were that smart, they’d endorse the space elevator which is the only cheap way to get out there, and we can do it in 10 years.

The video suggests that humanity doesn’t have a masterplan, when I just explained that you couldn’t make one.

It also suggests that individuals are afraid of change, when in fact, that is a trait characteristic of governments as well. The government class has known for decades that Social Security is going bankrupt, but they’d rather criticize anyone who wants to reform it rather than fix the underlying problem. This video is again trying to urge collectivism with its criticism of the “mistakes” people make. The video is very arrogant at how it looks down at “the masses.” This is another common characteristic of collectivism.

Here is the first description of their contribution:

“We integrate the latest discoveries and developments from the sciences: physics, energetics, aeronautics, bio-engineering, nanotechnology, neurology, cybernetics, cognitive science.”

That sentence is laughable because it is an impossible task. To understand all of the latest advances would involve talking with millions of scientists. If they are doing all this integration work, what have they produced? They want everyone to join up today, work to be specified later.

The challenge for nuclear power is not the science; it is the lawyers who outlawed new plants in the 1970s and have basically halted all advancement toward safer and better ones. Some of these challenges are mostly political, not scientific. We need engineers in corporations like GE, supervised by governments, building safer and cleaner nuclear power.

If you wanted to create all of what they offer, you’d have to hire a million different people. If you were building the pyramids, you could get by with most of your workers having one skill, the ability to move heavy things around. However, the topics they list are so big and complicated, I don’t think you could build an organization that could understand it all, let alone build it.

They mention freedom and speak in egalitarian terms, but this is contradicted by their earlier words. In their world, we will all be happy worker bees, working “optimally” for their collective. Beware of masterminds offering to efficiently manage your resources.

I support discussion and debate. I am all for think-tanks and other institutions that hire scientists. However, those that lobby government to act on their behalf are scary. I don’t want every scientist lobbying the government to institute their pet plan, no matter how good it sounds. They will get so overwhelmed that they won’t be able to do their actual job. The rules of the US Federal government are very limited and generally revolve around an army and a currency. Social welfare is supposed to be handled by the states.

Some of their ideas cannot be turned into laws by the US Congress because they don’t have this authority — the States do. Obamacare is likely to be ruled unconstitutional, and their ideas are potentially much more intrusive towards individual liberty. It would require a Constitutional Amendment, which would never pass and we don’t need.

They offer a social network where scientists can plug in and figure out what they need to do. This could also be considered an actual concrete example of something they are working on. However, there are already social networks where people are advancing the future. SourceForge.net is the biggest community of programmers. There is also Github.com with 1,000,000 projects. Sage has a community advancing the state of mathematics.

If they want to create their own new community solving some aspect, that is great, especially if they have money. But the idea that they are going to make it all happen is impossible. And it will never replace all the other great communities that already exist. Even science happens on Facebook, when people chat about their work.

If they want to add value, they need to specialize. Perhaps they come up with millions of dollars and can do research in specific areas. However, their fundamental research would very likely get used by other people in ways they never imagined. The more fundamental the discovery, the less any one team can possibly take advantage of all its aspects.

They say there is some research lab they’ve got working on cybernetics. However, they don’t demonstrate any results. I don’t imagine they can be that far ahead of the rest of the world, which provides them the technology they use to do their work. Imagine a competitor to Henry Ford: could he really have built a much better car given the available technology at the time? My response to anyone who claims some advancement is: turn it into a demo or useful product and sell it. All this video offers as evidence is CGI, which any artist can make.

I support the idea of flying cars. First we need driverless cars and cheaper energy. Unless they are a car or airplane company, I don’t see what this organization will have to do with that task. I have nothing against futuristic videos, but they don’t make clear what their involvement is, and such instances of ambiguity should be noted.

They are wrong when they say we won’t understand consciousness till 2030, because we already understand it at some level today. Neural networks have been around for decades. IBM’s Jeopardy-playing Watson was a good recent example. However, it is proprietary, so not much will come of that particular example. Fortunately, Watson was built on lots of free software, and the community will get there. Google is very proprietary with its AI work. Wolfram Alpha is also proprietary. Etc. We’ve got enough technical people for an amazing world if we can just get them to work together in free software and Python.

The video’s last sentence suggests that spiritual self-development is the new possibility. But people can work on that today. And again, enlightenment is not a destination but a journey.

We are a generation away from immortality unless things greatly change. I think about LibreOffice, cars that drive themselves and the space elevator, but faster progress in biology is also possible as well if people will follow the free software model. The Microsoft-style proprietary development model has infected many fields.

Steamships, locomotives, electricity; these marvels of the industrial age sparked the imagination of futurists such as Jules Verne. Perhaps no other writer or work inspired so many to reach the stars as did this Frenchman’s famous tale of space travel. Later developments in microbiology, chemistry, and astronomy would inspire H.G. Wells and the notable science fiction authors of the early 20th century.

The submarine, aircraft, the spaceship, time travel, nuclear weapons, and even stealth technology were all predicted in some form by science fiction writers many decades before they were realized. The writers were not simply making up such wonders from fanciful thought or children’s rhymes. As science advanced in the mid 19th and early 20th century, the probable future developments this new knowledge would bring about were in some cases quite obvious. Though powered flight seems a recent miracle, it was long expected, as hydrogen balloons and parachutes had been around for over a century and steam propulsion went through a long gestation before ships and trains were driven by the new engines. Solid rockets were ancient, and even multiple stages to increase altitude had been in use by fireworks makers for a very long time before the space age.

Some predictions were seen to come about in ways far removed yet still connected to their fictional counterparts. The U.S. Navy’s steam-driven Nautilus swam the ocean blue under nuclear power not long before rockets took men to the moon. While Verne predicted an electric submarine, his notional Florida space gun never did take three men into space. However, there was a Canadian weapons designer named Gerald Bull who met his end while trying to build such a gun for Saddam Hussein. The insane Invisible Man of Wells took the form of invisible aircraft playing a less than human role in the insane game of mutually assured destruction. And a true time machine was found easily enough in the mathematics of Einstein: simply going fast enough through space will take a human being millions of years into the future. However, traveling back in time is still as much an impossibility as the anti-gravity Cavorite from The First Men in the Moon. Wells missed on occasion but was not far off with his story of alien invaders defeated by germs- except we are the aliens invading the natural world’s ecosystem with our genetically modified creations and could very well soon meet our end as a result.
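The time machine hiding in Einstein’s mathematics is just the Lorentz factor: a traveler at speed v ages slower than Earth by γ = 1/√(1 − v²/c²). A minimal sketch of the arithmetic (the speeds chosen are illustrative):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Time-dilation factor for a traveler moving at beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def earth_years(traveler_years: float, beta: float) -> float:
    """Years that pass on Earth while the traveler ages traveler_years."""
    return traveler_years * lorentz_gamma(beta)

# At 60% of light speed the factor is a modest 1.25; pushing beta to within
# roughly 5e-13 of 1 stretches a single traveler year into on the order of
# a million Earth years.
```

That last regime is the "millions of years into the future" trip: entirely one-way, which is why relativity gives us forward time travel but no route back.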

While Verne’s Captain Nemo made war on the death merchants of his world with a submarine ram, our own more modern anti-war device was found in the hydrogen bomb: an agent so destructive that no new world war has been possible since nuclear weapons were stockpiled in the second half of the last century. Neither Verne nor Wells imagined the destructive power of a single missile submarine able to incinerate all the major cities of Earth. The dozens of such superdreadnoughts even now cruising in the icy darkness of the deep ocean prove that truth is more often stranger than fiction. It may seem the golden age of predictive fiction has passed, as exceptions to the laws of physics prove impossible despite advertisements to the contrary. Science fiction has given way to science fantasy, and the suspension of disbelief possible in the last century has turned to disappointment and the distractions of whimsical technological fairy tales. “Beam me up” was simply a way to cut production costs for special effects, and warp drive the only trick that would make a one-hour episode work. Unobtainium and wishalloy, handwavium and technobabble- it has watered down what our future could be into childish wish fulfillment and escapism.

The triumvirate of the original visionary authors of the last two centuries is completed with E.E. “Doc” Smith. With this less famous author the line between predictive fiction and science fantasy was first truly crossed and the new genre of “Space Opera” most fully realized. The film industry has taken Space Opera and run with it in the Star Wars franchise and the works of Canadian film maker James Cameron. Though of course quite entertaining, these movies showcase all that is magical and fantastical, and wrong, concerning science fiction as a predictor of the future. The collective imagination of the public has now been conditioned to violate the reality of what is possible through the violent maiming of basic scientific tenets. This artistic license was something Verne at least tried not to resort to, Wells trespassed upon more frequently, and Smith indulged in without reservation. Just as Madonna found the secret to millions by shocking a jaded audience into pouring money into her bloomers, the formula for ripping off the future has been discovered in the lowest kind of sensationalism. One need only attend a viewing of the latest Transformers movie or download Battlestar Galactica to appreciate that the entertainment industry has cashed in on the ignorance of a poorly educated society by selling intellect-decaying brain candy. It is cowboys vs. aliens and has nothing of value to contribute to our culture… well, on second thought, I did get watery-eyed when the young man died in Harrison Ford’s arms. I am in no way criticizing the profession of acting, and I value the talent of these artists; it is rather the greed that corrupts the ancient art of storytelling I am unhappy with. Directors are not directors unless they make money, and I feel sorry that these incredibly creative people find themselves less than free to pursue their craft.

The archetype of the modern science fiction movie was 2001: A Space Odyssey, and like many legendary screen epics it was not as original as the marketing made it out to be: in an act of cinema cold war, many elements were lifted from a Soviet movie. Even though the fantasy element was restricted to a single device in the form of an alien monolith, every artifice of this film has so far proven non-predictive. Interestingly, the propulsion system of the spaceship in 2001 was originally going to use atomic bombs, which are still, a half century later, the only practical means of interplanetary travel. Stanley Kubrick, fresh from Dr. Strangelove, was tired of nukes and passed on portraying this obvious future.

As with the submarine, airplane, and nuclear energy, the technology to come may be predicted with some accuracy if the laws of physics are not insulted but rather just rudely addressed. Though in some cases the line is crossed and what is rude turns disgusting. A recent proposal for a “NautilusX” spacecraft is one example of a completely vulgar denial of reality. Chemically propelled, with little radiation shielding, and exhibiting a ridiculous doughnut centrifuge, such advertising vehicles are far more dishonest than cinematic fabrications in that they deceive the public without the excuse of entertaining them. In the same vein, space tourism is presented as space exploration when in fact the obscene spending habits of the ultra-wealthy have nothing to do with exploration and everything to do with the attendant taxpayer-subsidized business plan. There is nothing to explore in Low Earth Orbit except the joys of zero-G bordellos. Rudely undressing by way of the profit motive is followed by a rude address to physics when the key private space scheme for “exploration” is exposed. This supposed key is a false promise of things to come.

While very large and very expensive Heavy Lift Rockets have been proven successful in escaping earth’s gravitational field with human passengers, the inferior lift vehicles being marketed as “cheap access to space” are in truth cheap and nasty taxis to space stations going in endless circles. The flim-flam investors are basing their hopes of big profit on cryogenic fuel depots and transfer in space. Like the filling station every red-blooded American stops at to fill his personal spaceship with fossil fuel, depots are the solution to all the holes in the private space plan for “commercial space.” Unfortunately, storing and transferring hydrogen as a liquefied gas a few degrees above absolute zero in a zero-G environment has nothing in common with filling a car with gasoline. It will never work as advertised. It is a trick. A way to get those bordellos in orbit courtesy of taxpayer dollars. What a deal.

So what is the obvious future that our present level of knowledge presents to us when entertaining the possible and the impossible? More to come.

Greetings fellow travelers, please allow me to introduce myself; I’m Mike ‘Cyber Shaman’ Kawitzky, independent film maker and writer from Cape Town, South Africa, one of your media/art contributors/co-conspirators.

It’s a bit daunting posting to such an illustrious board, so let me try to imagine, with you, how to regard the present with nostalgia while looking forward to the past, knowing that a millisecond away in the future exist thoughts to think; it’s the mode of neural text, reverse causality, non-locality and quantum entanglement, where the traveller is the journey into a world in transition; after 9/11, after the economic meltdown, after the oil spill, after the tsunami, after Fukushima, after 21st-century melancholia upholstered by anti-psychotic drugs that help us forget ‘the good old days’; because it’s business as usual for the 1%, the rest continue downhill with no brakes. Can’t wait to see how it all works out.

Please excuse me, my time machine is waiting…
Post cyberpunk and into Transhumanism

The Nature of Identity Part 3
(Drawings not reproduced here — contact the author for copies)
We have seen how the identity is defined by the 0,0 point – the centroid or locus of perception.

The main problem we have is finding out how neural signals translate into sensory signals – how neural information is translated into the language we understand, that of perception. How does one neural pattern become Red and another the Scent of coffee? Neurons emit neither color nor scent.

As in physics, so in cognitive science, some long cherished theories and explanations are having to change.

Perception, and the concept of an Observer (the 0,0 point), are intimately related to the idea of Identity.

Many years ago I was a member of what was called the Artorga Research Group – a group including some of the early cyberneticists – who were focussed on Artificial Organisms.

One of the main areas of concern was, of course, Memory.

One of our group was a young German engineer who suggested that perhaps memories were in fact re-synthesised in accordance with remembered rules, as opposed to storing huge amounts of data.

Since then similar ideas have arisen in such areas as computer graphics.

Here is an example,

It shows a simple picture on a computer screen. We want to store (memorize) this information.

One way is to store the information about each pixel on the screen – whether it is white or black. With a typical screen resolution that could mean over 2.5 million bits of information.

But there is another way….

In this process one simply specifies the start point (A) in terms of its co-ordinates (300 vertically, 100 horizontally) and its end point (B) (600 vertically, 800 horizontally), and then instructs: “Draw a line of thickness w between them”.

The whole picture is specified in just a few bits.

The first method, specifying bit by bit, known as the Bit Mapped Protocol (.BMP), uses up lots of memory space.

The other method, based on re-synthesising according to stored instructions, is used in some data reduction formats; and is, essentially, just what that young engineer suggested, many years before.
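The contrast between the two methods can be sketched in a few lines of Python. This is an illustrative comparison only; the screen resolution and the bit counts for the instruction are assumptions chosen for the example, not figures from the text, while the line endpoints are the ones given above.

```python
# Method 1: bit-mapped (.BMP style) — one bit per pixel, black or white.
# The resolution here is an assumed "typical" screen, 1920 x 1080.
WIDTH, HEIGHT = 1920, 1080
bitmap_bits = WIDTH * HEIGHT            # one bit for every pixel on the screen

# Method 2: re-synthesis — store only the drawing instruction:
# "Draw a line of thickness w from A(300, 100) to B(600, 800)."
instruction = {"op": "line", "a": (300, 100), "b": (600, 800), "w": 2}
# A rough cost estimate: five small integers (two coordinate pairs plus a
# thickness) at 16 bits each, plus an 8-bit opcode.
instruction_bits = 5 * 16 + 8

print(bitmap_bits)       # 2073600 — millions of bits
print(instruction_bits)  # 88 — a handful of bits
```

The picture is then re-synthesised on demand by executing the instruction, rather than recalled pixel by pixel.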

On your computer you will have a screen saver – almost certainly a colorful scene – and of course that is stored, so that if you are away from the computer for a time it can automatically come on to replace what was showing, and in this way “save” your screen.

So – where are those colors in your screensaver stored, where are the shapes shown in it stored? Is there in the computer a Color Storage Place? Is there a Shape Storage Place?

Of course not.

Yet these are the sort of old, sodden concepts that are sometimes still applied in thinking about the brain and memories.

Patterned streams of binary bits, not unlike neural signals (but about 70 times larger), are fed to a computer screen. And then the screen takes these patterns of bits as instructions to re-synthesise glowing colors and shapes.

We cannot actually perceive the binary signals, and so they are translated by the screen into a language that we can understand. The screen is a translator – that is its sole function.

This is exactly analogous to the point made earlier about perception and neural signals.

The main point here, though, is that what is stored in the computer memory are not colors and shapes but instructions.

And inherent in these instructions as a whole, there must exist a “map”.

Each instruction must not only tell its bit of the screen what color to glow – but it must also specify the co-ordinates of that bit. If the picture is the head of a black panther with green eyes, we don’t want to see a green head and black eyes. The map has to be right. It is important.

Looking at it in another way the map can be seen as a connectivity table – specifying what goes where. Just two different ways of describing the same thing.

As well as simple perception there are derivatives of what has been perceived that have to be taken into account, for example, the factor called movement.

Movement is not in itself perceptible (as we shall presently show); it is a computation.

Take for example, the following two pictures shown side-by-side.

I would like to suggest that one of these balls is moving, and to ask: which one is moving?

If movement had a visual attribute then one could see which one it was – but movement has no visual attributes – it is a computation.

To determine the speed of something, one has to observe its current position, compare that with the record (memory) of its previous position; check the clock to determine the interval between the two observations; and then divide the distance between the two positions, s; by the elapsed time, t; to determine the speed, v,

v = s/t.
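The computation just described can be written out directly. This is a minimal sketch of the two-observation calculation; the positions and timestamps in the usage example are assumed values for illustration.

```python
def speed(pos_then, pos_now, t_then, t_now):
    """Speed as a computation, not a perception: compare the current
    position with the remembered previous position, divide the distance
    covered (s) by the elapsed time (t), giving v = s / t."""
    s = abs(pos_now - pos_then)   # distance between the two observed positions
    t = t_now - t_then            # interval between the two observations
    return s / t

# A ball observed at 0.0 m, then at 3.0 m half a second later:
v = speed(0.0, 3.0, 0.0, 0.5)
print(v)  # 6.0 metres per second
```

Nothing in either single observation contains the speed; it exists only in the comparison between them.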

This process is carried out automatically (subconsciously) in more elaborate organisms by having two eyes spaced apart by a known distance and having light receptors – the retina – where each has a fast turn-on and a slow (about 40 ms) turn-off, all followed by a bit of straightforward neural circuitry.

Because of this system, one can look at a TV screen and see someone in a position A, near the left hand edge, and then very rapidly, a series of other still pictures in which the person is seen being closer and closer to B, at the right hand edge.

If the stills are shown fast enough – more than 25 a second — then we will see the person walking across the screen from left to right. What you see is movement – except you don’t actually see anything extra on the screen. Being aware of movement as an aid to survival is very old in evolutionary terms. Even the incredibly old fish, the coelacanth, has two eyes.

The information provided is a derivative of the information supplied by the receptors.

And now we ought to look at information in a more mathematical way – as in the concept of Information Space (I-space).

For those who are familiar with the term, it is a Hilbert Space.

Information Space is not “real” space – it is not distance space – it is not measurable in metres and centimetres.

As an example, consider Temperature Space. Take the temperature of the air going in to an air-conditioning (a/c) system; the temperature of the air coming out of the a/c system; and the temperature of the room. These three provide the three dimensions of a Temperature Space. Every point in that space correlates to an outside air temperature, an a/c output temperature and the temperature of the room. No distances are involved – just temperatures.

This is an illustration of what it would look like if we re-mapped it into a drawing.

The drawing shows the concept of a 3-dimensional Temperature Space (T-space). The darkly outlined loop is shown here as a way of indicating the “mapping” of a part of T-space.

But what we are interested in here is I-space. And I-space will have many more dimensions than T-space.

In I-space each location is a different item of information, and the fundamental rule of I-space – indeed of any Hilbert space – is,

Similarity equals Proximity.

This would mean that the region concerned with Taste, for example, would be close to the area concerned with Smell, since the two are closely related.

Pale Red would be closer to Medium Red than to Dark Red.
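The rule "Similarity equals Proximity" can be illustrated with colour vectors. Here the three shades of red are represented as points in RGB space; the particular RGB triples are assumptions chosen for the example, not values from the text.

```python
import math

def distance(a, b):
    """Euclidean distance between two points in the space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Assumed RGB co-ordinates for the three shades mentioned above:
pale_red   = (255, 160, 160)
medium_red = (220,  60,  60)
dark_red   = (130,   0,   0)

# In an I-space obeying "similarity equals proximity", pale red should
# lie closer to medium red than to dark red:
print(distance(pale_red, medium_red) < distance(pale_red, dark_red))  # True
```

A real I-space would have far more dimensions than three, but the principle is the same: related items occupy neighbouring locations.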

Perception then would be a matter of connectivity.

An interconnected group we could refer to as a Composition or Feature.

Connect 4 legs & fur & tail & bark & the word dog & the sound of the word dog – and we have a familiar feature.

Features are patterns of interconnections; and it is these features that determine what a thing or person is seen as. What they are seen as is taken as their identity. It is the identity as seen from outside.
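A feature as a pattern of interconnections can be sketched as a tiny connectivity table. The node names come from the dog example above; the data structure and the recognition rule are assumptions made for illustration, not a model from the text.

```python
# A "Feature" as a set of interconnected attributes (a connectivity table):
feature_dog = {
    "dog": {"4 legs", "fur", "tail", "bark", "word 'dog'", "sound of 'dog'"},
}

def activates(feature, cues):
    """A feature is recognised when every presented cue is among the
    attributes it is connected to; otherwise it stays silent."""
    name, attrs = next(iter(feature.items()))
    return name if cues <= attrs else None   # <= is the subset test

print(activates(feature_dog, {"fur", "tail", "bark"}))  # dog
print(activates(feature_dog, {"fur", "wings"}))         # None
```

What a thing "is seen as" is then just which pattern of connections its cues light up.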

To oneself one is here and now, a 0,0 reference point. To someone else one is not the 0,0 point – one is there — not here, and to that person it is they who are the 0,0 point.

This 0,0 or reference point is crucially important. One could upload a huge mass of data, but if there was no 0,0 point that is all it would be – a huge mass of data.

The way forward towards this evolutionary goal is not to concentrate on being able to upload more and more data, faster and faster, but instead to concentrate on being able to identify the 0,0 point, and to be able to translate from neural code to the language of perception.