Dec 1, 2012

Response to Plaut and McClelland in the Phys.org story

Posted in categories: information science, neuroscience, philosophy, robotics/AI

A response to McClelland and Plaut’s
comments in the Phys.org story:

Do brain cells need to be connected to have meaning?

Asim Roy
Department of Information Systems
Arizona State University
Tempe, Arizona, USA
www.lifeboat.com/ex/bios.asim.roy

Article reference:

Roy A. (2012). “A theory of the brain: localist representation is used widely in the brain.” Front. Psychology 3:551. doi: 10.3389/fpsyg.2012.00551

Original article: http://www.frontiersin.org/Journal/FullText.aspx?s=196&n…2012.00551

Comments by Plaut and McClelland: http://phys.org/news273783154.html

Note that most of the arguments of Plaut and McClelland are theoretical, whereas the localist theory I presented is very much grounded in four decades of evidence from neurophysiology. Note also that McClelland may have inadvertently subscribed to the localist representation idea with the following statement:

“Even here, the principles of distributed representation apply: the same place cell can represent very different places in different environments, for example, and two place cells that represent overlapping places in one environment can represent completely non-overlapping places in other environments.”

The notion that a place cell can “represent” one or more places in different environments is very much a localist idea; it implies that the place cell has meaning and interpretation. I respond to McClelland’s comments first. Please refer to the Phys.org story to find these quotes from McClelland and Plaut and to see their contexts.

1. McClelland – “what basis do I have for thinking that the representation I have for any concept – even a very familiar one – is associated with a single neuron, or even a set of neurons dedicated only to that concept?”

There are four decades of research in neurophysiology on receptive field cells in the sensory processing systems and on hippocampal place cells showing that single cells can encode a concept, from motion detection, color coding and line orientation detection to identifying a particular location in an environment. Neurophysiologists have also found category cells in the brains of humans and animals; see the next response, which has more details on category cells. The neurophysiological evidence that single cells encode concepts, starting as early as the retinal ganglion cells, is substantial. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Thus there is ample basis to think that a single neuron can be dedicated to a concept, even at a very low level (e.g. for a dot, a line or an edge).

2. McClelland – “Is each such class represented by a localist representation in the brain?”

Cells that represent categories have been found in human and animal brains. Fried et al. (1997) found some MTL (medial temporal lobe) neurons that respond selectively to gender and facial expression and Kreiman et al. (2000) found MTL neurons that respond to pictures of particular categories of objects, such as animals, faces and houses. Recordings of single-neuron activity in the monkey visual temporal cortex led to the discovery of neurons that respond selectively to certain categories of stimuli such as faces or objects (Logothetis and Sheinberg, 1996; Tanaka, 1996; Freedman and Miller, 2008).

I quote Freedman and Miller (2008): “These studies have revealed that the activity of single neurons, particularly those in the prefrontal and posterior parietal cortices (PPCs), can encode the category membership, or meaning, of visual stimuli that the monkeys had learned to group into arbitrary categories.”

Lin et al. (2007) report finding “nest cells” in mouse hippocampus that fire selectively when the mouse observes a nest or a bed, regardless of the location or environment.

Gothard et al. (2007) found single neurons in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor. They found one neuron that responded in particular to threatening monkey faces. Their general observation is (p. 1674): “These examples illustrate the remarkable selectivity of some neurons in the amygdala for broad categories of stimuli.”

Thus the evidence is substantial that category cells exist in the brain.

References:

  1. Fried, I., McDonald, K. & Wilson, C. (1997). Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron, 18, 753–765.
  2. Kreiman, G., Koch, C. & Fried, I. (2000). Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience, 3, 946–953.
  3. Freedman, D. J. & Miller, E. K. (2008). Neural mechanisms of visual categorization: insights from neurophysiology. Neuroscience & Biobehavioral Reviews, 32, 311–329.
  4. Logothetis, N. K. & Sheinberg, D. L. (1996). Visual object recognition. Annual Review of Neuroscience, 19, 577–621.
  5. Tanaka, K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19, 109–139.
  6. Lin, L. N., Chen, G. F., Kuang, H., Wang, D. & Tsien, J. Z. (2007). Neural encoding of the concept of nest in the mouse brain. Proceedings of the National Academy of Sciences of the United States of America, 104, 6066–6071.
  7. Gothard, K. M., Battaglia, F. P., Erickson, C. A., Spitler, K. M. & Amaral, D. G. (2007). Neural responses to facial expression and face identity in the monkey amygdala. Journal of Neurophysiology, 97, 1671–1683.

3. McClelland – “Do I have a localist representation for each phase of every individual that I know?”

Obviously more research is needed to answer these types of questions, but Saddam Hussein- and Jennifer Aniston-type cells may someday provide the clue.

4. McClelland – “Let us discuss one such neuron – the neuron that fires substantially more when an individual sees either the Eiffel Tower or the Leaning Tower of Pisa than when he sees other objects. Does this neuron ‘have meaning and interpretation independent of other neurons’? It can have meaning for an external observer, who knows the results of the experiment – but exactly what meaning should we say it has?”

On one hand, this obviously brings into focus a lot of the work in neurophysiology. It could boil down to asking who is to interpret the activity of receptive fields, place and grid cells and so on, and whether such interpretation can be independent of other neurons. In neurophysiology, the interpretations of these cells (e.g. for motion detection, color coding, edge detection, place identification and so on) are being verified independently in research labs throughout the world and with repeated experiments. So it is not the case that some researcher is arbitrarily assigning meaning to cells, or that such results cannot be replicated and verified. For many such cells, the assignment of meaning has been verified by different labs.

On the other hand, this probably is a question about whether that cell is a category cell and how to assign meaning to it. The interpretation of a cell that responds to pictures of the Eiffel Tower and the Leaning Tower of Pisa, but not to other landmarks, could be somewhat similar to a place cell that responds to a certain location, or it could be similar to a category cell. Similar cells have been found in the MTL region: a neuron firing to two different basketball players, a neuron firing to Luke Skywalker and Yoda, both characters of Star Wars, and another firing to a spider and a snake (but not to other animals) (Quian Quiroga & Kreiman, 2010a). Quian Quiroga and Kreiman (2010b, p. 298) had the following observation on these findings: “…. one could still argue that since the pictures the neurons fired to are related, they could be considered the same concept, in a high level abstract space: ‘the basketball players,’ ‘the landmarks,’ ‘the Jedi of Star Wars,’ and so on.”

If these are category cells, there is obviously the question of what other objects are included in the category. But it is clear that the cells have meaning, even though the category might include other items.

References:

  1. Quian Quiroga, R. & Kreiman, G. (2010a). Measuring sparseness in the brain: Comment on Bowers (2009). Psychological Review, 117, 1, 291–297.
  2. Quian Quiroga, R. & Kreiman, G. (2010b). Postscript: About Grandmother Cells and Jennifer Aniston Neurons. Psychological Review, 117, 1, 297–299.

5. McClelland – “In the context of these observations, the Cerf experiment considered by Roy may not be as impressive. A neuron can respond to one of four different things without really having a meaning and interpretation equivalent to any one of these items.”

The Cerf experiment is not impressive? What McClelland is really questioning is the existence of highly selective cells in the brains of humans and animals and the meaning and interpretation associated with those cells. This obviously has a broader implication and raises questions about a whole range of neurophysiological studies and their findings. For example, are the “nest cells” of Lin et al. (2007) really category cells sending signals to the mouse brain that there is a nest nearby? Or should one really believe that Freedman and Miller (2008) found category cells in the monkey visual temporal cortex that identify certain categories of stimuli such as faces or objects? Or should one believe that Gothard et al. (2007) found category cells in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as the monkeys viewed them on a computer monitor? And how about the one neuron that Gothard et al. (2007) found that responded in particular to threatening monkey faces? And does this question about the meaning and interpretation of highly selective cells also apply to simple and complex receptive fields in the retinal ganglion cells and the primary visual cortex? Note that a Nobel Prize has already been awarded for the discovery of these highly selective cells.

The evidence for the existence of highly selective cells in the brains of humans and animals is substantial and irrefutable, although one can always ask theoretically, “what else does it respond to?” Note that McClelland’s question contradicts his own acceptance of place cells, which are themselves highly selective cells.

6. McClelland – “While we sometimes (Kumeran & McClelland, 2012 as in McClelland & Rumelhart, 1981) use localist units in our simulation models, it is not the neurons, but their interconnections with other neurons, that gives them meaning and interpretation….Again we come back to the patterns of interconnections as the seat of knowledge, the basis on which one or more neurons in the brain can have meaning and interpretation.”

“One or more neurons in the brain can have meaning and interpretation”: that sounds like localist representation, but obviously that is not what is meant. Anyway, there is no denying that knowledge is embedded in the connections between neurons, but that knowledge is integrated by the neurons to create additional knowledge. So the neurons hold additional knowledge that does not exist in the connections, and single-cell studies are focused on discovering the integrated knowledge that exists only in the neurons themselves. For example, the receptive field cells in the sensory processing systems and the hippocampal place cells show that some cells detect direction of motion, some code for color, some detect the orientation of a line and some detect a particular location in an environment. And there are cells that code for certain categories of objects. That kind of knowledge is not readily available in the connections. In general, consolidated knowledge exists within the cells, and that is where single-cell studies have generally focused.
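The integration point above can be sketched with a toy tuned cell. The weights below are invented for illustration (a cell tuned to vertical edges), not taken from any actual recording: individual connection weights carry fragments of knowledge, but only the cell's summed response encodes "vertical bar present."

```python
import numpy as np

# Hypothetical weights for a cell tuned to vertical edges (illustration only).
edge_weights = np.array([[-1.0, 2.0, -1.0],
                         [-1.0, 2.0, -1.0],
                         [-1.0, 2.0, -1.0]])

def cell_response(patch):
    """The cell's integrated activation for a 3x3 image patch."""
    return float(np.sum(edge_weights * patch))

vertical_bar = np.array([[0, 1, 0],
                         [0, 1, 0],
                         [0, 1, 0]])
horizontal_bar = vertical_bar.T

# No single connection "knows" about vertical bars; the knowledge that a
# vertical bar is present exists only in the cell's integrated response.
print(cell_response(vertical_bar))    # 6.0
print(cell_response(horizontal_bar))  # 0.0
```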

7. Plaut – “Asim’s main argument is that what makes a neural representation localist is that the activation of a single neuron has meaning and interpretation on a stand-alone basis. This is about how scientists interpret neural activity. It differs from the standard argument on neural representation, which is about how the system actually works, not whether we as scientists can make sense of a single neuron. These are two separate questions.”

Doesn’t “how the system actually works” depend on our making “sense of a single neuron?” The representation theory has always been centered around single neurons, whether they have meaning on a stand-alone basis or not. So how does making “sense of a single neuron” become a separate question now? And how are these two separate questions addressed in the literature?

8. Plaut – “My problem is that his claim is a bit vacuous because he’s never very clear about what a coherent ‘meaning and interpretation’ has to be like…. but never lays out the constraints that this is meaning and interpretation, and this isn’t. Since we haven’t figured it out yet, what constitutes evidence against the claim? There’s no way to prove him wrong.”

In the article, I used the standard definition of localist units from cognitive science, a simple one: localist units have meaning and interpretation. There is no need to invent a new definition of localist representation. The standard definition is well accepted by the cognitive science community, and I draw attention to that in the article with verbatim quotes from Plate, Thorpe and Elman. Here they are again.

  • Plate (2002): “Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”
  • Thorpe (1995, p. 550): “With a local representation, activity in individual units can be interpreted directly … with distributed coding individual units cannot be interpreted without knowing the state of other units in the network.”
  • Elman (1995, p. 210): “These representations are distributed, which typically has the consequence that interpretable information cannot be obtained by examining activity of single hidden units.”

The terms “meaning” and “interpretation” are constrained only by contrast with the alternative representation scheme, in which the “meaning” of a unit depends on other units. That is how they are constrained in the standard definition, and that definition has been there for a long time.
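A toy illustration of that constraint, with invented concepts and codes: in the localist scheme a single active unit can be read off directly, while in the distributed scheme the same observation is ambiguous without the other units.

```python
import numpy as np

concepts = ["dog", "cat", "car"]

# Localist: one unit per concept.
localist = np.eye(3)

# Distributed: each concept is a pattern over all units (invented codes).
distributed = np.array([[1, 1, 0],   # dog
                        [1, 0, 1],   # cat
                        [0, 1, 1]])  # car

def interpret_unit(codes, unit):
    """Concepts consistent with observing only that this one unit is active."""
    return [c for c, row in zip(concepts, codes) if row[unit] == 1]

print(interpret_unit(localist, 0))     # ['dog']  -- directly interpretable
print(interpret_unit(distributed, 0))  # ['dog', 'cat']  -- needs other units
```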

Neither Plaut nor McClelland has questioned the fact that receptive fields in the sensory processing systems have meaning and interpretation. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Here is part of the Nobel Prize citation:

“Thus, they have been able to show how the various components of the retinal image are read out and interpreted by the cortical cells in respect to contrast, linear patterns and movement of the picture over the retina. The cells are arranged in columns, and the analysis takes place in a strictly ordered sequence from one nerve cell to another and every nerve cell is responsible for one particular detail in the picture pattern.”

Neither Plaut nor McClelland has questioned the fact that place cells have meaning and interpretation. McClelland, in fact, accepts that place cells indicate locations in an environment, which means that he accepts that they have meaning and interpretation.

9. Plaut – “If you look at the hippocampal cells (the Jennifer Aniston neuron), the problem is that it’s been demonstrated that the very same cell can respond to something else that’s pretty different. For example, the same Jennifer Aniston cell responds to Lisa Kudrow, another actress on the TV show Friends with Aniston. Are we to believe that Lisa Kudrow and Jennifer Aniston are the same concept? Is this neuron a Friends TV show cell?”

I want to clarify three things here. First, localist cells are not necessarily grandmother cells. Grandmother cells are a special case of localist cells, and this is made clear in the article. For example, in the primary visual cortex there are simple and complex cells that are tuned to visual characteristics such as orientation, color, motion and shape. They are localist cells, but not grandmother cells.

Second, the analysis in the article of the interactive activation (IA) model of McClelland and Rumelhart (1981) shows that a localist unit can respond to more than one concept in the next higher level. For example, a letter unit can respond to many word units. And the simple and complex cells in the primary visual cortex will respond to many different objects.

Third, there are indeed category cells in the brain. Response No. 2 above to McClelland’s comments cites findings in neurophysiology on category cells. So the Jennifer Aniston/Lisa Kudrow cell could very well be a category cell, much like the one that fired to spiders and snakes (but not to other animals) and the one that fired for both the Eiffel Tower and the Tower of Pisa (but not to other landmarks). But category cells have meaning and interpretation too. The Jennifer Aniston/Lisa Kudrow cell could be a Friends TV show cell, as Plaut suggested, but it still has meaning and interpretation. However, note that Koch (2011, p. 18, 19) reports finding another Jennifer Aniston MTL cell that didn’t respond to Lisa Kudrow:

One hippocampal neuron responded only to photos of actress Jennifer Aniston but not to pictures of other blonde women or actresses; moreover, the cell fired in response to seven very different pictures of Jennifer Aniston.

References:

  1. Koch, C. (2011). Being John Malkovich. Scientific American Mind, March/April, 18–19.
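The fan-out described in the second point above can be sketched as follows. The letters, words and connections are invented for illustration and are not the IA model's actual lexicon or parameters:

```python
# Each localist letter unit keeps one interpretation (the letter), yet its
# activation feeds many word units at the next level, as in the
# McClelland-Rumelhart interactive activation (IA) model.
letter_to_words = {
    "t": ["trip", "time", "take"],
    "a": ["take", "able"],
}

def words_supported_by(letter):
    """Word units excited by a single active letter unit."""
    return letter_to_words.get(letter, [])

print(words_supported_by("t"))  # ['trip', 'time', 'take']
```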

10. Plaut – “Only a few experiments show the degree of selectivity and interpretability that he’s talking about…. In some regions of the medial temporal lobe and hippocampus, there seem to be fairly highly selective responses, but the notion that cells respond to one concept that is interpretable doesn’t hold up to the data.”

There are place cells in the hippocampus that identify locations in an environment. Locations are concepts. And McClelland admits place cells represent locations. There is also plenty of evidence for the existence of category cells in the brain (see Response No. 2 above to McClelland’s comments), and categories are, of course, concepts. And simple and complex receptive fields also represent concepts such as direction of motion, line orientation, edges, shapes, color and so on. There is thus an abundance of data in neurophysiology showing that “cells respond to one concept that is interpretable,” and that evidence is growing.

The existence of highly tuned and selective cells that have meaning and interpretation is now beyond doubt, given the volume of evidence from neurophysiology over the last four decades.


Comments — comments are now closed.


  1. Evidence from place cells proves the point that individual neurons do not have meaning and interpretation, despite a wording error on my part.

    I confess to having erred in using the word ‘represent’ when discussing the activation of a place cell by unrelated locations in two different environments. The correct wording of my point would be as follows:

    So-called place cells in the hippocampus are activated when the animal is in one place in one environment, and when the animal is in another place in a second environment. Because of this, it is only possible to interpret the activation of the place cell in context. This clearly undermines Roy’s claim (for place cells at least) that they ‘have meaning and interpretation’ when considered individually. The place cells participate in the representation of more than one location in space, as claimed by distributed theories of representation, but by themselves they do not ‘represent’ that location.

  2. Asim Roy says:

    What McClelland is referring to could be the case of place cell “remapping.” But even if the interpretation of the place cell is “in context,” it still has “meaning and interpretation” on a stand-alone basis; its interpretation does not depend on the activations of other cells.

    Here is a recent study of place cells in epilepsy patients at the UCLA medical school. Ekstrom et al. (2003) investigated single-neuron responses in the hippocampus and the parahippocampal region of the human brain. In total, they recorded directly from 317 neurons (67 cells in the hippocampus, 54 in the parahippocampal region, 111 in the amygdala, and 85 in the frontal lobes) while the patients played a taxi-driver computer game in which they explored a virtual town, searching for random passengers and delivering them to fixed locations (e.g. shops). The town had 9 buildings, of which 3 were shops, each with the same distinctive shop-front on all sides. They found cells that respond to specific spatial locations and cells that respond to views of landmarks. They also found cells in the frontal and temporal lobes that responded to the subjects’ navigational goals. Excluding interaction effects, they found 31 out of 279 cells to be bona fide place cells. Of the view cells, 29 out of 33 responded preferentially to a single object during navigation, such as a specific shop or passenger. They also report that 59 out of 279 cells responded to the subjects’ goal (that is, one of the target shops or passengers).

    Burgess and O’Keefe (2003), in reviewing these results from Ekstrom et al. (2003), note their consistency with prior studies (p. 517): “These data are consistent with previous single-unit recording work in animals, including the finding of place cells in the hippocampus of rats [O’Keefe, J. and Dostrovsky, J. (1971), O’Keefe, J. and Nadel, L. (1978), Muller, R.U. (1996)] and monkeys [Matsumura, N. et al. (1999)] and the finding of viewpoint-independent spatial view cells in the vicinity of the hippocampus in monkeys [Rolls et al. (1997)].” They also note that (p. 517): “Although the use of intra-cranial EEG depth electrodes for localizing epileptic foci in such patients is widespread, the use of single-unit recording is currently very rare and offers a unique insight into the mechanisms of human cognition.”

    References:

    Burgess, N. and O’Keefe, J. (2003). Neural representations in human spatial memory. Trends in Cognitive Sciences, 7, 12, 517–519.

    Ekstrom, A.D., Kahana, M., Caplan, J., Fields, T., Isham, E., Newman, E., Fried, I. (2003). Cellular networks underlying human spatial navigation. Nature 425, 184–188.

  3. acmf says:

    My ideas are mostly theoretical, but I have been thinking about this exact thing for a while and thought I would say something.

    To me it appears that there are two major tasks that the brain does. Pattern Recognition (sensory systems) and Pattern Generation (motor system).

    Pattern generation has been shown again and again to be a population activity (central pattern generators, Churchland et al.’s recent paper). To me, the suggestion of a localist representation in this case doesn’t make sense.

    Now pattern recognition is different. We know that the brain learns patterns in hierarchies: the primary sensory cortices first, higher-level sensory cortices next, leading up to multimodal areas. All the evidence points to the representations in the lower levels of this hierarchy being distributed in the sense that Asim points out in his paper. But as one gets higher up this hierarchy, the representation gets less and less distributed, until at one point we see higher-order concepts being represented by single neurons. It makes sense from a pattern recognition point of view as well. In the case of the visual system, the first few layers recognize patterns in direct sensory data (edges). The next layer recognizes patterns in these edges, the next one recognizes patterns in those patterns, until we get to concepts like faces and cars, or even higher ones like family members and celebrities. My claim is that the degree of distribution (if I can call it that) gets lower and lower as one goes up this ladder, until we get to a point where we get a Jennifer Aniston cell.

    References

    Churchland, Mark M., et al. “Neural population dynamics during reaching.” Nature (2012).

  4. GaryChurch says:

    This has nothing to do with safeguarding the planet.

  5. Hi everyone

    I agree with Asim Roy’s localist neurons, and his assertion that distributed, connectionist networks are composed of such neurons. Indeed, the idea of cognition emerging from complex connections among simple, bit-like neurons is an insult to neurons.

    For a moment, forget neurons and consider a single-cell paramecium, which can swim around, avoid obstacles and predators, find food and mates and have sex, all without a single synaptic connection. Single-cell giant amoebae can escape mazes and solve problems, without synaptic connections. If protozoa are so smart, would neurons be so stupid? How would Plaut and McClelland explain the cognitive abilities of single-cell creatures? (And how would they explain memory in the brain, given that synaptic proteins are transient, yet memories can last a lifetime?)

    Paramecium and amoeba use their cytoskeletal proteins, e.g. microtubules, to manage their cognitive abilities. Those same microtubules are plentiful in neurons. In this paper
    http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002421
    we showed how dendritic microtubules can encode synaptic information by CaMKII phosphorylation. Interpretation and meaning in single neurons are easy with microtubule information processing.

    A larger question is how localist neuronal content relates to larger-scale networks. Much work in recent years suggests scale-invariant, 1/f, ‘fractal-like’ (~holographic) representation in the brain. This work has assumed a bottom floor of individual neurons, but I’m betting such scale-invariance extends downward within cells to microtubule-based processing. A billion tubulins (microtubule subunits) per neuron switching in the megahertz range is about 10^15 operations per second per neuron.

    Two final points: 1) in my opinion, too much attention is paid to spiking/axonal firings. It seems likely that post-synaptic (dendritic-somatic) integration is where cognition occurs, spiking merely conveying the results of the cognition to the next layer of neurons. 2) Gap junction electrical synapses, another form of connection, are related to synchrony and (in the case of dendritic-dendritic gap junctions) could enable collective integration among many neuronal dendrites and soma. (See: Hameroff, S. (2010). The “conscious pilot”—dendritic synchrony moves through the brain to mediate consciousness. J. Biol. Phys., 36, 71–93, and Hameroff, S. R. (2012). How quantum brain biology can rescue conscious free will. Frontiers in Integrative Neuroscience. doi: 10.3389/fnint.2012.00093.)

    A.I. should first attempt to simulate a paramecium before worrying about a brain.

    Stuart Hameroff M.D.
    Professor, Anesthesiology and Psychology
    Director, Center for Consciousness Studies
    The University of Arizona, Tucson, Arizona
    http://www.quantum-mind.org

  6. Shankar-K says:

    If a cell “C” is a place cell that fires at a position x1 in an environment E1 and fires at a position x2 in an environment E2, how can C have a stand-alone meaning? There has to be at least one other cell that informs about the environment E1 or E2. Information-theoretically, all we know is P(C|E).

    I think Asim used a good example, McClelland’s IA model, in the paper to illustrate the similarities of the localist and distributed representations. Let me try to use the same example to point out the main distinction between the two representations. The node corresponding to the letter “t” in the letter-layer will be active for many different nodes in the word-layer (say “trip” or “time” or …). Asim points out that each node in the letter-layer has a precise meaning: the letter itself. But this is possible only when we have the a priori information that we are looking at the letter category. What would happen if, for example, the “t”-node in the letter-layer also got activated when you smell an orange? Does the node mean the letter “t” or the smell of an orange? We need extra information to infer its meaning. Remember that most place cells also respond to particular odors, and there are many papers that classify cells as conjunctive place-odor cells.

    We need to remember that to prove a theory we need infinite examples, but to disprove it we just need one. The distributed representation is a very loose hypothesis, so it is very difficult to disprove. But Asim has given a rather tight definition of localist representation, and it is hence more vulnerable to disproof. Theoretically, it appears easy to disprove the localist interpretation, because there exist cells conjunctively coding for stimuli across radically different categories (like place and odor).

    Maybe we do not have to think of across-category conjunctive coding as evidence against localist representation. However, for that, the entire modeling perspective has to change: is conjunctive coding learned to enhance output performance, or does it happen randomly for no reason? The former would support localist representation and the latter would support distributed representation.

  7. Shankar-K says:

    Hameroff makes an important point that there are many levels of representations lower than neural spikes that could be responsible for cognition, and researchers are not paying attention there. True. But we have not yet sufficiently explored the possibility of explaining cognition at the level of neurons, so going to deeper levels might result in heavy/unwarranted theoretical speculations as in String theory. We have to be extremely careful here because the hypothesis space would be so enormous that we will not be able to test out any theory.

    For amoeba and paramecium, we don’t have a choice but to resort to the molecular level of theory, because the hypothesis set is naturally bounded above at the single-cell level. But the important point is that the biophysicists and chemists are modeling these only at the molecular level, and not attempting to go lower to the quarks-level. It is not a good idea to abandon hypotheses from a higher/simpler level when it is not completely explored.

  8. Hi again

    • Shankar-K on December 6, 2012 6:10 am
    Hameroff makes an important point that there are many levels of representations lower than neural spikes that could be responsible for cognition, and researchers are not paying attention there. True.

    Stuart
    Actually I am making two points. 1) There is a lower level responsible for cognition inside neurons in microtubules, and 2) dendritic integration is far more relevant to cognition (and consciousness) than spikes.

    Shankar-K
    But we have not yet sufficiently explored the possibility of explaining cognition at the level of neurons, so going to deeper levels might result in heavy/unwarranted theoretical speculations as in String theory.

    Stuart
    I am no fan of string theory, but using it as a bogey-man to fend off consideration of intra-neuronal processes is absurd. The deeper levels to which I am specifically referring are microtubules (including possible quantum computing in microtubules).

    Shankar-k
    We have to be extremely careful here because the hypothesis space would be so enormous that we will not be able to test out any theory.

    Stuart
    The only hypothesis space I am suggesting involves quantum and classical information processing in microtubules. I have published 20 testable predictions of the Penrose-Hameroff Orch OR theory, a number of which have been validated. I am still waiting for any testable prediction of cognition at the level, exclusively, of neuronal membranes (you say at the level of neurons, but neurons are full of microtubules). You can’t explain synaptic membrane plasticity without microtubules (because synaptic membrane proteins are short-lived and memories last lifetimes).

    Shankar-k
    For amoeba and paramecium, we don’t have a choice but to resort to the molecular level of theory, because the hypothesis set is naturally bounded above at the single-cell level.

    Stuart
    Not just molecular (which implies soluble diffusion mechanisms). Microtubules are solid-state devices with information processing capabilities. Do you really think evolution would abandon intra-cellular cognition?

    Shankar-k
    But the important point is that the biophysicists and chemists are modeling these only at the molecular level, and not attempting to go lower, to the quark level. It is not a good idea to abandon hypotheses from a higher/simpler level when it has not been completely explored.

    Stuart
    You don’t have to abandon anything. But face reality: after 60 years of neuronal membrane-level investigations, cognition and consciousness remain mysterious. Do you even have a single testable prediction for neuronal membrane-based cognition without internal microtubules?

  9. Shankar-K says:

    This conversation thread is about localist-vs-distributed representation of information at the level of neurons. We should probably not hijack this conversation thread by further discussing the utility and/or possibility of Hameroff’s sub-neural (possibly quantum) accounts of cognition here. These are crucial issues, but they need a separate thread. So I am going to privately respond to Dr. Hameroff in an email and he can start a separate thread of conversation if he finds it worth following.

  10. Asim Roy says:

    1. Shankar-K – “We need to remember that to prove a theory we need infinite examples but to disprove it, we just need one.”

    Asim –
    Given your criterion for proving a theory, I guess the same should hold the other way around, and there’s probably more than one example to disprove distributed representation. Hameroff provides two good examples of single-cell creatures, and there probably are billions of such single-cell creatures. There’s also more than four decades of research on receptive fields being tuned for particular functions like detection of line orientation, motion and color, and Hubel and Wiesel won the Nobel Prize for discovering this “secret code.” And there probably are trillions of such receptive fields in all kinds of living things. That’s more than one example to disprove distributed representation.

    2. Shankar-K – “The distributed representation is a very loose hypothesis, so it is very difficult to disprove it.”

    Asim –
    In what way is it a loose hypothesis, and why is it difficult to disprove? Here’s a definition from Plate (2002):

    “In distributed representations concepts are represented by patterns of activity over a collection of neurons. This contrasts with local representations, in which each neuron represents a single concept, and each concept is represented by a single neuron. Researchers generally accept that a neural representation with the following two properties is a distributed representation (e.g., Hinton et al, 1986):
    • Each concept (e.g., an entity, token, or value) is represented by more than one neuron (i.e., by a pattern of neural activity in which more than one neuron is active.)
    • Each neuron participates in the representation of more than one concept. Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”

    Which part is “loose” in the distributed representation theory?

    References:

    Plate T. (2002). Distributed representations. In: Encyclopedia of cognitive science. Nadel L, editor. Macmillan, London.

    Thorpe S. (1995). Localized versus distributed representations. In The Handbook of brain theory and neural networks, ed Arbib. MIT Press, Cambridge.
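    To make the two definitions quoted above concrete, here is a tiny illustrative sketch (all concept names and activity patterns are invented for the example; this is not a model of any brain data):

```python
# Illustrative sketch of the two schemes described by Plate (2002).
# All concept names and activity patterns are invented examples.

# Localist: each neuron stands for exactly one concept, so a single
# active unit can be interpreted in isolation.
localist = {"cat": 0, "dog": 1, "car": 2}  # concept -> dedicated neuron index

def localist_decode(activity):
    """Interpret a localist activity vector: the active unit IS the concept."""
    return [concept for concept, i in localist.items() if activity[i] > 0]

print(localist_decode([0, 1, 0]))  # -> ['dog']: neuron 1 alone tells us 'dog'

# Distributed: each concept is a pattern over several neurons, and each
# neuron takes part in several patterns, so no single unit is
# interpretable on its own.
distributed = {
    "cat": (1, 1, 0, 0),
    "dog": (0, 1, 1, 0),
    "car": (0, 1, 0, 1),
}

def distributed_decode(activity):
    """A unit's meaning depends on the whole pattern, not on itself."""
    return [c for c, p in distributed.items() if tuple(activity) == p]

# Neuron 1 is active in ALL three patterns; seeing it fire tells us
# nothing by itself -- we must read the other units to know the concept.
print(distributed_decode([0, 1, 1, 0]))  # -> ['dog']
```

    Note how neuron 1 in the distributed scheme satisfies the second property in the definition: it participates in every concept, so its activity has no stand-alone interpretation.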

    3. Shankar-K – “But Asim has given a rather tight definition for localist representation, and hence is more vulnerable to disproof.”

    Asim –
    I have not invented a new definition for localist representation. It’s an existing and well-understood definition from cognitive science. Here are a few more citations on the difference between the two representation schemes:

    • Thorpe (1995, p. 550): “With a local representation, activity in individual units can be interpreted directly … with distributed coding individual units cannot be interpreted without knowing the state of other units in the network.”

    • Elman (1995, p. 210): “These representations are distributed, which typically has the consequence that interpretable information cannot be obtained by examining activity of single hidden units.”

    References:

    Elman, J. (1995). Language as a dynamical system. In R. Port & T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition (pp. 195–223). MIT Press, Cambridge.

    4. Shankar-K – “Theoretically, it appears easy to disprove the localist interpretation, because there exist cells conjunctively coding for stimuli across radically different categories (like place and odor).”

    Asim –
    Conjunctive coding doesn’t disprove localist theory. Conjunctively coded cells are not devoid of “meaning and interpretation” on a stand-alone basis. They actually add to the evidence for localist theory.

    For example, Quian Quiroga et al. (2009) found a neuron in the entorhinal cortex of a subject that responded (p. 1308) “selectively to pictures of Saddam Hussein as well as to the text ‘Saddam Hussein’ and his name pronounced by the computer….. There were no responses to other pictures, texts, or sounds.” They call these neurons “triple invariant” ones: neurons that had the visual invariance property and also responded significantly to the spoken and written name of the same person. But this is an example of a conjunctively coded cell that has “meaning and interpretation” on a stand-alone basis.

    Here’s from a recent paper by Barry and Doeller (2010):

    “While position is the most distinct correlate of place cell activity, a parallel body of work suggests how other factors might be encoded in addition to, or possibly in place of, the spatial code. Two examples are provided by Wood et al. (1999) and more recently Manns and Eichenbaum (2009). In the former case, rats moved around an open enclosure to perform a delayed-non-match-to-sample task, and in the latter, animals circled an annular maze encountering objects placed onto the track. Under these conditions, many hippocampal pyramidal cells encode nonspatial cues (e.g., presence of a specific odor) in addition to a primary spatial correlate (Manns and Eichenbaum, 2009). These cells exhibit conjunctive properties, for example, responding optimally to a particular combination of position and odor. Similar results have been observed when auditory fear conditioning was conducted while rats freely perambulated: place cells retained their spatial firing but at the same time firing became synchronized to the audible conditioned stimulus (Moita et al., 2003). Also, hippocampal recordings made from humans (patients with pharmacologically intractable epilepsy were asked to navigate in virtual reality) revealed that approximately a quarter of the cells characterized as place cells had conjunctive representations and were modulated by the subject’s destination (Ekstrom et al., 2003).”

    Conjunctive cells have meaning and interpretation too and on a stand-alone basis. The meaning and interpretation is not dependent on reading the activity of other cells.

    References:

    Quian Quiroga, R., Kraskov, A., Koch, C., & Fried, I. (2009). Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain. Current Biology, 19, 1308–1313.

    Barry, C., & Doeller, C. F. (2010). Conjunctive representations in the hippocampus: what and where? The Journal of Neuroscience, 30(3), 799–801.
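    The stand-alone interpretability of such a conjunctive cell can be illustrated with a minimal sketch (the place and odor labels are hypothetical, chosen only for the example, not taken from any of the studies cited above):

```python
# Hypothetical conjunctive place-odor cell (illustration only; the labels
# "east_arm" and "banana" are invented, not from any study).
# The cell fires only for one specific place-odor combination, so its
# activity is interpretable in isolation: if it fires, the animal is at
# the preferred place AND smells the preferred odor. No other cell needs
# to be read to decode this.

def conjunctive_place_odor_cell(place, odor):
    """Return 1 (spike) only for the cell's preferred conjunction."""
    return 1 if (place == "east_arm" and odor == "banana") else 0

print(conjunctive_place_odor_cell("east_arm", "banana"))  # fires: 1
print(conjunctive_place_odor_cell("east_arm", "lemon"))   # silent: 0
print(conjunctive_place_odor_cell("west_arm", "banana"))  # silent: 0
```

    The point of the sketch is that the cell’s meaning (“this place with this odor”) is carried by the cell itself, not by a pattern over other cells.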

    5. Shankar-K – “Maybe we do not have to think of across-category conjunctive coding as evidence against localist representation. However, for that, the entire modeling perspective has to be changed—is conjunctive coding learned to enhance output performance, or does it happen randomly for no reason? The former would support the localist representation and the latter would support distributed representation.”

    Asim –
    I am not sure what the question or the issue is. As far as I know, conjunctive coding doesn’t “happen randomly for no reason.” You have to cite the literature where there is evidence for the “no reason” case. And conjunctive coding, whatever the reason for its occurrence, does not support distributed representation. See Response 4 above.

    6. Shankar-K – “What would happen if, for example, the “t”-node in the letter-layer also gets activated when you smell an orange? Does the node have the meaning of the letter “t” or the smell of orange? We need extra information to infer its meaning. Remember that most place cells also respond to particular odors, and there are many papers that classify cells as conjunctive place-odor cells.”

    Asim –
    I have answered the conjunctive coding issue in Responses 4 and 5 above.

    7. Shankar-K – “If a cell “C” is a place cell that fires at a position x1 in an environment E1 and fires at a position x2 in an environment E2, how can C have a standalone meaning?”

    Asim –
    There is remapping of place cells. Perhaps you are referring to that phenomenon. Here’s again a citation from Barry and Doeller (2010):

    “In the open environments that many experimenters prefer, and in the absence of particular task demands, activity is independent of the animal’s orientation, stable between visits to the same position even across days, and robust to the removal of individual spatial cues (O’Keefe, 1976). Transportation of the animal to a different and sufficiently distinct enclosure typically results in a new representation being established: place fields change position relative to one another and radically alter firing rates. Remapping, as this effect is known, has been understood as a process by which the hippocampus generates independent codes to represent distinct spatial contexts (Wills et al., 2005). Far from being a uniquely rodent curiosity, place cells seem to be important to the wider function of the hippocampus, because cells with similar properties have been identified in a range of animals as disparate as birds, monkeys, and humans (Ekstrom et al., 2003).”

    Place cells still have “meaning and interpretation” on a stand-alone basis in a different environment. Their meaning is not dependent on reading other cells.

  11. Hi everyone

    Regarding the interesting discussion between Asim Roy and Shankar-k
    on distributed vs localist representation, let me say again I believe this
    is a false dichotomy.

    Both types of representation occur in the brain, implying a scale-invariant, 1/f, fractal or holographic-like representation, for which ample evidence exists.
    Shankar-K on December 6, 2012 11:15 am
    This conversation thread is about localist-vs-distributed representation of information at the level of neurons. We should probably not hijack this conversation thread by further discussing the utility and/or possibility of Hameroff’s sub-neural (possibly quantum) accounts of cognition here. These are crucial issues, but they need a separate thread. So I am going to privately respond to Dr. Hameroff in an email and he can start a separate thread of conversation if he finds it worth following.
    Stuart
    Sub-neural, possibly quantum processes are very much relevant to the debate about localist-vs-distributed representations (false dichotomy it may be). But I accept the terms of your surrender. I think I’ve made my point that A.I. approaches (those intended to reproduce essential brain functions) based exclusively on neuronal membrane activities are bogus.

    Regarding the overall relevance of this thread to the Lifeboat Foundation mission, if everything goes to hell we may need to resort to downloading consciousness into some artificial medium, and for that we need to understand the mechanism by which consciousness occurs. Current A.I. approaches won’t work.

  12. Shankar-K says:

    I can agree with Asim that conjunctive coding is not necessarily evidence against localist representation. There is no reason why neurons from a localist representation should not be able to learn to conjunctively code to enhance performance in specific tasks (Manns and Eichenbaum, 2009, is a good example that Asim points out).

    But I’m very confused by Asim’s point about neurons having a “standalone meaning”. I don’t think anybody disagrees that there is significant evidence for neurons with highly selective receptive fields. But in order to attribute a standalone meaning to all those neurons, I think we need to make sure that they do not show activity in completely different contexts—that is, when the animals perform categorically different tasks. It is very possible that some neurons (like the orientation/color detecting cells) have this property and can legitimately be characterized as a localist representation, but the activity of many others may be context dependent and should not be characterized as a local representation. And in this aspect, I agree with Hameroff that the brain most likely has both local and distributed representations.

    Asim clearly accepts remapping of place cells in different environments but insists that these cells have a meaning on a standalone basis, and that confuses me the most. How can that be? There have to be other cells that disambiguate the two environments, and the meaning of the place cells has to be conditional on the activity of those other cells… right? Here I would like to point out that place-cells do more than just remap in different spatial environments; recent evidence shows that the place-cells can also be interpreted as time-cells (see, e.g., Pastalkova et al., 2008; MacDonald et al., 2011) when the task set-up is changed. The same neurons that code for specific places while the rat performs a spatial navigation task, respond very differently when the task set up is changed. When the rat stays in a particular location (say running on a treadmill, or waiting to perform a memory task), those cells respond at different points in time—as though coding for time since the beginning of the trial. The information conveyed by a particular place cell is conditional on the specific environment and the specific task. I can certainly accept that the neuron has a meaning, but I can’t accept that it has a meaning on a standalone basis. Even if we ascribe a meaning to a neuron on a standalone basis w.r.t. a particular task, all we have to do to discard that meaning is to find a completely different task in which that neuron still responds, but for which that meaning will be invalid—like how it is invalid to interpret the neuron as a place cell when the animal performs a task from a fixed location (it makes more sense to interpret it as a time-cell in those tasks).

    When I previously said that the definition of distributed representation is loose when compared to the definition of localist representation, this is what I meant. In a distributed representation any task-relevant concept will be represented by a pattern distributed across a set of neurons—but there is no constraint on what that pattern should look like—some of the concepts can even be represented by single neurons. In some sense, the distributed representation encompasses any possible localist representation. So, when a single neuron is active in multiple tasks/contexts, I would consider it negative evidence for that neuron denoting a localist representation, but it is not negative evidence for distributed representation. The looseness of the definition of distributed representation makes it impossible to pick a counterexample to disprove it. But that does not mean that the hypothesis of distributed representation in the entire brain is a good hypothesis, because a hypothesis that does not yield itself to clear testing is of little utility. On the other hand, the definition of localist representation is much more stringent, and much more amenable to AI implementation, but it is much easier to find negative evidence for it in the brain. Note that I’m not saying that there cannot exist localist representations in the brain; I’m just saying that if you pinpoint a localist representation in the brain (a neuron with a standalone meaning), then there is a possibility of disproving that by getting that neuron activated in a different context that cannot be attributed the meaning you started with. So, as McClelland previously said, at least place-cells should not be considered a localist representation.

    To wrap up, it appears to me that my (and probably McClelland’s) disagreement with Asim might be purely at a semantic level, on what he means by “standalone”.

    References:

    1. Pastalkova, E., Itskov, V., Amarasingham, A., & Buzsaki, G. (2008). Internally generated cell assembly sequences in the rat hippocampus. Science, 321(5894), 1322–1327.

    2. MacDonald, C. J., Lepage, K. O., Eden, U. T., & Eichenbaum, H. (2011). Hippocampal time cells bridge the gap in memory for discontiguous events. Neuron, 71(4), 737–749.

  13. Asim Roy says:

    1. Shankar-K – “When I previously said that the definition of distributed representation is loose when compared to the definition of localist representation, this is what I meant. In a distributed representation any task-relevant concept will be represented by a pattern distributed across a set of neurons—but there is no constraint on what that pattern should look like—some of the concepts can even be represented by single neurons. In some sense, the distributed representation encompasses any possible localist representation.”

    Asim –

    Distributed representation does not “encompass any possible localist representation.” Distributed representation implies a pattern consisting of at least two units, none of which has any meaning on an individual basis. Here’s from Plate (2002) again:

    “Researchers generally accept that a neural representation with the following two properties is a distributed representation (e.g., Hinton et al, 1986):
    • Each concept (e.g., an entity, token, or value) is represented by more than one neuron (i.e., by a pattern of neural activity in which more than one neuron is active.)
    • Each neuron participates in the representation of more than one concept. Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”

    So distributed representation involves “more than one neuron” and localist representation involves just one neuron. Please take a look at the definition carefully.

    References:

    Plate T. (2002). Distributed representations. In: Encyclopedia of cognitive science. Nadel L, editor. Macmillan, London.

    2. Shankar-K – “So, when a single neuron is active in multiple tasks/contexts, I would consider it negative evidence for that neuron denoting a localist representation, but it is not negative evidence for distributed representation.”

    Asim –

    On the one hand, you admit in the beginning that conjunctive coding does not disprove localist representation: “I can agree with Asim that conjunctive coding is not necessarily evidence against localist representation.” On the other hand, you now say: “when a single neuron is active in multiple tasks/contexts, I would consider it negative evidence for that neuron denoting a localist representation.” This is going around in circles and is confusing. Conjunctive coding could involve multiple tasks (e.g., indicating place, odor, or time) and still be localist, as long as its meaning does not depend on reading other cells.

    3. Shankar-K – “The looseness of the definition of distributed representation makes it impossible to pick a counterexample to disprove it.”

    Asim –

    You keep on harping on the “looseness” of the definition of distributed representation. There is no “looseness” in the definition and there are plenty of counter examples in neurophysiology to disprove it. Here’s from Plate (2002) again on the definition:

    “In distributed representations concepts are represented by patterns of activity over a collection of neurons. This contrasts with local representations, in which each neuron represents a single concept, and each concept is represented by a single neuron. Researchers generally accept that a neural representation with the following two properties is a distributed representation (e.g., Hinton et al, 1986):
    • Each concept (e.g., an entity, token, or value) is represented by more than one neuron (i.e., by a pattern of neural activity in which more than one neuron is active.)
    • Each neuron participates in the representation of more than one concept. Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”

    Which part of this definition is “loose”? On counterexamples to disprove distributed representation, I gave you plenty last time.

    References:

    Plate T. (2002). Distributed representations. In: Encyclopedia of cognitive science. Nadel L, editor. Macmillan, London.

    4. Shankar-K – “But that does not mean that the hypothesis of distributed representation in the entire brain is a good hypothesis, because a hypothesis that does not yield itself to clear testing is of little utility.”

    Asim –

    I am assuming that this statement is based on your “looseness” argument, which doesn’t hold. You would have to cite literature to argue for “looseness.” You are trying to make distributed representation a vague concept, but it is not. I can cite plenty of papers where the meaning of distributed representation is well defined and concrete. So the above statement doesn’t make sense.

    5. Shankar-K – “On the other hand, the definition of localist representation is much more stringent, and much more amenable to AI implementation, but it is much easier to find negative evidence for it in the brain.”

    Asim –

    The definition of distributed representation is as stringent as the localist one and there is plenty of neurophysiological evidence for localist representation as cited in the paper.

    6. Shankar-K – “Note that I’m not saying that there cannot exist localist representations in the brain; I’m just saying that if you pinpoint a localist representation in the brain (a neuron with a standalone meaning), then there is a possibility of disproving that by getting that neuron activated in a different context that cannot be attributed the meaning you started with.”

    Asim –

    As a start, why don’t you take a look at retinal ganglion and visual cortex cells? They have been shown to be tuned to visual characteristics such as orientation, color, motion, shape and so on. And there are billions of those cells. Why don’t you try to show that they can have a different meaning in a different context?

    7. Shankar-K – “I can agree with Asim that conjunctive coding is not necessarily evidence against localist representation. There is no reason why neurons from a localist representation should not be able to learn to conjunctively code to enhance performance in specific tasks (Manns and Eichenbaum, 2009, is a good example that Asim points out).”

    Asim – Thanks.

    8. Shankar-K – “But I’m very confused by Asim’s point about neurons having a “standalone meaning”. I don’t think anybody disagrees that there is significant evidence for neurons with highly selective receptive fields. But in order to attribute a standalone meaning to all those neurons, I think we need to make sure that they do not show activity in completely different contexts—that is, when the animals perform categorically different tasks. It is very possible that some neurons (like the orientation/color detecting cells) have this property and can legitimately be characterized as a localist representation, but the activity of many others may be context dependent and should not be characterized as a local representation.”

    Asim –

    It is fine for a localist cell to be context dependent. There is nothing wrong with that. Place cells, for example, get remapped, but they still can have meaning and interpretation on a stand-alone basis because you don’t have to read other cells to interpret their meaning. Ekstrom et al. (2003) had epilepsy patients play a taxi driver computer game. They found cells in the hippocampus that responded to specific spatial locations, in the parahippocampal region that responded to views of specific landmarks (e.g. shops) and in the frontal and temporal lobes that responded to navigational goals. Note that these place, view and goal cells were created almost instantaneously as the patients learned how to play the game.

    As long as a cell has meaning and interpretation on a stand-alone basis and its meaning does not depend on reading other cells, it is a localist cell. It doesn’t matter if its meaning changes from one context to another. There is nothing in the definition of localist representation that says that the meaning of a cell can’t change. There is substantial evidence for reuse of neural circuits, remapping of place cells being an example (see Anderson 2010). Reuse also means mapping additional functionality onto the same circuit.

    References:

    Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245.

    9. Shankar-K – “Asim clearly accepts remapping of place cells in different environments but insists that these cells have a meaning on a standalone basis, and that confuses me the most. How can that be? There have to be other cells that disambiguate the two environments, and the meaning of the place cells has to be conditional on the activity of those other cells… right?”

    Asim –

    I just explained this in Response 8 above. Look at the literature on place cells. Just ask the question: Are they reading other cells in order to interpret the meaning of a particular place cell?

    10. Shankar-K – “Here I would like to point out that place-cells do more than just remap in different spatial environments; recent evidence shows that the place-cells can also be interpreted as time-cells (see, e.g., Pastalkova et al., 2008; MacDonald et al., 2011) when the task set-up is changed. The same neurons that code for specific places while the rat performs a spatial navigation task, respond very differently when the task set up is changed. When the rat stays in a particular location (say running on a treadmill, or waiting to perform a memory task), those cells respond at different points in time—as though coding for time since the beginning of the trial. The information conveyed by a particular place cell is conditional on the specific environment and the specific task. I can certainly accept that the neuron has a meaning, but I can’t accept that it has a meaning on a standalone basis. Even if we ascribe a meaning to a neuron on a standalone basis w.r.t. a particular task, all we have to do to discard that meaning is to find a completely different task in which that neuron still responds, but for which that meaning will be invalid—like how it is invalid to interpret the neuron as a place cell when the animal performs a task from a fixed location (it makes more sense to interpret it as a time-cell in those tasks).”

    Asim –

    There is substantial evidence for reuse of neural circuits, remapping of place cells being an example (see Anderson 2010). Reuse also means mapping additional functionality onto the same circuit. What you cite is something similar.

    However, there is no conflict between neural reuse, in all its various manifestations, and localist representation. They still have meaning, as you described – “recent evidence shows that the place-cells can also be interpreted as time-cells..…The same neurons that code for specific places while the rat performs a spatial navigation task, respond very differently when the task set up is changed. When the rat stays in a particular location (say running on a treadmill, or waiting to perform a memory task), those cells respond at different points in time—as though coding for time since the beginning of the trial.” Did the interpretation “those cells respond at different points in time—as though coding for time since the beginning of the trial” depend on reading the activity of other cells? Of course not. And that’s what’s meant by “on a stand-alone” basis.

    References:

    Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245.

  14. Asim Roy says:

    To Shankar-K,

    You may have misunderstood what is meant by “stand-alone.” Read the section titled “The Cerf experiment” in my paper.

    Asim
