
Deep-learning neural networks have come a long way in the past several years—we now have systems that are capable of beating people at complex games such as shogi, Go and chess. But is the progress of such systems limited by their basic architecture? Shimon Ullman, with the Weizmann Institute of Science, addresses this question in a Perspectives piece in the journal Science and suggests some ways computer scientists might reach beyond simple AI systems to create artificial general intelligence (AGI) systems.

Deep learning networks are able to learn because they have been programmed to create artificial neurons and the connections between them. As they encounter new data, new neurons and communication paths between them are formed—very much like the way the human brain operates. But such systems require extensive training (and a feedback system) before they are able to do anything useful, which stands in stark contrast to the way that humans learn. We do not need to watch thousands of people in action to learn to follow someone’s gaze, for example, or to figure out that a smile is something positive.
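To make that contrast concrete, here is a minimal sketch, in plain NumPy, of the kind of training-plus-feedback loop being described (the example is an illustration, not something from Ullman's piece): a tiny two-layer network only learns the simple XOR pattern after thousands of passes in which an explicit error signal is propagated back to adjust the connection weights.

```python
# Minimal sketch of "extensive training plus a feedback system" (illustration only,
# not from the article): a tiny two-layer network typically needs thousands of passes,
# each correcting its weights with a backpropagated error signal, to learn XOR.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden connections
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):                       # the "extensive training" part
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)

    err = out - y                                # feedback: how wrong was the guess?
    grad_out = err * out * (1 - out)             # error signal at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)     # error signal pushed back to the hidden layer

    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # usually close to [0, 1, 1, 0]
```

Only after those thousands of corrective passes does the network reproduce a pattern a person would grasp from a handful of examples, which is exactly the gap the article highlights.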

Ullman suggests this is because humans are born with what he describes as preexisting network structures that are encoded into our neural circuitry. Such structures, he explains, provide growing infants with an understanding of the physical world in which they exist—a base upon which they can build the more complex skills that lead to general intelligence. If computers had similar structures, they, too, might develop physical and social skills without the need for thousands of examples.

Read more

There are a lot of people who like to describe themselves as “supremely logical.” The ones who insist that their every thought is controlled by reason and not emotions. Of course, this is incorrect. Outside of people with certain kinds of brain damage, people are not logical beings. Especially since there’s no definition of logic that matches what lots of people mean when they refer to “logic.”


Why so many men online love to use “logic” to win an argument, and then disappear before they can find out they’re wrong.

Read more

Genetic, epidemiologic, and biochemical evidence suggests that predisposition to Alzheimer’s disease (AD) may arise from altered cholesterol metabolism, although the molecular pathways that may link cholesterol to AD phenotypes are only partially understood. Here, we perform a phenotypic screen for pTau accumulation in AD-patient iPSC-derived neurons and identify cholesteryl esters (CE), the storage product of excess cholesterol, as upstream regulators of Tau early during AD development. Using isogenic induced pluripotent stem cell (iPSC) lines carrying mutations in the cholesterol-binding domain of APP or APP null alleles, we found that while CE also regulate Aβ secretion, the effects of CE on Tau and Aβ are mediated by independent pathways. Efficacy and toxicity screening in iPSC-derived astrocytes and neurons showed that allosteric activation of CYP46A1 lowers CE specifically in neurons and is well tolerated by astrocytes. These data reveal that CE independently regulate Tau and Aβ and identify a druggable CYP46A1-CE-Tau axis in AD.


Van der Kant et al. performed a repurposing drug screen in iPSC-derived AD neurons and identified compounds that reduce aberrant accumulation of phosphorylated Tau (pTau). Reduction of cholesteryl ester levels or allosteric activation of CYP46A1 by lead compounds enhanced pTau degradation independently of APP and Aβ.

Read more

Scientists at the University of Virginia School of Medicine have identified a potential explanation for the mysterious death of specific brain cells seen in Alzheimer’s, Parkinson’s and other neurodegenerative diseases.

The new research suggests that the neurons may die because of naturally occurring genetic variation in brain cells that were, until recently, assumed to be genetically identical. This variation – called “somatic mosaicism” – could explain why neurons in the hippocampus are the first to die in Alzheimer’s, for example, and why dopaminergic neurons in the substantia nigra are the first to die in Parkinson’s.

“This has been a big open question in neuroscience, particularly in various neurodegenerative diseases,” said neuroscientist Michael McConnell of UVA’s Center for Brain Immunology and Glia, or BIG. “What is this selective vulnerability? What underlies it? And so now, with our work, the hypotheses moving forward are that it could be that different regions of the brain actually have a different garden of these [variations] in their neurons, and that sets up different regions for decline later in life.”

Read more

The so-called “zombie disease” has been reported in deer, elk, and moose across 24 US states, according to a new warning by the US Centers for Disease Control and Prevention (CDC).

As of January 2019, at least 251 counties across the US, from northern Montana to southern Texas, have reported CWD in free-ranging cervids, members of the deer family. Farther afield, there are similar concerns for reindeer in Norway, Finland, and, to a lesser extent, South Korea.

Scientifically known as chronic wasting disease (CWD), the contagious neurological disease gets its sensational nickname because of its effect on the brain of cervids, including North American elk or wapiti, red deer, mule deer, black-tailed deer, white-tailed deer, sika deer, reindeer, and moose. Deer stricken with the disease suffer from drastic weight loss, abnormal behavior, stumbling, drooling, lack of coordination, aggression, excessive thirst, and a lack of fear of humans.

Read more

An experimental drug that bolsters ailing brain cells has raised hopes of a treatment for memory loss, poor decision making and other mental impairments that often strike in old age.

The drug could be taken as a daily pill by over-55s if clinical trials, which are expected to start within two years, show that the medicine is safe and effective at preventing memory lapses.

Tests in the lab showed that old animals had far better memory skills half an hour after receiving the drug. After two months on the treatment, brain cells which had shrunk in the animals had grown back, scientists found.

Read more

The transition from PCs to QCs will not merely continue the doubling of computing power, in accord with Moore’s Law. It will induce a paradigm shift, both in the power of computing (at least for certain problems) and in the conceptual frameworks we use to understand computation, intelligence, neuroscience, social interactions, and sensory perception.

Today’s PCs depend, of course, on quantum mechanics for their proper operation. But their computations do not exploit two computational resources unique to quantum theory: superposition and entanglement. To call them computational resources is already a major conceptual shift. Until recently, superposition and entanglement have been regarded primarily as mathematically well-defined but psychologically incomprehensible oddities of the quantum world—fodder for interminable and apparently unfruitful philosophical debate. But they turn out to be more than idle curiosities. They are bona fide computational resources that can solve certain problems that are intractable with classical computers. The best known example is Peter Shor’s quantum algorithm, which can, in principle, break encryptions that are impenetrable to classical algorithms.
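To give a flavor of those two resources, here is a back-of-the-envelope sketch (an illustration, not something from the essay) that tracks a two-qubit state vector with plain NumPy: a Hadamard gate puts one qubit into superposition, and a controlled-NOT gate then entangles the pair into a Bell state.

```python
# Back-of-the-envelope illustration (not from the essay) of superposition and
# entanglement, tracking a two-qubit state vector with plain NumPy.
import numpy as np

ket0 = np.array([1.0, 0.0])                       # single-qubit basis state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],                    # controlled-NOT: entangles two qubits
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(ket0, ket0)                       # start in |00>
state = np.kron(H, np.eye(2)) @ state             # first qubit now in (|0> + |1>)/sqrt(2)
state = CNOT @ state                              # Bell state (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], np.round(probs, 3))))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}: the two measurement outcomes are
# perfectly correlated, and the joint state cannot be written as a product of two
# single-qubit states -- the defining signature of entanglement.
```

Shor’s algorithm builds on exactly these ingredients at a much larger scale, placing a register in superposition over exponentially many values and using interference to extract the period that reveals a number’s factors.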

The issue is the “in principle” part. Quantum theory is well established and quantum computation, although a relatively young discipline, has an impressive array of algorithms that can in principle run circles around classical algorithms on several important problems. But what about in practice? Not yet, and not by a long shot. There are formidable materials-science problems that must be solved—such as instantiating quantum bits (qubits) and quantum gates, and avoiding an unwanted noise called decoherence—before the promise of quantum computation can be fulfilled by tangible quantum computers. Many experts bet the problems can’t adequately be solved. I think this bet is premature. We will have laptop QCs, and they will transform our world.

Read more