A dozen years ago, an auto accident left Nathan Copeland paralyzed, without any feeling in his fingers. Now that feeling is back, thanks to a robotic hand wired up to a brain implant.

“I can feel just about every finger – it’s a really weird sensation,” the 28-year-old Pennsylvanian told doctors a month after his surgery.

Today the brain-computer interface is taking a share of the spotlight at the White House Frontiers Conference in Pittsburgh, with President Barack Obama and other luminaries in attendance.

Read more

My guess is there is some QC help in this picture.

Artificial neural networks — systems patterned after the arrangement and operation of neurons in the human brain — excel at tasks that require pattern recognition, but are woefully limited when it comes to carrying out instructions that require basic logic and reasoning. This is a problem for scientists working toward the creation of Artificial Intelligence (AI) systems capable of performing complex tasks with minimal human supervision.

In a step toward overcoming this hurdle, researchers at Google’s DeepMind — the company that developed the Go-playing computer program AlphaGo — announced earlier this week the creation of a neural network that can not only learn but also use data stored in its memory to “logically reason” and make inferences to answer questions.
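
To get a feel for the kind of question answering this targets, consider a query that simple pattern matching cannot solve because it requires chaining two stored facts. The sketch below does that chaining symbolically with hard-coded lookups, purely to illustrate the task; the names and relations are invented, and the point of DeepMind’s system is that a network learns to store and chain such facts from data rather than being programmed to.

```python
# A toy, purely symbolic version of a two-step inference task.
# The facts and names below are invented for illustration; a DNC-style
# network would learn to store and chain facts like these on its own.

facts = {
    ("Freya", "parent"): "Fergus",   # Fergus is Freya's parent
    ("Fergus", "parent"): "Jodie",   # Jodie is Fergus's parent
}

def grandparent(person):
    # Answering "who is X's grandparent?" requires chaining two facts
    parent = facts.get((person, "parent"))
    return facts.get((parent, "parent")) if parent else None

print(grandparent("Freya"))  # -> Jodie
```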

For decades the efficient coding hypothesis has been a guiding principle in determining how neural systems can most efficiently represent their inputs. However, conclusions about whether neural circuits are performing optimally depend on assumptions about the noise sources encountered by neural signals as they are transmitted. Here, we provide a coherent picture of how optimal encoding strategies depend on noise strength, type, location, and correlations. Our results reveal that nonlinearities that are efficient if noise enters the circuit in one location may be inefficient if noise actually enters in a different location. This offers new explanations for why different sensory circuits, or even a given circuit under different environmental conditions, might have different encoding properties.

Citation: Brinkman BAW, Weber AI, Rieke F, Shea-Brown E (2016) How Do Efficient Coding Strategies Depend on Origins of Noise in Neural Circuits? PLoS Comput Biol 12(10): e1005150. doi:10.1371/journal.pcbi.1005150

Editor: Jeff Beck, Duke University, United States
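
The core claim is easy to demonstrate in a toy setting: a saturating response nonlinearity interacts very differently with noise added upstream (before the nonlinearity) versus downstream (after it). The sketch below is only a crude illustration using a correlation-based fidelity proxy, not the paper’s linear-nonlinear cascade model or its information-theoretic optimization, but it shows the qualitative effect: in this toy, the shallowest gain preserves the most stimulus correlation when noise enters upstream, while an intermediate gain does best when noise enters downstream.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=200_000)       # Gaussian stimulus
n = rng.normal(size=s.size)        # unit-variance noise source

def g(x, gain):
    return np.tanh(gain * x)       # saturating response nonlinearity

for gain in (0.5, 2.0, 8.0):
    y_up   = g(s + 0.5 * n, gain)      # noise enters BEFORE the nonlinearity
    y_down = g(s, gain) + 0.5 * n      # noise enters AFTER the nonlinearity
    r_up   = np.corrcoef(s, y_up)[0, 1]     # crude fidelity proxy
    r_down = np.corrcoef(s, y_down)[0, 1]
    print(f"gain={gain:3.1f}  upstream r={r_up:.3f}  downstream r={r_down:.3f}")
```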

Read more

The latest artificial intelligence (AI) system from DeepMind, the research company owned by Google’s parent, Alphabet, can now intelligently build on what’s already inside its memory, its programmers have announced.

Their new hybrid system – called a differentiable neural computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.

The DNC effectively combines external memory (like the external hard drive where all your photos get stored) with the neural network approach of AI, in which a massive number of interconnected nodes work dynamically to simulate a brain.
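
The ingredient that makes external memory usable by a neural network is differentiable, content-based addressing: a read is a soft, weighted blend over all memory rows rather than a hard lookup, so gradients can flow through the read and the whole system can be trained end to end. Below is a minimal numpy sketch of just that read step; it is an illustration with made-up sizes, not DeepMind’s code, and the full DNC (Graves et al., Nature 2016) adds write heads, usage-based memory allocation, and temporal links between writes.

```python
import numpy as np

def content_read(memory, key, beta):
    # Cosine similarity between the query key and every memory row
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    # Sharpened softmax turns similarities into soft read weights
    w = np.exp(beta * sims)
    w /= w.sum()
    # The read vector is a weighted blend of rows: fully differentiable
    return w @ memory, w

memory = np.random.randn(16, 8)              # 16 slots, 8-dim words (toy sizes)
key = memory[3] + 0.1 * np.random.randn(8)   # noisy query resembling slot 3
read_vec, weights = content_read(memory, key, beta=5.0)
print(weights.argmax())                      # -> 3: the matching slot dominates
```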

Read more