
How do you define consciousness?


Some theories are even duking it out in head-to-head tests, with researchers imaging the brains of volunteers as they perform different tasks in clinical test centers across the globe.

But unlocking the neural basis of consciousness doesn’t have to be confrontational. Rather, theories can be integrated, wrote the authors, who were part of the Human Brain Project—a massive European endeavor to map and understand the brain—and specialize in decoding brain signals related to consciousness.

Not all authors agree on the specific brain mechanisms that allow us to perceive the outer world and construct an inner world of “self.” But by collaborating, they merged their ideas, showing that different theories aren’t necessarily mutually exclusive—in fact, they could be consolidated into a general framework of consciousness and even inspire new ideas that help unravel one of the brain’s greatest mysteries.

A hormone already present in the human body could be used to stop Alzheimer’s disease in its tracks, scientists have announced.

Researchers discovered that a small part of an appetite-suppressing hormone called leptin, which is present in everyone, can have dramatic effects on the brain, including stopping the development of Alzheimer’s disease in its earliest stages.

Their tests have shown that leptin can reduce the effects of two toxic proteins in the brain called amyloid and tau, which build up and lead to memory loss and the development of Alzheimer’s disease.

We acquired a rapidly preserved human surgical sample from the temporal lobe of the cerebral cortex. We stained a 1 mm³ volume with heavy metals, embedded it in resin, cut more than 5,000 slices at ∼30 nm and imaged these sections using a high-speed multibeam scanning electron microscope. We used computational methods to render the three-dimensional structure containing 57,216 cells, hundreds of millions of neurites and 133.7 million synaptic connections. The 1.4 petabyte electron microscopy volume, the segmented cells, cell parts, blood vessels, myelin, inhibitory and excitatory synapses, and 104 manually proofread cells are available to peruse online. Many interesting and unusual features were evident in this dataset. Glia outnumbered neurons 2:1 and oligodendrocytes were the most common cell type in the volume. Excitatory spiny neurons comprised 69% of the neuronal population, and excitatory synapses also were in the majority (76%). The synaptic drive onto spiny neurons was biased more strongly toward excitation (70%) than was the case for inhibitory interneurons (48%). Despite incompleteness of the automated segmentation caused by split and merge errors, we could automatically generate (and then validate) connections between most of the excitatory and inhibitory neuron types both within and between layers. In studying these neurons we found that deep layer excitatory cell types can be classified into new subsets, based on structural and connectivity differences, and that chandelier interneurons not only innervate excitatory neuron initial segments as previously described, but also each other’s initial segments. Furthermore, among the thousands of weak connections established on each neuron, there exist rarer highly powerful axonal inputs that establish multi-synaptic contacts (up to ∼20 synapses) with target neurons. Our analysis indicates that these strong inputs are specific, and allow small numbers of axons to have an outsized role in the activity of some of their postsynaptic partners.
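To give a feel for the bookkeeping behind figures like the 2:1 glia-to-neuron ratio and the 76% excitatory-synapse share, here is a minimal Python sketch that tallies cell classes and synapse polarity from a toy segmentation table. The field names and the handful of records are hypothetical stand-ins, not the dataset's actual schema; only the kinds of ratios being computed mirror those reported in the abstract.

```python
from collections import Counter

# Toy stand-ins for a reconstruction's cell and synapse tables.
# Field names and records are hypothetical; a real volume indexes
# tens of thousands of cells and >100 million synapses.
cells = [
    {"id": 1, "cls": "glia"},
    {"id": 2, "cls": "glia"},
    {"id": 3, "cls": "excitatory_neuron"},
    {"id": 4, "cls": "inhibitory_neuron"},
]
synapses = [
    {"pre": 3, "post": 4, "polarity": "excitatory"},
    {"pre": 4, "post": 3, "polarity": "inhibitory"},
    {"pre": 3, "post": 3, "polarity": "excitatory"},
]

# Cell-class composition, e.g. the glia-to-neuron ratio.
cell_counts = Counter(c["cls"] for c in cells)
neuron_total = cell_counts["excitatory_neuron"] + cell_counts["inhibitory_neuron"]
print("glia : neuron ratio =", cell_counts["glia"] / neuron_total)

# Synapse polarity, e.g. the excitatory fraction of all synapses.
syn_counts = Counter(s["polarity"] for s in synapses)
print("excitatory synapse fraction =",
      syn_counts["excitatory"] / sum(syn_counts.values()))
```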


Here’s a nice article discussing the progress of the brain-computer interface industry, some existing startups in the space, and where the industry may go in the future.


Fifty years after the term brain–computer interface was coined, the neurotechnology is being pursued by an array of start-up companies using a variety of technologies. But the path to clinical and commercial success remains uncertain.

In the vast and ever-evolving landscape of technology, neuromorphic computing emerges as a groundbreaking frontier, reminiscent of uncharted territories awaiting exploration. This novel approach to computation, inspired by the intricate workings of the human brain, offers a path to traverse the complex terrains of artificial intelligence (AI) and advanced data processing with unprecedented efficiency and agility.

Neuromorphic computing, at its core, is an endeavor to mirror the human brain’s architecture and functionality within the realm of computer engineering. It represents a significant shift from traditional computing methods, charting a course towards a future where machines not only compute but also learn and adapt in ways that are strikingly similar to the human brain. This technology deploys artificial neurons and synapses, creating networks that process information in a manner akin to our cognitive processes. The ultimate objective is to develop systems capable of sophisticated tasks, with the agility and energy efficiency that our brain exemplifies.
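To make the idea of “artificial neurons and synapses” concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of spiking unit that neuromorphic hardware typically emulates in silicon. The parameter values and the constant input current are illustrative assumptions, not tied to any particular chip or framework.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the basic spiking unit
# that neuromorphic systems implement. Parameter values are illustrative.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate an input current over time; emit a spike (True) whenever
    the membrane potential crosses threshold, then reset."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration: the potential decays toward rest and is
        # pushed up by the incoming current.
        v += dt / tau * (-(v - v_rest) + i_t)
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset  # fire and reset
        else:
            spikes.append(False)
    return spikes

# Example: a constant drive strong enough to make the neuron fire periodically.
spike_train = simulate_lif([1.5] * 100)
print(sum(spike_train), "spikes in 100 time steps")
```

In neuromorphic hardware, many such units run in parallel and communicate only through discrete spikes, an event-driven style of computation that is a large part of where the energy efficiency comes from.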

The genesis of neuromorphic computing can be traced back to the late 20th century, rooted in the pioneering work of researchers who sought to bridge the gap between biological brain functions and electronic computing. The concept gained momentum in the 1980s, driven by the vision of Carver Mead, a physicist who proposed the use of analog circuits to mimic neural processes. Since then, the field has evolved, fueled by advancements in neuroscience and technology, growing from a theoretical concept to a tangible reality with vast potential.

Over 350 million surgeries are performed globally each year. For most of us, it’s likely that at some point in our lives we’ll undergo a procedure that requires general anaesthesia.

Even though it is one of the safest medical practices, we still don’t have a complete understanding of precisely how anaesthetic drugs work in the brain.

In fact, it has largely remained a mystery since general anaesthesia was introduced into medicine over 180 years ago.