
Finding ways to reduce the invasiveness of brain implants could greatly expand their potential applications. A new device tested in mice sits on the brain’s surface yet can still read activity deep within, and it could lead to safer and more effective ways to record neural activity.

There are already a variety of technologies that allow us to peer into the inner workings of the brain, but they all come with limitations. Minimally invasive approaches include functional MRI, where an MRI scanner is used to image changes in blood flow in the brain, and EEG, where electrodes placed on the scalp pick up the brain’s electrical signals.

The former, however, requires the patient to sit in an MRI machine, and the latter is too imprecise for most applications. The gold standard approach involves inserting electrodes deep into brain tissue to obtain the highest quality readouts. But this requires a risky surgical procedure, and scarring and the inevitable shifting of the electrodes can cause the signal to degrade over time.

Summary: The year 2023 witnessed groundbreaking discoveries in neuroscience, offering unprecedented insights into the human brain.

From animal-free brain organoids to the effects of optimism on cognitive skills, these top 10 articles have unveiled the mysteries of the mind.

Research revealed the risks of dietary trends like “dry scooping” and the impact of caffeine on brain plasticity. Additionally, the year showcased the potential of mushroom compounds for memory enhancement and the unexpected influence of virtual communication on brain activity.

In Neuromorphic Computing Part 2, we dive deeper into mapping neuromorphic concepts onto chips built from silicon. Given the state of modern neuroscience and chip design, the tools the industry is working with are simply too different from biology. Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab, explains the process and challenge of creating a chip that can replicate some of the form and functions of biological neural networks.

Mike’s leadership in this specialized field allows him to share the latest insights into the promising future of neuromorphic computing here at Intel. Let’s explore nature’s circuit design, refined over a billion years of evolution, and today’s CMOS semiconductor manufacturing technology, which supports incredible computing efficiency, speed, and intelligence.
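To make the idea of replicating biological neural networks in silicon more concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simplified spiking unit neuromorphic hardware implements. This is not Intel’s Loihi code; the function name and all constants are illustrative assumptions.

```python
# A minimal sketch (assumed, illustrative constants; not Intel's Loihi code)
# of a leaky integrate-and-fire neuron, the simplified spiking unit that
# neuromorphic chips implement in silicon.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Return the time steps at which the neuron emits a spike."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in          # leaky integration of the input
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spike_times.append(t)
            v = v_rest               # reset the membrane potential
    return spike_times

if __name__ == "__main__":
    # A constant weak drive: the neuron charges up, spikes, resets, repeats.
    print(simulate_lif([0.15] * 50))
```

A chip updates many such units in parallel and, as the chapter on sparse distributed asynchronous communication below suggests, only sends messages when a spike actually occurs, rather than on every clock tick.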

Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.

Jump to Chapters:
0:00 Welcome to Neuromorphic Computing.
0:30 How to architect a chip that behaves like a brain.
1:29 Advantages of CMOS semiconductor manufacturing technology.
2:18 Objectives in our design toolbox.
2:36 Sparse distributed asynchronous communication.
4:51 Reaching the level of efficiency and density of the brain.
6:34 Loihi 2: a fully digital chip implemented in a standard CMOS process.
6:57 Asynchronous vs Synchronous.
7:54 Function of the core’s memory.
8:13 Spikes and Table Lookups.
9:24 Loihi learning process.
9:45 Learning rules, input and the network.
10:12 The challenge of architecture and programming today.
10:45 Recent publications to read.

Architecture All Access Season 2 playlist — • Architecture All Access Season 2

Intel Wireless Technology — https://intel.com/wireless.

Computer simulations of complex systems provide an opportunity to study their time evolution under user control. Simulations of neural circuits are an established tool in computational neuroscience. Through systematic simplification on spatial and temporal scales they provide important insights into the time evolution of networks, which in turn leads to an improved understanding of brain functions like learning, memory, or behavior. Simulations of large networks exploit the concept of weak scaling, where the massively parallel biological network structure is naturally mapped onto computers with very large numbers of compute nodes. However, this approach suffers from fundamental limitations. The power consumption is approaching prohibitive levels and, more seriously, the bridging of time scales from milliseconds to years, present in the neurobiology of plasticity, learning, and development, is inaccessible to classical computers. In the keynote I will argue that these limitations can be overcome by extreme approaches to weak and strong scaling based on brain-inspired computing architectures.
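A rough back-of-the-envelope calculation illustrates the time-scale gap the abstract refers to. The step size and the one-year target below are assumptions chosen only for illustration.

```python
# Rough sketch of the time-scale problem: how many integration steps a
# conventional step-by-step simulation needs to cover one year of
# biological time. The 0.1 ms step is an assumed, typical value.

STEP_MS = 0.1                          # assumed simulation time step in milliseconds
YEAR_MS = 365 * 24 * 3600 * 1000       # one year of biological time in milliseconds

steps_per_year = YEAR_MS / STEP_MS
print(f"{steps_per_year:.2e} time steps per simulated year")   # ~3.15e11 steps
```

Even before the size of the network is taken into account, hundreds of billions of sequential steps per simulated year is what makes plasticity, learning, and development over years effectively out of reach for classical step-by-step simulation.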

Bio: Karlheinz Meier received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25 years of experience in experimental particle physics with contributions to 4 major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. For the ATLAS experiment at the Large Hadron Collider (LHC) he led a 15-year effort to design, build and operate an electronics data processing system providing on-the-fly data reduction by 3 orders of magnitude, enabling, among other achievements, the discovery of the Higgs boson. Following scientific staff positions at DESY and CERN he was appointed full professor of physics at Heidelberg University in 1992. In Heidelberg he co-founded the Kirchhoff-Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA). Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field, such as the conception of a description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems, and wafer-scale integration for their implementation. He led 2 major European initiatives, FACETS and BrainScaleS, that both demonstrated the rewarding interdisciplinary collaboration of neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP), which was approved in 2013. In the HBP he leads the subproject on neuromorphic computing with the goal of establishing brain-inspired computing paradigms as tools for neuroscience and generic methods for inference from large data volumes.

When young, these neurons signal fatty tissues to release energy that fuels the brain. With age, this line of communication breaks down. Fat cells can no longer orchestrate their many roles, and neurons struggle to pass information along their networks.

Using genetic and chemical methods, the team found a marker for these neurons—a protein called Ppp1r17 (catchy, I know). Changing the protein’s behavior in aged mice with genetic engineering extended their life span by roughly seven percent. For an average 76-year life span in humans, the increase translates to over five years.
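A quick check of that translation, using the figures quoted above (the 76-year value is the article’s assumed average human life span):

```python
# Sanity check of the life-span arithmetic quoted above.
average_life_span_years = 76     # assumed average human life span from the article
relative_extension = 0.07        # "roughly seven percent", as reported in the mice

extra_years = average_life_span_years * relative_extension
print(f"{extra_years:.1f} additional years")   # ~5.3, i.e. "over five years"
```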

The treatment also improved the mice’s health. Mice love to run, but their vigor plummets with age. Reactivating the neurons in elderly mice revived their motivation, transforming them from couch potatoes into impressive joggers.

In the fifth decade of life, our brains start to undergo a radical “rewiring” that results in diverse networks becoming more integrated over the ensuing decades. https://bigthink.com/neuropsych/great-brain-rewiring-after-age-40/ (Big Think)


In a systematic review published last year in the journal Psychophysiology, researchers from Monash University in Australia swept through the scientific literature, seeking to summarize how the connectivity of the human brain changes over our lifetimes. The gathered evidence suggests that in the fifth decade of life (that is, after a person turns 40), the brain starts to undergo a radical “rewiring” that results in diverse networks becoming more integrated and connected over the ensuing decades, with accompanying effects on cognition.

Since the turn of the century, neuroscientists have increasingly viewed the brain as a complex network, consisting of units broken down into regions, sub-regions, and individual neurons. These units are connected structurally, functionally, or both. With increasingly advanced scanning techniques, neuroscientists can observe the parts of subjects’ brains that “light up” in response to stimuli or when simply at rest, providing a superficial look at how our brains are synced up.
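As a loose illustration of that network view, here is a minimal sketch of how functional connectivity is often estimated from per-region activity time series. The region count and the random data stand in for real recordings and are purely illustrative.

```python
# Illustrative sketch only: estimating a functional connectivity matrix from
# per-region activity time series. Random data stands in for real recordings.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 8, 200
activity = rng.standard_normal((n_regions, n_timepoints))  # stand-in for fMRI signals

# Functional connectivity: correlation of each region's time series with every other's.
connectivity = np.corrcoef(activity)                       # shape: (n_regions, n_regions)

# One simple summary of "integration": mean connectivity between distinct regions.
off_diagonal = connectivity[~np.eye(n_regions, dtype=bool)]
print(f"mean between-region connectivity: {off_diagonal.mean():.3f}")
```

In studies like those reviewed, greater integration roughly corresponds to stronger connectivity between regions belonging to different networks, rather than connectivity staying concentrated within each network.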

The Monash University team pored over 144 studies that used these imaging techniques to probe the brains of tens of thousands of subjects. From this analysis, the researchers gleaned a general trend in how the networked brain changes over our lifetimes.

Research indicates enhanced mental function in individuals who maintain an active lifestyle, engage in social interactions, and manage blood pressure and diabetes effectively.

As federal approval for more Alzheimer’s disease medications progresses, a recent study conducted by UC San Francisco and Kaiser Permanente Washington reveals that tailored health and lifestyle modifications can postpone or prevent memory deterioration in older adults at increased risk.

The two-year study compared cognitive scores, risk factors, and quality of life among 172 participants, half of whom had received personalized coaching to improve their health and lifestyle in areas believed to raise the risk of Alzheimer’s, such as uncontrolled diabetes and physical inactivity. These participants experienced a modest boost in cognitive test scores, amounting to a 74% greater improvement than the non-intervention group.

Cognitive neuroscientist Clayton Curtis describes an elegant experiment that leads us to ask: Does the brain honor the distinction implied in most textbooks between spatial attention, motor control, and spatial working memory?

For more info/content, please visit: https://postlab.psych.wisc.edu/cog-ne

Relevant paper:
Jerde, T. A., Merriam, E. P., Riggall, A. C., Hedges, J. H., & Curtis, C. E. (2012). Prioritized maps of space in human frontoparietal cortex. Journal of Neuroscience, 32(48), 17382–17390.

Have you ever wondered why SSRIs take time to show effects? A new study has delved into why antidepressants like SSRIs take weeks to start working and how this may impact mental health care.


SSRIs, or Selective Serotonin Reuptake Inhibitors, belong to a category of antidepressant drugs designed to elevate serotonin levels in the brain. Notable examples of SSRIs include fluoxetine (Prozac), sertraline (Zoloft), and escitalopram (Lexapro).

These medications generally have few unpleasant side effects and can be highly effective in treating various mood disorders, including depression and certain anxiety disorders. However, one significant drawback of SSRIs is the delayed onset of their therapeutic effects: they often take several weeks to produce noticeable improvements in mood.

This extended period before “kicking in” poses challenges for patients and healthcare providers. Yet, the reason behind this lag in action is not well understood.