
Can smartphone apps be used to monitor a user’s mental health? This is what a recently submitted study scheduled to be presented at the 2024 ACM CHI Conference on Human Factors in Computing Systems hopes to address, as a collaborative team of researchers from Dartmouth College has developed a smartphone app known as MoodCapture that can evaluate signs of depression using the phone’s front-facing camera. This study holds the potential to help scientists, medical professionals, and patients better understand how to identify signs of depression so that proper evaluation and treatment can be provided.

For the study, the researchers enlisted 177 participants in a 90-day trial that used their phones’ front-facing cameras to capture facial images throughout their daily lives, including while the participants answered the survey prompt, “I have felt down, depressed, or hopeless.” All participants consented to the images being taken at random times, not only when they used the camera to unlock their phones. During the study period, the researchers obtained more than 125,000 images and also accounted for the surrounding environment in their final analysis. In the end, the researchers found that MoodCapture identified early signs of depression with 75 percent accuracy.
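The authors have not released their code here, but conceptually the evaluation resembles a standard supervised pipeline: facial images become feature vectors, the survey responses provide the labels, and a classifier is scored on held-out data. The sketch below is a hypothetical illustration of that pattern; it uses random numbers in place of real facial-image features, so its accuracy hovers around chance rather than the study’s reported 75 percent.

```python
# Hypothetical sketch of a MoodCapture-style evaluation. Real facial images
# would first be converted to feature vectors (e.g., by a face-analysis
# model); random numbers stand in for those features here, so the printed
# accuracy will be near chance, unlike the study's reported 75 percent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_images, n_features = 1000, 64                # stand-ins for ~125,000 images
X = rng.normal(size=(n_images, n_features))    # placeholder image features
y = rng.integers(0, 2, size=n_images)          # survey label: "felt down" yes/no

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```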

“This is the first time that natural ‘in-the-wild’ images have been used to predict depression,” said Dr. Andrew Campbell, who is a professor in the Computer Science Department at Dartmouth and a co-author on the study. “There’s been a movement for digital mental-health technology to ultimately come up with a tool that can predict mood in people diagnosed with major depression in a reliable and non-intrusive way.”

The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation, and it is this style of computation that distinguishes neuromorphic systems from conventional computing systems. The goal of neuromorphic engineering (NE) is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently, and this interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two complementary goals: a scientific goal, to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); and an engineering goal, to exploit the known properties of biological systems to design and implement efficient devices for engineering applications.

Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Compared to conventional CPUs, these emulators are therefore beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches they use, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.
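To make “brain-like computation” concrete, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, the canonical spiking unit that many of these emulators realize directly in analog/digital silicon. The parameters are illustrative and not drawn from any particular chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates an input drive, and emits a spike (then
# resets) whenever it crosses threshold. Parameters are illustrative only.

dt = 1e-4            # integration time step (s)
T = 0.1              # simulated duration (s)
tau_m = 20e-3        # membrane time constant (s)
v_rest, v_th, v_reset = 0.0, 1.0, 0.0   # rest, threshold, reset potentials
i_in = 1.5           # constant input drive (in units of potential)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Leak toward rest plus integration of the input drive.
    v += dt / tau_m * (-(v - v_rest) + i_in)
    if v >= v_th:                    # threshold crossing: spike, then reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T:.1f} s of simulated time")
```

A neuromorphic chip implements this same leak-integrate-fire dynamic with transistor physics rather than a software loop, which is where the energy and parallelism advantages come from.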

“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints. It utilizes over 100 billion neurons and 100 trillion synapses for achieving these specifications. Even the existing supercomputing platforms are unable to demonstrate full cortex simulation in real-time with the complex detailed neuron models. For example, for mouse-scale (2.5 × 10⁶ neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012). The simulation of a human-scale cortical model (2 × 10¹⁰ neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10¹⁸ FLOPS) and as much power as a quarter-million households (0.5 GW).
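For intuition, the two penalties in the mouse-scale comparison compound: drawing 40,000 times more power while running 9,000 times slower means roughly 3.6 × 10⁸ times more energy per simulated second. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope reading of the Eliasmith et al. (2012) figures:
# the PC draws 40,000x the power *while also* taking 9,000x longer to
# cover the same biological time, so the energy penalties multiply.
power_ratio = 40_000
slowdown = 9_000
energy_ratio = power_ratio * slowdown
print(f"energy per simulated second: {energy_ratio:.1e}x a mouse brain")  # 3.6e+08

# Sanity check on the human-scale projection: 0.5 GW spread over a
# quarter-million households is about 2 kW per household.
print(f"per-household draw: {0.5e9 / 250_000 / 1e3:.1f} kW")  # 2.0 kW
```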

The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data-processing requirements. Neuromorphic computing is an alternative solution inspired by the computational capabilities of the brain. The observation that the brain operates on analog principles, with a physics of neural computation fundamentally different from the digital principles of traditional computing, initiated investigations in the field of neuromorphic engineering (Mead, 1989a). Silicon neurons are hybrid analog/digital very-large-scale integration (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks using silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer. Such hardware emulations are much more energy efficient than computer simulations, and thus suitable for real-time, large-scale neural emulations.

Summary: Researchers have developed the first stem cell culture method that accurately models the early stages of the human central nervous system (CNS), marking a significant breakthrough in neuroscience. This 3D human organoid system simulates the development of the brain and spinal cord, offering new possibilities for studying human brain development and diseases.

By using patient-derived stem cells, the model can potentially lead to personalized treatment strategies for neurological and neuropsychiatric disorders. The innovation opens new doors for understanding the intricacies of the human CNS and its disorders, surpassing the capabilities of previous models.

A new study from a team of McGill University and Vanderbilt University researchers is shedding light on the molecular origins of some forms of autism and intellectual disability.

For the first time, researchers captured atomic-resolution images of the fast-moving ionotropic glutamate receptor (iGluR) as it transports calcium. iGluRs and their ability to transport calcium are vitally important for many brain functions, such as vision and the processing of other sensory information. Calcium also brings about changes in the signaling capacity of iGluRs and nerve connections, key cellular events that underlie our ability to learn new skills and form memories.

iGluRs are also key players in brain function, and their dysfunction has been shown to give rise to some forms of autism and intellectual disability. However, how iGluRs trigger biochemical changes in the brain’s physiology by transporting calcium has remained poorly understood.