
Can smartphone apps be used to monitor a user’s mental health? This is the question a recently submitted study, scheduled to be presented at the 2024 ACM CHI Conference on Human Factors in Computing Systems, hopes to address. A collaborative team of researchers from Dartmouth College has developed a smartphone app known as MoodCapture, which evaluates signs of depression in a user via the front-facing camera. The study holds the potential to help scientists, medical professionals, and patients better identify signs of depression so that proper evaluation and treatment can follow.

For the study, the researchers enlisted 177 participants in a 90-day trial in which the front-facing camera captured facial images throughout their daily lives, including while participants responded to the survey prompt, “I have felt down, depressed, or hopeless.” All participants consented to the images being taken at random times, not only when they used the camera to unlock their phones. Over the study period, the researchers obtained more than 125,000 images and accounted for the surrounding environment in their final analysis. In the end, they found that MoodCapture identified early signs of depression with 75 percent accuracy.

“This is the first time that natural ‘in-the-wild’ images have been used to predict depression,” said Dr. Andrew Campbell, who is a professor in the Computer Science Department at Dartmouth and a co-author on the study. “There’s been a movement for digital mental-health technology to ultimately come up with a tool that can predict mood in people diagnosed with major depression in a reliable and non-intrusive way.”

Patients who suffer a stroke after heart surgery are less than half as likely to receive potentially lifesaving treatments as stroke patients who haven’t undergone heart surgery, according to a new study in JAMA Neurology.


A Yale study finds that when ischemic stroke follows a heart procedure, the most effective treatment is often not administered, for multiple reasons.

Brain-like computation is what distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems using efficient, parallel, low-power computation, and the goal of neuromorphic engineering (NE) is to design systems capable of such computation. Numerous large-scale neuromorphic projects have emerged recently, and the interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two goals: a scientific goal, to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); and an engineering goal, to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Compared to conventional CPUs, these emulators are therefore beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches they use, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints. It utilizes over 100 billion neurons and 100 trillion synapses to achieve these specifications. Even existing supercomputing platforms are unable to demonstrate full-cortex simulation in real time with complex, detailed neuron models. For example, for mouse-scale (2.5 × 10^6 neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012). The simulation of a human-scale cortical model (2 × 10^10 neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10^18 flops) and as much power as a quarter-million households (0.5 GW).
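The two mouse-brain factors quoted above compound: since energy is power multiplied by time, a machine that draws 40,000 times more power and also takes 9,000 times longer to simulate the same biological interval spends the product of the two per simulated second. A back-of-the-envelope sketch (the combination step is our arithmetic, not a figure from the cited paper):

```python
# Combine the two factors reported for mouse-scale cortical simulation
# (Eliasmith et al., 2012): a PC draws 40,000x more power than a mouse brain
# AND runs 9,000x slower than biological real time.
power_ratio = 40_000   # PC power draw relative to a mouse brain
slowdown = 9_000       # wall-clock time per second of simulated biology

# Energy = power x time, so the per-simulated-second energy gap multiplies.
energy_gap = power_ratio * slowdown
print(f"Energy per simulated biological second: {energy_gap:.1e}x the brain")
```

This roughly 3.6 × 10^8-fold energy gap is the motivation for the hardware emulators discussed next.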

The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data processing requirements. Neuromorphic computing is an alternative solution that is inspired by the computational capabilities of the brain. The observation that the brain operates on analog principles of the physics of neural computation that are fundamentally different from digital principles in traditional computing has initiated investigations in the field of neuromorphic engineering (NE) (Mead, 1989a). Silicon neurons are hybrid analog/digital very-large-scale integrated (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks using silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer. Such hardware emulations are much more energy efficient than computer simulations, and thus suitable for real-time, large-scale neural emulations.
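The silicon neurons described above emulate simple spiking dynamics directly in analog/digital circuitry. As a software reference point, here is a minimal leaky integrate-and-fire (LIF) neuron, the kind of simplified model such emulators typically implement; all parameter values (time constant, resistance, thresholds, input current) are illustrative assumptions, not taken from any chip covered in the review:

```python
def simulate_lif(input_current, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
    """Euler-integrate dV/dt = (-(V - v_rest) + R*I) / tau, spiking at threshold.

    input_current: sequence of input currents (A), one value per time step dt (s).
    Returns the list of spike times in seconds.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration of the membrane potential toward v_rest + R*I.
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)  # record the spike time
            v = v_reset                    # reset after the spike
    return spike_times

# Drive the neuron with a constant 3 nA current for 100 ms (1,000 steps of 0.1 ms).
spikes = simulate_lif([3e-9] * 1000)
print(f"{len(spikes)} spikes in 100 ms")
```

With these parameters the steady-state potential (v_rest + R·I = -40 mV) sits above threshold, so the neuron fires periodically, about every tau·ln 3 ≈ 22 ms; a hardware emulator evaluates the same dynamics in parallel physical circuits rather than in a sequential loop.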

Vanderbilt researchers have developed a new nanoparticle that can more effectively get drugs inside cells to boost the immune system and fight diseases such as cancer.

The research is led by John Wilson, an associate professor at Vanderbilt and a corresponding author on the paper about the research, which was recently published in the journal Nanoscale.

Wilson, who is Principal Investigator of the Immunoengineering Lab at Vanderbilt and a Chancellor Faculty Fellow, and his team created a polymeric nanoparticle that can penetrate cell membranes and get drugs into the cytosol—or liquid—inside cells.

Evo is a long-context biological foundation model that generalizes across the fundamental languages of biology: DNA, RNA, and proteins.


Evo is capable of both prediction tasks and generative design, from the molecular to the whole-genome scale.

We are all ultimately overtaken by age, yet for some people, the right genes can slow the journey into old age considerably.

A few years ago, Italian researchers made a notable discovery about people who survive well into their 90s and beyond: they frequently possess a variant of the gene BPIFB4 that guards against cardiovascular disease and keeps the heart in excellent condition for a longer period of time.

It’s not often you encounter a device that looks like it came straight out of a movie set. But Lenovo’s Project Crystal, supposedly the world’s first laptop with a transparent microLED display, is an example of sci-fi come to life.

Currently there are no plans to turn Project Crystal into a retail product. Instead, Lenovo’s latest concept device was commissioned by its ThinkPad division to explore the potential of transparent microLED panels and AI integration. The most obvious use case would be sharing information in a setting like a doctor’s office or a hotel front desk: instead of needing to flip a screen around, you could simply reverse the display via software, allowing anyone on the other side to see it while getting an in-depth explanation.

There’s some relief for people with severe food allergies. A study published in the New England Journal of Medicine reports that the drug Xolair allows people with allergies to tolerate higher doses of allergenic foods before developing a reaction after accidental exposure. Geoff Bennett discussed more with the study’s principal investigator, Dr. Robert Wood of the Johns Hopkins Children’s Center.


Researchers in China have reported the first large-scale study characterizing the proteomics and phosphoproteomics of small cell lung cancer (SCLC) clinical cohorts, providing a comprehensive picture of the proteogenomics landscape of SCLC.

The team is led by Prof. Zhang Peng from Shanghai Pulmonary Hospital of Tongji University, Prof. Zhou Hu from the Shanghai Institute of Materia Medica of the Chinese Academy of Sciences (CAS), and Prof. Gao Daming and Prof. Ji Hongbin from the Center for Excellence in Molecular Cell Science, Shanghai Institute of Biochemistry and Cell Biology of CAS.

This study, published in Cell, reveals the molecular features of SCLC, proposes new molecular subtypes and targeted, personalized treatment strategies, and lays a solid foundation for a better understanding of the underlying mechanisms and for improving clinical therapeutic strategies for SCLC.