
Researchers may have identified the missing component in the chemistry of the Venusian clouds that would explain their color and splotchiness in the UV range, solving a long-standing mystery.

What are the clouds of Venus made of? Scientists know they consist mainly of sulfuric acid droplets, along with some water, chlorine, and iron, in concentrations that vary with height in the thick and hostile Venusian atmosphere. But until now they had been unable to identify the missing component that would explain the clouds’ patches and streaks, which are visible only in the UV range.

In a new study published in Science Advances, researchers from the University of Cambridge synthesised iron-bearing sulfate minerals that are stable under the harsh chemical conditions in the Venusian clouds.

The findings are published in Science.

“This is a good example of how understanding a mechanism helps you to develop an alternative therapy that’s more beneficial. Once we identified the mechanism causing the colitis, we could then develop ways to overcome this problem and prevent colitis while preserving the anti-tumor effect,” said senior study author Gabriel Nunez, M.D., Paul de Kruif Professor of Pathology at Michigan Medicine.

Recent research suggests that a number of neuronal characteristics, traditionally believed to stem from the cell body or soma, may actually originate from processes in the dendrites. This discovery has significant implications for the study of degenerative diseases and for understanding the different states of brain activity during sleep and wakefulness.

The brain is an intricate network comprising billions of neurons. Each neuron’s cell body, or soma, engages in simultaneous communication with thousands of other neurons through its synapses. These synapses act as links, facilitating the exchange of information. Additionally, each neuron receives incoming signals through its dendritic trees, which are highly branched and extend for great lengths, resembling the structure of a complex and vast arboreal network.

For the last 75 years, a core hypothesis of neuroscience has been that the basic computational element of the brain is the neuronal soma, while the long, ramified dendritic trees are merely cables that enable it to collect incoming signals from its thousands of connecting neurons. This long-standing hypothesis has now been called into question.

BrainChip, a neuromorphic computing device provider, will present a demonstration featuring its Akida neuromorphic processor operating on Microchip’s embedded platform at CES 2024. The demonstration will use two evaluation boards, Microchip’s SAMv71 Ultra board and SAMA7G54-EK board, with a particular focus on showcasing the efficiency of the Akida neuromorphic processor when integrated with a 32-bit microprocessor unit. BrainChip aims to highlight its capabilities in always-on machine learning tasks, including keyword spotting and visual wake words.

“We look forward to demonstrating the potential and ease of integrating Akida for always-on machine learning applications on embedded devices at CES,” says Rob Telson, vice president of Ecosystem and Partnerships at BrainChip.

Neuromorphic computing systems are designed to execute parallel and distributed processing, mimicking the neural structure and functioning of the human brain. BrainChip’s Akida is one such neuromorphic processor, designed for edge applications. It operates on an event-based principle, remaining dormant until activated, thereby reducing power consumption.
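To make the event-based principle concrete, here is a minimal, purely illustrative Python sketch (a toy model, not BrainChip’s Akida API): a leaky integrate-and-fire neuron that performs work only when an input spike arrives and stays idle between events.

```python
import math

# Toy event-driven (spiking) neuron -- illustrative only, not Akida's API.
# Computation happens only when an input event arrives; between events the
# neuron is dormant, which is where event-based designs save power.

class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron updated once per event."""

    def __init__(self, threshold=1.0, tau=20.0):
        self.threshold = threshold  # firing threshold
        self.tau = tau              # membrane time constant (ms)
        self.potential = 0.0        # membrane potential
        self.last_time = 0.0        # time of the previous event (ms)

    def on_event(self, t, weight):
        """Process one input spike at time t with synaptic weight `weight`."""
        # Decay the potential analytically over the elapsed interval instead
        # of simulating every intermediate timestep.
        self.potential *= math.exp(-(t - self.last_time) / self.tau)
        self.last_time = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # output spike: an event for the next layer
        return False

neuron = LIFNeuron()
for t, w in [(1.0, 0.6), (3.0, 0.6), (40.0, 0.6)]:  # sparse input events
    if neuron.on_event(t, w):
        print(f"spike at t={t} ms")
```

Because the cost of the update scales with the number of events rather than with elapsed time, sparse always-on workloads such as keyword spotting spend most of their time doing nothing, which is where the power savings come from.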

New advancements in technology frequently necessitate the development of novel materials – and thanks to supercomputers and advanced simulations, researchers can bypass the time-consuming and often inefficient process of trial-and-error.

The Materials Project, an open-access database founded at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) in 2011, computes the properties of both known and predicted materials. Researchers can focus on promising materials for future technologies – think lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, or faster transistors for the next generation of computers.
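As a sketch of how such a database is queried programmatically, the snippet below uses the Materials Project’s Python client (mp-api). The method path and field names follow recent client versions and may differ in yours; YOUR_API_KEY is a placeholder for a free key from materialsproject.org.

```python
# Hedged sketch: screen the Materials Project database for candidate
# solar-cell absorbers, i.e. materials whose computed band gap falls in a
# range suited to photovoltaics (~1.0-1.5 eV). Requires `pip install mp-api`.
from mp_api.client import MPRester

with MPRester("YOUR_API_KEY") as mpr:  # placeholder key
    docs = mpr.materials.summary.search(
        band_gap=(1.0, 1.5),  # eV range filter on the computed band gap
        fields=["material_id", "formula_pretty", "band_gap"],
    )
    for doc in docs[:10]:  # first ten matches
        print(doc.material_id, doc.formula_pretty, doc.band_gap)
```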

A new study led by scientists from Spain and Germany has found a fundamental asymmetry showing that heating is consistently faster than cooling, challenging conventional expectations and introducing the concept of “thermal kinematics” to explain this phenomenon. The findings are published in Nature Physics.

Traditionally, heating and cooling, fundamental processes in thermodynamics, have been perceived as symmetric, following similar pathways.

On a microscopic level, heating involves injecting energy into individual particles, intensifying their motion. On the other hand, cooling entails the release of energy, dampening their motion. However, one question has always remained: Why is heating more efficient than cooling?

Time is one of those things that most of us take for granted. We spend our lives portioning it into work-time, family-time, and me-time. Rarely do we sit and think about how and why we choreograph our lives through this strange medium. A lot of people only appreciate time when they have an experience that makes them realize how limited it is.

My own interest in time grew from one of those “time is running out” experiences. Eighteen years ago, while at university, I was driving down a country lane when another vehicle strayed onto my side of the road and collided with my car. I can still vividly remember the way in which time slowed down, grinding to a near halt, in the moments before the impact. Time seemed to stand still. The elasticity of time and its ability to wax and wane in different situations shone out like never before. From that moment I was hooked.

I have spent the last 15 years trying to answer questions such as: Why does time slow down in near-death situations? Does time really pass more quickly as you get older? How do our brains process time?

Finally, after more than a decade of work and studying around 1,500 Type Ia supernovas, the Dark Energy Survey has produced a new best measurement of w. We found w = −0.80 ± 0.18, so it’s somewhere between −0.62 and −0.98.

This is a very interesting result. It is close to −1, but not quite exactly there. To be the cosmological constant, or the energy of empty space, it would need to be exactly −1.
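For context, w is the dark-energy equation-of-state parameter: the ratio of dark energy’s pressure to its energy density. In standard cosmology:

```latex
% w relates dark energy's pressure p to its energy density \rho;
% a cosmological constant (vacuum energy) has exactly w = -1.
p = w\,\rho c^{2}, \qquad w_{\Lambda} = -1,
\qquad w = -0.80 \pm 0.18 \;\Rightarrow\; w \in [-0.98,\,-0.62]
```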

Where does this leave us? With the idea that a more complex model of dark energy may be needed, perhaps one in which this mysterious energy has changed over the life of the universe.