
Finding the hypothetical particle known as the axion could mean learning for the first time what happened in the Universe a second after the Big Bang, suggests a new study published in Physical Review D.

How far back into the Universe’s past can we look today? In the electromagnetic spectrum, observations of the Cosmic Microwave Background (CMB) let us see back almost 14 billion years, to when the Universe cooled sufficiently for protons and electrons to combine and form neutral hydrogen. The CMB has taught us an enormous amount about the evolution of the cosmos, but its photons were released roughly 400,000 years after the Big Bang, making it extremely challenging to learn about the history of the Universe before this epoch.

To open a new window onto this era, a trio of theoretical researchers looked beyond photons and into the realm of hypothetical particles known as axions, which may have been emitted in the first second of the Universe’s history. The team comprises Hitoshi Murayama, Principal Investigator at the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), MacAdams Professor of Physics at the University of California, Berkeley, and senior faculty scientist at Lawrence Berkeley National Laboratory; Jeff Dror, Lawrence Berkeley National Laboratory physics researcher and University of California, Berkeley, postdoctoral fellow (now at University of California, Santa Cruz); and Nicholas Rodd, UC Berkeley Miller Research Fellow.

“What’s so exciting about this result is that it suggests that these types of nanowire networks can be tuned into regimes with diverse, brain-like collective dynamics, which can be leveraged to optimize information processing,” said Zdenka Kuncic from the University of Sydney in a press release.

Today’s deep neural networks already mimic one aspect of the brain: its highly interconnected network of neurons. But artificial neurons behave very differently from biological ones, as they only carry out computations. In the brain, neurons are also able to remember their previous activity, which then influences their future behavior.

This in-built memory is a crucial aspect of how the brain processes information, and a major strand in neuromorphic engineering focuses on trying to recreate this functionality. This has resulted in a wide range of designs for so-called “memristors”: electrical components whose response depends on the previous signals they have been exposed to.
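The history-dependent response described above can be sketched with a toy state-dependent resistor. The `ToyMemristor` class below and all of its parameter values are illustrative inventions (a simplified linear-drift model), not a model of the actual nanowire devices or of any particular memristor design:

```python
# Toy memristor: a resistor whose resistance depends on the history of
# applied voltage. Illustrative only; parameters are made up.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16000.0, state=0.5, drift=50.0):
        self.r_on = r_on      # resistance when fully "on" (ohms)
        self.r_off = r_off    # resistance when fully "off" (ohms)
        self.state = state    # internal state w in [0, 1] -- the "memory"
        self.drift = drift    # how strongly current moves the state

    def resistance(self):
        # Resistance interpolates between r_on and r_off based on state.
        return self.state * self.r_on + (1.0 - self.state) * self.r_off

    def apply_voltage(self, v, dt=1.0):
        """Apply voltage v for time dt; update the state and return current."""
        i = v / self.resistance()
        # Charge that flowed shifts the internal state (clamped to [0, 1]).
        self.state = min(1.0, max(0.0, self.state + self.drift * i * dt))
        return i

m = ToyMemristor()
r_before = m.resistance()
for _ in range(100):
    m.apply_voltage(1.0)   # repeated positive pulses drive the state up
r_after = m.resistance()
# The device now conducts better: its response encodes past activity.
```

The key property is that `resistance()` returns a different value after the pulses than before them: the component's future behavior depends on the signals it was previously exposed to, which is exactly the memory effect memristors contribute to neuromorphic designs.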

Long vid. Slight annotation in the comments. A few takeaways I liked: we need to move from mouse data to human data; people’s attitudes toward life extension should change drastically soon; some in this group already have human data, part of which has been released with more to come soon, and it is worth following; Sinclair thinks he can start primate trials this year; and overall, everyone is optimistic.


A couple of weeks ago Avi Roy, alongside Nathan Cheng & Laura Minquini, hosted the Longevity Panel discussion, which assembled some of the leading scientists currently working on reversing aging.

The discussion was intended to illuminate how they are approaching longevity and whether we are any closer to achieving it.

Voice assistants are a controversial technology: some believe they are too intrusive on users’ privacy and do not provide significantly useful features compared to a conventional computer or smartphone. Their use becomes increasingly questionable given the constant reports of data collection. The most recent of these reports points out that Google Assistant can listen to users’ conversations even without being invoked.

The company had already admitted that it records some of its users’ conversations, though it previously claimed this was impossible unless the user said “OK Google” aloud. The new report asserts, however, that it is not necessary for the user to activate the assistant with that well-known voice command.

According to Android Authority, a Google representative admitted that the virtual assistant records audio even when the tool is not in use, and acknowledged that employees have access to this information. However, Google clarified that this recording is not done covertly and insisted that its staff are not listening in on users’ conversations, adding that employees only have access to limited portions of the audio.

A recent security report states that it is possible to hijack sessions on Google Compute Engine virtual machines and gain root access through a DHCP attack. While deploying this attack is difficult in practice, a successful exploit can be highly effective.

The report, published on GitHub, describes a flaw that could allow threat actors to take control of virtual machines because these deployments rely on ISC DHCP software, which employs a very weak random number generator. A successful attack floods the virtual machine with DHCP traffic, forcing it to use a fake metadata server controlled by the attacker.
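The core weakness can be illustrated with a toy reproduction. The sketch below is hypothetical Python, not the ISC DHCP code: it shows how seeding a pseudo-random generator from low-entropy inputs (a roughly known timestamp plus a small process ID) lets an attacker brute-force the value, here standing in for a DHCP transaction ID (XID), and so race the legitimate server with a spoofed response:

```python
# Toy illustration of a weak PRNG seed. The seed-combination scheme and
# all numbers below are invented for demonstration; the real ISC
# generator and its inputs differ.
import random

def make_xid(unix_time, pid):
    # Victim: seed a PRNG from low-entropy inputs, draw a 32-bit "XID".
    rng = random.Random((unix_time << 16) | pid)
    return rng.getrandbits(32)

# Victim generates an XID at a roughly known time with an unknown PID.
victim_xid = make_xid(unix_time=1_700_000_000, pid=1234)

# Attacker: brute-force the small search space (time window x PID range).
def guess_xid(target, time_window, pid_range):
    for t in time_window:
        for pid in pid_range:
            if make_xid(t, pid) == target:
                return t, pid
    return None

hit = guess_xid(victim_xid,
                range(1_699_999_990, 1_700_000_010),  # ~20-second window
                range(1, 4096))                        # plausible PIDs
# A hit means the attacker can pre-compute the XID and answer the
# victim's DHCP request before the real server does.
```

The search space here is only about 80,000 candidates, which a laptop exhausts in well under a second; a cryptographically strong, unpredictable seed would force the attacker to guess among all 2^32 possible XIDs instead.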

If the attack is successful, the virtual machine uses the unauthorized server for its configuration instead of an official Google one, which would allow cybercriminals to log in to the affected device with root access.