
Physicists at the University of Bonn have experimentally proven that an important theorem of statistical physics applies to so-called “Bose-Einstein condensates.” Their results now make it possible to measure certain properties of the quantum “superparticles” and deduce system characteristics that would otherwise be difficult to observe. The study has now been published in Physical Review Letters.

Suppose there is a container in front of you filled with an unknown liquid. Your goal is to find out how much the particles in it (atoms or molecules) move back and forth randomly due to their thermal energy. However, you do not have a microscope with which to visualize these position fluctuations, known as “Brownian motion.”

It turns out you do not need one at all: you can simply tie an object to a string and pull it through the liquid. The more force you have to apply, the more viscous the liquid. And the more viscous it is, the less the particles in it change their position on average. The viscosity at a given temperature can therefore be used to predict the extent of the fluctuations.
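This link between drag and random fluctuations is the content of the fluctuation-dissipation theorem; for a small sphere in a simple liquid it takes the familiar Stokes-Einstein form. A minimal sketch (the bead size and water viscosity below are illustrative assumptions, not values from the study):

```python
# Stokes-Einstein relation: D = k_B * T / (6 * pi * eta * r).
# Higher viscosity eta -> smaller diffusion coefficient D, i.e. smaller
# random position fluctuations, as the article describes.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K


def diffusion_coefficient(temperature_k: float, viscosity_pa_s: float,
                          radius_m: float) -> float:
    """Diffusion coefficient of a sphere in a liquid (Stokes-Einstein)."""
    return K_B * temperature_k / (6 * math.pi * viscosity_pa_s * radius_m)


# Example: a 1-micron bead in water at 298 K (eta ~ 0.89 mPa*s).
D = diffusion_coefficient(298.0, 0.89e-3, 1e-6)
print(f"D = {D:.3e} m^2/s")
```

Halving the viscosity doubles the diffusion coefficient, which is exactly the “measure the drag, infer the fluctuations” trick the article describes.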

In recent years, many computer scientists have been exploring the notion of the metaverse, an online space in which users can access different virtual environments and immersive experiences using VR and AR headsets. While navigating the metaverse, users might also share personal data, whether to purchase goods, connect with other users, or for other purposes.

Past studies have consistently highlighted the limitations of password authentication systems, as there are now many cyber-attacks and strategies for cracking them. Password-based authentication would therefore be far from ideal for securing users navigating the metaverse.

This inspired a team of researchers at VIT-AP University in India to create MetaSecure, a password-less authentication system for the metaverse. This system, introduced in a paper pre-published on arXiv, combines three different authentication techniques, including device attestation and physical security keys.
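The paper's actual protocol is not reproduced here, but the core idea of combining independent password-less factors can be sketched. Every function and token name below is a hypothetical placeholder, not MetaSecure's API:

```python
# Hypothetical sketch of multi-factor, password-less authentication in the
# spirit of MetaSecure: a session is granted only if every independent
# factor verifies. All names here are illustrative assumptions.


def verify_device_attestation(device_token: str) -> bool:
    # Placeholder: a real system would validate a signed hardware
    # attestation from the headset or device.
    return device_token == "trusted-device"


def verify_security_key(key_response: str, challenge: str) -> bool:
    # Placeholder: a real system would verify a FIDO2-style signature
    # computed by the physical security key over a server challenge.
    return key_response == f"signed:{challenge}"


def authenticate(device_token: str, key_response: str, challenge: str) -> bool:
    # Password-less: access requires all factors; there is no shared
    # secret for an attacker to crack or phish.
    return (verify_device_attestation(device_token)
            and verify_security_key(key_response, challenge))


print(authenticate("trusted-device", "signed:nonce-42", "nonce-42"))  # True
```

The design point is that each factor fails independently, so compromising one (say, stealing the device token) is not sufficient to open a session.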

A multi-disciplinary team of researchers has developed a way to monitor the progression of movement disorders using motion capture technology and AI.

In two ground-breaking studies, published in Nature Medicine, a cross-disciplinary team of AI and clinical researchers has shown that, by combining human movement data gathered from wearable technology with a powerful new medical AI, they can identify clear movement patterns, predict future disease progression, and significantly increase the efficiency of clinical trials in two very different rare disorders: Duchenne muscular dystrophy (DMD) and Friedreich’s ataxia (FA).

DMD and FA are rare, degenerative genetic diseases that affect movement and eventually lead to paralysis. There are currently no cures for either disease, but researchers hope that these results will significantly speed up the search for new treatments.

Scientists have worked out why common anti-depressants cause around half of users to feel emotionally “blunted.” In a study published today in Neuropsychopharmacology, they show that the drugs affect reinforcement learning, an important behavioral process that allows people to learn from their environment.
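Reinforcement learning in this behavioral sense is commonly modeled with a simple prediction-error update rule (a Rescorla-Wagner-style model). The study's own task and fitted model are not reproduced here; this is a generic sketch of the mechanism the drugs are thought to dampen:

```python
# Generic prediction-error learning rule: the expected value V of an
# action is nudged toward each observed reward R by a learning rate
# alpha. Emotional "blunting" can be loosely pictured as a weakened
# reward signal or reduced effective alpha (an interpretive assumption,
# not the paper's claim).


def update_value(value: float, reward: float, alpha: float = 0.1) -> float:
    prediction_error = reward - value  # surprise: reward vs. expectation
    return value + alpha * prediction_error


# Learning from a repeated reward of 1.0: the estimate climbs toward 1.0.
v = 0.0
for _ in range(50):
    v = update_value(v, reward=1.0, alpha=0.1)
print(round(v, 3))
```

With a smaller alpha (or a scaled-down reward signal), the same loop converges more slowly, which is one way blunted reinforcement learning shows up in such models.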

According to the NHS, more than 8.3 million patients in England received an antidepressant in 2021/22. A widely used class of antidepressants, particularly for persistent or severe cases, is selective serotonin reuptake inhibitors (SSRIs). These drugs target serotonin, a chemical that carries messages between nerve cells in the brain and has been dubbed the “pleasure chemical.”

One of the widely reported side effects of SSRIs is “blunting,” where patients report feeling emotionally dull and no longer finding things as pleasurable as they used to. Between 40% and 60% of patients taking SSRIs are believed to experience this side effect.

Researchers from the Chinese Academy of Sciences’ Institute of Modern Physics and their collaborators have identified the most significant isospin mixing observed in beta-decay experiments, directly challenging our current understanding of the nuclear force. The findings were featured as an Editors’ Suggestion in the journal Physical Review Letters.

In 1932, Werner Heisenberg, a Nobel Prize laureate, introduced the idea of isospin to explain the symmetry in atomic nuclei resulting from the similar properties of protons and neutrons. Isospin symmetry is still widely accepted today.

However, isospin symmetry is not strictly conserved, due to the proton-neutron mass difference, the Coulomb interaction, and charge-dependent components of the nuclear force. This asymmetry causes the allowed Fermi transition to fragment over many states via strong isospin mixing, instead of being confined to a single state in β decay.
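In the isospin formalism, the proton and neutron are the two projections of a single isospin-1/2 doublet (shown here in the particle-physics sign convention, which is an assumption; nuclear physics often flips the sign of $T_z$):

```latex
\left| p \right\rangle = \left| T = \tfrac{1}{2},\; T_z = +\tfrac{1}{2} \right\rangle ,
\qquad
\left| n \right\rangle = \left| T = \tfrac{1}{2},\; T_z = -\tfrac{1}{2} \right\rangle
```

An allowed Fermi transition ideally obeys $\Delta T = 0$, connecting a state to its isobaric analog; when isospin mixing is strong, that strength is instead spread over several final states, which is the fragmentation the experiment observed.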

Researchers report a new and highly unusual family of structured-light 3D topological solitons, photonic hopfions, whose topological textures and topological numbers can be tuned freely and independently.

Localized wave structures that maintain their shape as they propagate are common in daily life: picture a smoke ring flying through the air. Similar stable structures have been studied in various research fields and can be found in magnets, nuclear systems, and particle physics. Unlike a ring of smoke, they can be made resilient to perturbations. This is known in mathematics and physics as topological protection.

A typical example is the nanoscale hurricane-like texture of a magnetic field in magnetic thin films; these structures behave as particles, keeping their shape, and are called skyrmions. Similar doughnut-shaped (toroidal) patterns in 3D space, visualizing complex spatial distributions of various properties of a wave, are called hopfions. Achieving such structures with light waves has proven very elusive.

AI robots, featuring Elon Musk and Boston Dynamics.

Sources:

Future of Life Institute AI discussion with Elon Musk:

AI Alignment study, OpenAI, Oxford and UC Berkeley:

Ray Kurzweil on the Law of accelerating returns:


Ray Kurzweil: Acceleration of technology is the implication of what I call the law of accelerating returns. The nature of technological progress is exponential. If I count linearly 30 steps: 1, 2, 3, 4, 5… I get to 30. If I count exponentially: 2, 4, 8, 16… 30 steps later I’m at a billion. It makes a dramatic difference.
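Kurzweil's counting example is easy to check directly:

```python
# 30 linear steps vs. 30 exponential (doubling) steps.
steps = 30
linear = steps            # 1, 2, 3, ... -> 30
exponential = 2 ** steps  # 2, 4, 8, ... -> 2^30
print(linear)             # 30
print(exponential)        # 1073741824, roughly a billion
```

Thirty doublings land at 2^30 = 1,073,741,824, which is the "billion" in the quote.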

TRANSCENDENT MAN chronicles the life and ideas of Ray Kurzweil, the inventor and futurist known for his bold vision of the Singularity, a point in the near future when technology will be changing so rapidly that we will have to enhance ourselves with artificial intelligence to keep up. Ray predicts this will be the dawning of a new civilization in which we will no longer be dependent on our physical bodies, we will be billions of times more intelligent, and there will be no clear distinction between human and machine, or between real reality and virtual reality.

AI is being used to generate everything from images to text to artificial proteins, and now another thing has been added to the list: speech. Last week researchers from Microsoft released a paper on a new AI called VALL-E that can accurately simulate anyone’s voice based on a sample just three seconds long. VALL-E isn’t the first speech simulator to be created, but it’s built in a different way than its predecessors—and could carry a greater risk for potential misuse.

Most existing text-to-speech models use waveforms (graphical representations of sound waves as they move through a medium over time) to create fake voices, tweaking characteristics like tone or pitch to approximate a given voice. VALL-E, though, takes a sample of someone’s voice and breaks it down into components called tokens, then uses those tokens to create new sounds based on the “rules” it already learned about this voice. If a voice is particularly deep, or a speaker pronounces their A’s in a nasal-y way, or they’re more monotone than average, these are all traits the AI would pick up on and be able to replicate.
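The discrete-token idea can be illustrated with a toy quantizer. This is not VALL-E's actual model (which uses a learned neural codec and a language model over its codes); it only shows how continuous audio samples become a sequence of integer codes from a fixed codebook, so that generation turns into next-token prediction rather than waveform regression:

```python
# Toy illustration (assumption, not VALL-E's architecture): map each
# audio sample to the index of the nearest entry in a fixed codebook.
# The resulting integer sequence is what a token-based model predicts.


def quantize(samples, codebook):
    """Replace each sample with the index of the nearest codebook entry."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - s))
            for s in samples]


codebook = [-1.0, -0.5, 0.0, 0.5, 1.0]    # assumed toy codebook
waveform = [0.1, 0.9, -0.4, 0.02, -0.95]  # assumed toy "audio" samples
tokens = quantize(waveform, codebook)
print(tokens)  # [2, 4, 1, 2, 0]
```

A real neural codec learns its codebooks and operates on short frames rather than single samples, but the payoff is the same: speaker traits live in the statistics of the token sequence, which a language model can learn and reproduce.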

The model is based on a technology called EnCodec by Meta, which was released just this past October. The tool uses a three-part system to compress audio to one-tenth the size of an MP3 with no loss in quality; its creators intended one of its uses to be improving the quality of voice and music on calls made over low-bandwidth connections.

Chengyi Wang*, Sanyuan Chen*, Yu Wu*, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, Furu Wei.

Microsoft

Abstract. We introduce a language modeling approach for text-to-speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech, which is hundreds of times larger than existing systems. VALL-E demonstrates in-context learning capabilities and can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt. Experimental results show that VALL-E significantly outperforms the state-of-the-art zero-shot TTS system in terms of speech naturalness and speaker similarity. In addition, we find that VALL-E can preserve the speaker’s emotion and the acoustic environment of the acoustic prompt in synthesis.