LOS ANGELES — UCLA neuroscientists reported Monday that they have transferred a memory from one animal to another via injections of RNA, a startling result that challenges the widely held view of where and how memories are stored in the brain.

The finding from the lab of David Glanzman hints at the potential for new RNA-based treatments to one day restore lost memories and, if correct, could shake up the field of memory and learning.


“It’s pretty shocking,” said Dr. Todd Sacktor, a neurologist and memory researcher at SUNY Downstate Medical Center in Brooklyn, N.Y. “The big picture is we’re working out the basic alphabet of how memories are stored for the first time.” He was not involved in the research, which was published in eNeuro, the online journal of the Society for Neuroscience.

Many scientists are expected to view the research more cautiously. The work is in snails, animals that have proved a powerful model organism for neuroscience but whose simple brains work far differently from those of humans. The experiments will need to be replicated, including in animals with more complex brains. And the results fly in the face of a massive amount of evidence supporting the deeply entrenched idea that memories are stored through changes in the strength of connections, or synapses, between neurons.

The world’s most powerful laser is scheduled for a slate of experiments next year.

The laser, in Romania, managed to fire at 10 petawatts — that’s one-tenth the power of all the sunlight that reaches Earth concentrated into a single laser beam — during a test run in March. Now, according to ExtremeTech, the scientists behind it aim to develop new high-energy cancer treatments and to simulate supernovas to reveal how such stellar explosions form heavy metals.

The laser is part of the European Union’s Extreme Light Infrastructure project. The hope is that lasers will lead to new medical techniques, a better understanding of how the universe works, and improved nuclear safety.

Musk’s tweet offering guidance on timing for an anticipated increase in Tesla’s market cap has since been deleted, but screenshots were widely shared on Twitter.

The U.S. Securities and Exchange Commission has clashed with Musk and Tesla over the CEO’s unfettered use of Twitter before.

In the third quarter of 2018, Musk faced securities fraud charges from the SEC after he tweeted to his tens of millions of followers that he was planning to take Tesla private at $420 a share and had secured funding to do so. Tesla’s stock price jumped more than 6 percent that day.

A hand-painted “self-portrait” by the world-famous humanoid robot, Sophia, has sold at auction for over $688,000.

The work, which saw Sophia “interpret” a depiction of her own face, was offered as a non-fungible token, or NFT, a unique blockchain-based token of ownership that has revolutionized the art market in recent months.

Titled “Sophia Instantiation,” the image was created in collaboration with Andrea Bonaceto, an artist and partner at blockchain investment firm Eterna Capital. Bonaceto began the process by producing a brightly colored portrait of Sophia, which was processed by the robot’s neural networks. Sophia then painted an interpretation of the image.

Wireless power transmission has been the stuff of science fiction for more than a century, but now PowerLight Technologies is turning it into science fact … with frickin’ laser beams.

“Laser power is closer than you think,” PowerLight CEO Richard Gustafson told GeekWire this week.

Artificial microswimmers that can replicate the complex behavior of active matter are often designed to mimic the self-propulsion of microscopic living organisms. However, compared with their living counterparts, artificial microswimmers have a limited ability to adapt to environmental signals or to retain a physical memory to yield optimized emergent behavior. Different from macroscopic living systems and robots, both microscopic living organisms and artificial microswimmers are subject to Brownian motion, which randomizes their position and propulsion direction. Here, we combine real-world artificial active particles with machine learning algorithms to explore their adaptive behavior in a noisy environment with reinforcement learning. We use a real-time control of self-thermophoretic active particles to demonstrate the solution of a simple standard navigation problem under the inevitable influence of Brownian motion at these length scales. We show that, with external control, collective learning is possible. Concerning the learning under noise, we find that noise decreases the learning speed, modifies the optimal behavior, and also increases the strength of the decisions made. As a consequence of time delay in the feedback loop controlling the particles, an optimum velocity, reminiscent of optimal run-and-tumble times of bacteria, is found for the system, which is conjectured to be a universal property of systems exhibiting delayed response in a noisy environment.
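The Brownian randomization of position and propulsion direction described in the abstract can be illustrated with a minimal active-Brownian-particle simulation. This is a sketch of the standard textbook model, not the study's actual implementation; all parameter values (speed, diffusion coefficients, time step) are illustrative assumptions.

```python
import math
import random

def simulate_abp(steps=1000, dt=0.01, v=1.0, D_t=0.1, D_r=0.5, seed=0):
    """Minimal 2D active Brownian particle: constant self-propulsion
    at speed v, plus translational (D_t) and rotational (D_r) diffusion
    that randomize the position and the propulsion direction."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    for _ in range(steps):
        # deterministic propulsion along the current orientation
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        # translational Brownian kicks on the position
        x += math.sqrt(2 * D_t * dt) * rng.gauss(0, 1)
        y += math.sqrt(2 * D_t * dt) * rng.gauss(0, 1)
        # rotational diffusion randomizes the propulsion direction
        theta += math.sqrt(2 * D_r * dt) * rng.gauss(0, 1)
    return x, y, theta
```

Switching off both noise terms (`D_t=0, D_r=0`) recovers straight-line motion of length `v * steps * dt`, which makes the role of the rotational noise in destroying directed motion easy to see.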

Living organisms adapt their behavior according to their environment to achieve a particular goal. Information about the state of the environment is sensed, processed, and encoded in biochemical processes in the organism to provide appropriate actions or properties. These learning or adaptive processes occur within the lifetime of a generation, over multiple generations, or over evolutionarily relevant time scales. They lead to specific behaviors of individuals and collectives. Swarms of fish or flocks of birds have developed collective strategies adapted to the existence of predators (1), and collective hunting may represent a more efficient foraging tactic (2). Birds learn how to use convective air flows (3). Sperm have evolved complex swimming patterns to explore chemical gradients in chemotaxis (4), and bacteria express specific shapes to follow gravity (5).

Inspired by these optimization processes, learning strategies that reduce the complexity of the physical and chemical processes in living matter to a mathematical procedure have been developed. Many of these learning strategies have been implemented in robotic systems (7–9). One particular framework is reinforcement learning (RL), in which an agent gains experience by interacting with its environment (10). The value of this experience relates to rewards (or penalties) connected to the states that the agent can occupy. The learning process then maximizes the cumulative reward for a chain of actions to obtain the so-called policy. This policy advises the agent which action to take. Recent computational studies, for example, reveal that RL can provide optimal strategies for the navigation of active particles through flows (11–13), the swarming of robots (14–16), the soaring of birds, or the development of collective motion (17).
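The RL loop just described (states, actions, rewards, and a learned policy that maximizes cumulative reward) can be sketched with tabular Q-learning on a toy grid-navigation task. The grid, the parameters, and the random-action noise (a crude stand-in for the Brownian randomization of the propulsion direction) are all illustrative assumptions, not the study's actual setup.

```python
import random

def q_learn_grid(size=5, episodes=500, alpha=0.5, gamma=0.9,
                 eps=0.1, noise=0.1, seed=0):
    """Tabular Q-learning on a size x size grid. The agent starts at
    (0, 0) and earns reward +1 on reaching the goal at the opposite
    corner. With probability `noise`, a random action replaces the
    chosen one, mimicking orientation-randomizing noise."""
    rng = random.Random(seed)
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    Q = {}
    def q(s, a):
        return Q.get((s, a), 0.0)
    goal = (size - 1, size - 1)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * size * size):  # step cap per episode
            # epsilon-greedy choice of action
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda b: q(s, b)))
            # with probability `noise`, a random action is executed instead
            executed = rng.choice(actions) if rng.random() < noise else a
            s2 = (min(max(s[0] + executed[0], 0), size - 1),
                  min(max(s[1] + executed[1], 0), size - 1))
            r = 1.0 if s2 == goal else 0.0
            # one-step Q-update toward reward + discounted future value
            best_next = max(q(s2, b) for b in actions)
            Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
            s = s2
            if s == goal:
                break
    # the learned policy: greedy action for every visited state
    return {s: max(actions, key=lambda b: q(s, b))
            for s in {k[0] for k in Q}}
```

Calling `q_learn_grid()` returns a dictionary mapping each visited grid cell to its greedy action; the cumulative-reward maximization in the update rule is what turns raw experience into this policy.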