AlphaEvolve, an AI system created by Google DeepMind, is helping mathematicians do research at a scale that was previously impossible — even if it does occasionally “cheat” to find a solution.
Rapidly expanding advances in computational prediction capabilities have led to the identification of many potential materials that were previously unknown, including millions of solid-state compounds and hundreds of nanoparticles with complex compositions and morphologies. Autonomous workflows are being developed to accelerate experimental validation of these bulk and nanoscale materials through synthesis. For colloidal nanoparticles, such strategies have focused primarily on compositionally simple systems, due in part to limitations in the generalizability of chemical reactions and incompatibilities between automated setups and mainstream laboratory methods. As a result, the gap between theoretically predicted and experimentally synthesizable materials is rapidly widening. Here, we use a simple automated platform to drive a massively generalizable reaction capable of producing more than 651 quadrillion distinct core@multishell nanoparticles using a single set of reaction conditions. As a strategic model system, we chose a family of seven isostructural layered rare earth (RE) oxychloride compounds, REOCl (RE = La, Ce, Pr, Nd, Sm, Gd, Dy), which are well-known 2D materials with composition-dependent optical, electronic, and catalytic properties. By integrating a computer-driven, hobbyist-level pump system with a laboratory-scale synthesis setup, we could grow up to 20 REOCl shells in any sequence on a REOCl nanoparticle core. Reagent injection sequences were programmed to introduce composition gradients, luminescent dopants, and binary through high-entropy solid solutions, which expands the library to a near-infinite scope. We also used ChatGPT to randomly select several core@multishell nanoparticle targets within predefined constraints and then direct the automated setup to synthesize them.
This platform, which includes both massively generalizable nanochemical reactions and laboratory-scale automated synthesis, is poised for plug-and-play integration into autonomous materials discovery workflows to expand the translation of prediction to realization through efficient synthesis.
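The headline count above is pure combinatorics, and a few lines of Python can reproduce it. This is a minimal sketch, assuming the figure counts every sequence of one core plus 0 to 20 shells, each layer drawn independently from the 7 REOCl compositions (variable names are illustrative; the gradients and dopants mentioned in the abstract would expand the space well beyond this baseline):

```python
# Combinatorics behind the "more than 651 quadrillion" figure:
# a particle is a core plus 0-20 shells, i.e. 1 to 21 layers total,
# and each layer is one of 7 REOCl compositions (La, Ce, Pr, Nd, Sm, Gd, Dy).
compositions = 7   # number of REOCl variants
max_shells = 20    # maximum shells grown on a core

# Sum 7^k over k = 1..21, where k is the layer count including the core.
total = sum(compositions ** layers for layers in range(1, max_shells + 2))

print(f"{total:,}")  # just over 651 quadrillion (~6.52e17)
```

The geometric series sums to (7^22 - 7)/6, about 6.52 x 10^17, consistent with the "more than 651 quadrillion" figure quoted in the abstract.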
Revealing Light’s Hidden Magnetic Power

New research shows that the magnetic part of light actively shapes how light interacts with matter, challenging a 180-year-old belief.

The team demonstrated that this magnetic component significantly contributes to the Faraday Effect, accounting for up to 70% of the rotation in the infrared range. By proving that light can magnetically torque materials, the findings open unexpected pathways for advanced optical and magnetic technologies.
Necrosis Inhibitors To Pause The Diseases Of Aging — Dr. Carina Kern Ph.D. — CEO, LinkGevity
Dr. Carina Kern, Ph.D., is the CEO of LinkGevity (https://www.linkgevity.com/), an AI-powered biotech company driving innovation in drug discovery for aging and resilience loss.
Dr. Kern has developed a new Blueprint Theory of Aging, which takes an integrative approach to understanding aging, combining evolutionary theory, genetics, molecular mechanisms and medicine, and is used to structure LinkGevity’s AI.
Dr. Kern’s labs are based at the Babraham Research Campus, affiliated with the University of Cambridge, and her research has led to the development of a first-in-class necrosis inhibitor targeting cellular degeneration (Anti-Necrotic™). Developed with support from the UK Government, the Francis Crick Institute KQ labs, and the European Union (Horizon), this novel therapeutic is set to begin Phase II clinical trials later this year as a potential breakthrough treatment for aging.
The Anti-Necrotic™ has also been selected as one of only 12 global innovations for NASA’s Space-Health program, recognizing its potential to mitigate accelerated aging in astronauts on long-duration space missions.
Soft robotic actuators have long traded strength for compliance. A new material bends that rule.
Researchers in South Korea say they have built a soft, magnetic artificial muscle that hits hard numbers without turning into a stiff piston. The material flexes, contracts and relaxes like flesh, yet ramps up stiffness on demand when asked to do real work. That mix has long sat out of reach for humanoid robots that need both agility and strength.
Most humanoids move with a cocktail of motors, gears and pneumatic lines. These systems deliver power, but they also add bulk and make contact risky. Soft actuators change the equation. They integrate into limbs, cushion impacts and tolerate misalignment. They also weigh far less than hydraulic stacks and slot neatly inside compact forms like hands, faces and torsos.
Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.
A new generation of LLMs known as reasoning models is being trained to solve complex problems. Like humans, they need some time to think through problems like these, and remarkably, scientists at MIT’s McGovern Institute for Brain Research have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with.
In other words, they report in the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.