Atomic simulations deepen the mystery of how engineered materials known as refractory high-entropy alloys can suffer so little damage by radiation.

Refractory high-entropy alloys are materials made from multiple high-melting-point metals in roughly equal proportions. Those containing tungsten exhibit minimal changes in mechanical properties when exposed to continuous radiation and could be used to shield the crucial components of future nuclear reactors. Now Jesper Byggmästar and his colleagues at the University of Helsinki have performed atomic simulations that explore the uncertain origins of this radiation resistance [1]. The findings could help scientists design novel materials that are even more robust than these alloys in extreme environments.

The researchers studied a tungsten-based refractory high-entropy alloy using state-of-the-art simulations guided by machine learning. In particular, they modeled the main mechanism by which radiation can disrupt such an alloy’s atomic structure. In this mechanism, the incoming radiation causes one atom in the alloy to displace another atom, forming one or more structural defects. The team determined the threshold energy needed to induce such displacements and its dependence on the masses of the two involved atoms.
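The mass dependence the team examined follows from classical binary-collision kinematics: the maximum energy an incoming atom can transfer in an elastic head-on collision shrinks as the two masses diverge. As a rough sketch (not the study's actual model, and the element pairing here is purely illustrative):

```python
def max_energy_transfer(e_kin, m1, m2):
    """Maximum kinetic energy transferred from a moving atom of mass m1
    to a stationary atom of mass m2 in an elastic head-on collision
    (classical two-body kinematics). Masses in any consistent unit."""
    return 4.0 * m1 * m2 / (m1 + m2) ** 2 * e_kin

# Equal masses transfer all kinetic energy in a head-on hit.
print(max_energy_transfer(100.0, 1.0, 1.0))  # 100.0

# A heavy atom striking a much lighter one transfers only a fraction,
# so the displacement threshold is harder to reach. Tungsten-vanadium
# is an illustrative pairing, not the alloy composition from the study.
m_w, m_v = 183.84, 50.94  # standard atomic weights in u
print(round(max_energy_transfer(100.0, m_w, m_v), 1))
```

If the transferred energy exceeds the threshold displacement energy, the struck atom leaves its lattice site and a defect can form; mass mismatch between the collision partners therefore directly shapes the threshold behavior the team mapped.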

A new visual recognition approach improved a machine learning technique’s ability both to identify an object and to determine how it is oriented in space, according to a study presented in October at the European Conference on Computer Vision in Milan, Italy.

Self-supervised learning is a machine learning approach that trains on unlabeled data, which helps models generalize to real-world data. While it excels at identifying objects, a task called semantic classification, it may struggle to recognize objects in new poses.

This weakness quickly becomes a problem in situations like autonomous vehicle navigation, where an algorithm must assess whether an approaching car is a head-on collision threat or side-oriented and just passing by.

Artificial intelligence (AI) systems tend to take on human biases and amplify them, causing people who use that AI to become more biased themselves, finds a new study by UCL researchers.

Human and AI biases can consequently create a feedback loop, with small initial biases increasing the risk of human error, according to the findings published in Nature Human Behaviour.

The researchers demonstrated that AI bias can have real-world consequences, as they found that people interacting with biased AIs became more likely to underestimate women’s performance and overestimate white men’s likelihood of holding high-status jobs.

Google’s Willow chip achieves scalable quantum error correction, reducing errors and maintaining stability across a million cycles.

OpenAI is giving users a new way to talk to its viral chatbot: 1-800-CHATGPT.

By dialing the U.S. number (1-800-242-8478) or messaging it via WhatsApp, users can access an “easy, convenient, and low-cost way to try it out through familiar channels,” OpenAI said Wednesday. At first, the company said, callers will get 15 minutes free per month.

The news follows a barrage of updates from OpenAI as part of a 12-day release event. The most notable announcement was the official rollout of Sora, OpenAI’s buzzy AI video-generation tool.

As the amount of genomic data grows, so too does the challenge of organizing it into a usable database. Indeed, the lack of a searchable database of genomic information from the literature has posed a challenge to the research community. Now, Genomenon’s AI-based approach—the Genomenon Genomic Graph (G3) knowledgebase—combines patient and biological data from nearly all published scientific and medical studies, including demographics, clinical characteristics, phenotypes, treatments, outcomes, and disease-associated genes and variants.

Training of the underlying large language model for G3 uses Genomenon’s proprietary, curated genomic datasets. The knowledgebase will power AI-driven predictive models for clinical diagnostics and drug development applications.

Genomenon, an Ann Arbor, MI-based provider of genomic intelligence solutions, notes that this advancement represents the first time that content from the entire corpus of clinically relevant literature will be captured in a single, searchable knowledgebase.

Mapping the geometry of quantum worlds: measuring the quantum geometric tensor in solids.

Quantum states are like complex shapes in a hidden world, and understanding their geometry is key to unlocking the mysteries of modern physics. One of the most important tools for studying this geometry is the quantum geometric tensor (QGT). This mathematical object reveals how quantum states “curve” and interact, shaping phenomena ranging from exotic materials to groundbreaking technologies.

The QGT has two parts, each with distinct significance:

1. The Berry curvature (the imaginary part): This governs topological phenomena, such as unusual electrical and magnetic behaviors in advanced materials.

2. The quantum metric (the real part): Recently gaining attention, this influences surprising effects like flat-band superfluidity, quantum Landau levels, and even the nonlinear Hall effect.
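In the standard formulation (up to convention-dependent factors), both parts come from one object. For Bloch states $|u_{\mathbf{k}}\rangle$:

$$
Q_{ij}(\mathbf{k}) = \langle \partial_{k_i} u_{\mathbf{k}} \,|\, \bigl(1 - |u_{\mathbf{k}}\rangle\langle u_{\mathbf{k}}|\bigr) \,|\, \partial_{k_j} u_{\mathbf{k}} \rangle = g_{ij}(\mathbf{k}) - \tfrac{i}{2} F_{ij}(\mathbf{k}),
$$

where the real part $g_{ij}$ is the quantum metric and the imaginary part encodes the Berry curvature $F_{ij}$.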

While the QGT is crucial for understanding these phenomena, measuring it directly has been a challenge, previously limited to simple, artificial systems.

A breakthrough now allows scientists to measure the QGT in real crystalline solids. Using an advanced technique involving polarization-, spin-, and angle-resolved photoemission spectroscopy, researchers have reconstructed the QGT in a material called CoSn, a “kagome metal” with unique quantum properties like topological flat bands. This metal forms patterns resembling a woven basket, hosting quantum effects that were previously only theorized.

Research utilizing AI tool AlphaFold has revealed a new protein complex that initiates the fertilization process between sperm and egg, shedding light on the molecular interactions essential for successful fertilization.

Genetic research has uncovered many proteins involved in the initial contact between sperm and egg. However, direct proof of how these proteins bind or form complexes to enable fertilization remained unclear. Now, Andrea Pauli’s lab at the IMP, working with international collaborators, has combined AI-driven structural predictions with experimental evidence to reveal a key fertilization complex. Their findings, based on studies in zebrafish, mice, and human cells, were published in the journal Cell.

Fertilization is the first step in forming an embryo, starting with the sperm’s journey toward the egg, guided by chemical signals. When the sperm reaches the egg, it binds to the egg’s surface through specific protein interactions. This binding readies their membranes to merge, allowing their genetic material to combine and create a zygote—a single cell that will eventually develop into a new organism.

In 1956, a group of pioneering minds gathered at Dartmouth College to define what we now call artificial intelligence (AI). Even in the early 1990s when colleagues and I were working for early-stage expert systems software companies, the notion that machines could mimic human intelligence was an audacious one. Today, AI drives businesses, automates processes, creates content, and personalizes experiences in every industry. It aids and abets more economic activity than we “ignorant savages” (as one of the founding fathers of AI, Marvin Minsky, referred to our coterie) could have ever imagined. Admittedly, the journey is still early—a journey that may take us from narrow AI to artificial general intelligence (AGI) and ultimately to artificial superintelligence (ASI).

As business and technology leaders, it’s crucial to understand what’s coming: where AI is headed, how far off AGI and ASI might be, and what opportunities and risks lie ahead. To ignore this evolution would be like a factory owner in 1900 dismissing electricity as a passing trend.

Let’s first take stock of where we are. Modern AI is narrow AI—technologies built to handle specific tasks. Whether it’s a large language model (LLM) chatbot responding to customers, algorithms optimizing supply chains, or systems predicting loan defaults, today’s AI excels at isolated functions.