Mapping the geometry of quantum worlds: measuring the quantum geometric tensor in solids.

Quantum states are like complex shapes in a hidden world, and understanding their geometry is key to unlocking the mysteries of modern physics. One of the most important tools for studying this geometry is the quantum geometric tensor (QGT). This mathematical object reveals how quantum states “curve” and interact, shaping phenomena ranging from exotic materials to groundbreaking technologies.

The QGT has two parts, each with distinct significance:

1. The Berry curvature (the imaginary part): This governs topological phenomena, such as unusual electrical and magnetic behaviors in advanced materials.

2. The quantum metric (the real part): Recently gaining attention, this influences surprising effects like flat-band superfluidity, quantum Landau levels, and even the nonlinear Hall effect.
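Both parts arise from a single mathematical object. In the convention most common in the literature, for a Bloch state $|u(\mathbf{k})\rangle$ the QGT can be written as

```latex
Q_{ij}(\mathbf{k})
  = \langle \partial_{k_i} u \,|\, \bigl(1 - |u\rangle\langle u|\bigr) \,|\, \partial_{k_j} u \rangle
  = g_{ij}(\mathbf{k}) - \tfrac{i}{2}\,\Omega_{ij}(\mathbf{k}),
```

where the real part $g_{ij}$ is the quantum metric, measuring the "distance" between neighboring quantum states, and the imaginary part encodes the Berry curvature $\Omega_{ij}$.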

Research using the AI tool AlphaFold has revealed a new protein complex that initiates the fertilization process between sperm and egg, shedding light on the molecular interactions essential for successful fertilization.

Genetic research has uncovered many proteins involved in the initial contact between sperm and egg. However, direct evidence of how these proteins bind or form complexes to enable fertilization remained elusive. Now, Andrea Pauli’s lab at the IMP, working with international collaborators, has combined AI-driven structural predictions with experimental evidence to reveal a key fertilization complex. Their findings, based on studies in zebrafish, mice, and human cells, were published in the journal Cell.

Fertilization is the first step in forming an embryo, starting with the sperm’s journey toward the egg, guided by chemical signals. When the sperm reaches the egg, it binds to the egg’s surface through specific protein interactions. This binding readies their membranes to merge, allowing their genetic material to combine and create a zygote—a single cell that will eventually develop into a new organism.

In 1956, a group of pioneering minds gathered at Dartmouth College to define what we now call artificial intelligence (AI). Even in the early 1990s when colleagues and I were working for early-stage expert systems software companies, the notion that machines could mimic human intelligence was an audacious one. Today, AI drives businesses, automates processes, creates content, and personalizes experiences in every industry. It aids and abets more economic activity than we “ignorant savages” (as one of the founding fathers of AI, Marvin Minsky, referred to our coterie) could have ever imagined. Admittedly, the journey is still early—a journey that may take us from narrow AI to artificial general intelligence (AGI) and ultimately to artificial superintelligence (ASI).

As business and technology leaders, it’s crucial to understand what’s coming: where AI is headed, how far off AGI and ASI might be, and what opportunities and risks lie ahead. To ignore this evolution would be like a factory owner in 1900 dismissing electricity as a passing trend.

Let’s first take stock of where we are. Modern AI is narrow AI: technologies built to handle specific tasks. Whether it’s a large language model (LLM) chatbot responding to customers, algorithms optimizing supply chains, or systems predicting loan defaults, today’s AI excels at isolated functions.

Chris McHenry is Vice President of Product Management at Aviatrix.

Enterprise reliance on cloud computing is no longer a question of “if” but “how much” and “how secure.” The cloud has become the backbone of modern business, enabling rapid scaling, seamless integration and global reach.

However, as cloud adoption matures, so do its associated costs—driven significantly by the rise of artificial intelligence (AI) and the escalating energy demands of data centers. For instance, OpenAI recently revealed plans to increase its prices by 120% over the next five years, even after securing an industry-record $6.6 billion in funding.

The notion of entropy grew out of an attempt at perfecting machinery during the Industrial Revolution. A 28-year-old French military engineer named Sadi Carnot set out to calculate the ultimate efficiency of the steam-powered engine. In 1824, he published a 118-page book titled Reflections on the Motive Power of Fire, which he sold on the banks of the Seine for 3 francs. Carnot’s book was largely disregarded by the scientific community, and he died several years later of cholera. His body was burned, as were many of his papers. But some copies of his book survived, and in them lay the embers of a new science of thermodynamics — the motive power of fire.

Carnot realized that the steam engine is, at its core, a machine that exploits the tendency for heat to flow from hot objects to cold ones. He conceived the most efficient engine possible, establishing an upper bound on the fraction of heat that can be converted to work, a result now known as Carnot’s theorem. His most consequential statement comes as a caveat on the last page of the book: “We should not expect ever to utilize in practice all the motive power of combustibles.” Some energy will always be dissipated through friction, vibration, or another unwanted form of motion. Perfection is unattainable.
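In modern notation, Carnot's bound takes a simple form: for an engine running between a hot reservoir at temperature $T_h$ and a cold one at $T_c$, the maximum fraction of heat convertible to work is

```latex
\eta_{\max} = \frac{W}{Q_h} = 1 - \frac{T_c}{T_h},
```

which approaches 1 only in the unattainable limits $T_c \to 0$ or $T_h \to \infty$. Any real engine falls short of this ideal.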

Reading through Carnot’s book a few decades later, in 1865, the German physicist Rudolf Clausius coined a term for the proportion of energy that’s locked up in futility. He called it “entropy,” after the Greek word for transformation. He then laid out what became known as the second law of thermodynamics: “The entropy of the universe tends to a maximum.”

Physicists of the era erroneously believed that heat was a fluid (called “caloric”). Over the following decades, they realized heat was rather a byproduct of individual molecules bumping around. This shift in perspective allowed the Austrian physicist Ludwig Boltzmann to reframe and sharpen the idea of entropy using probabilities.

Boltzmann distinguished the microscopic properties of molecules, such as their individual locations and velocities, from bulk macroscopic properties of a gas like temperature and pressure…
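Boltzmann's sharpened definition, in modern notation, ties entropy to a count of microscopic arrangements:

```latex
S = k_B \ln W,
```

where $W$ is the number of microstates (configurations of individual molecules) consistent with a given macrostate, and $k_B$ is Boltzmann's constant. Entropy tends to a maximum simply because macrostates with more microstates are overwhelmingly more probable.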

The field of artificial intelligence (AI) has witnessed extraordinary advancements in recent years, ranging from natural language processing breakthroughs to the development of sophisticated robotics. Among these innovations, multi-agent systems (MAS) have emerged as a transformative approach for solving problems that single agents struggle to address. Multi-agent collaboration harnesses the power of interactions between autonomous entities, or “agents,” to achieve shared or individual objectives. In this article, we explore one specific and impactful technique within multi-agent collaboration: role-based collaboration enhanced by prompt engineering. This approach has proven particularly effective in practical applications, such as developing a software application.
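To make the pattern concrete, here is a minimal sketch of role-based collaboration via prompt engineering. The `Agent` class, the three role prompts, and the `make_team` helper are illustrative inventions, not part of any particular framework, and the call to an actual LLM is deliberately omitted; the point is how each agent's behavior is defined entirely by its role-specific system prompt.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """An agent whose specialization comes purely from its system prompt."""
    role: str
    system_prompt: str
    history: list = field(default_factory=list)  # prior turns, if any

    def build_messages(self, task: str) -> list:
        """Assemble the message list this agent would send to an LLM."""
        return [
            {"role": "system", "content": self.system_prompt},
            *self.history,
            {"role": "user", "content": task},
        ]


def make_team() -> dict:
    """Create a software-development team as a set of role-prompted agents."""
    roles = {
        "product_manager": (
            "You are a product manager. Turn the user's idea into a "
            "concise, numbered requirements list."
        ),
        "developer": (
            "You are a senior developer. Implement the given requirements "
            "as clean, well-commented code."
        ),
        "reviewer": (
            "You are a code reviewer. Point out bugs and suggest concrete "
            "improvements to the submitted code."
        ),
    }
    return {name: Agent(role=name, system_prompt=p) for name, p in roles.items()}


team = make_team()
messages = team["developer"].build_messages("Build a CLI to-do app.")
```

In a full system, each agent's output would be appended to the next agent's `history`, so the product manager's requirements flow to the developer, and the developer's code flows to the reviewer.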