
Scientists at the National Institute of Standards and Technology (NIST), with colleagues elsewhere, have employed neutron imaging and a reconstruction algorithm to reveal, for the first time, the 3D shapes and dynamics of very small, tornado-like magnetic arrangements of atoms in bulk materials.

These collective atomic arrangements, called skyrmions, could, if fully characterized and understood, be used to process and store information in a densely packed form that uses several orders of magnitude less energy than is typical now.

The conventional, semiconductor-based method of processing information in binary form (on or off, 0 or 1) employs electrical charge states that must be constantly refreshed by current, which encounters resistance as it passes through transistors and connectors. That’s the main reason that computers get hot.
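
As a rough back-of-the-envelope illustration of the Joule heating described above (all numbers are assumed for illustration, not taken from the NIST work), current flowing through a resistance dissipates power P = I²R:

```python
# Toy illustration of Joule heating: current I flowing through resistance R
# dissipates power P = I**2 * R. Both numbers are illustrative assumptions.

I = 0.5   # current in amperes (assumed)
R = 2.0   # effective resistance of transistors and interconnects in ohms (assumed)

P = I ** 2 * R  # dissipated power in watts
print(f"Dissipated power: {P:.2f} W")  # 0.50 W for these assumed values
```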

Black holes may be less unique than previously thought, as the expansion due to a cosmological constant can hold apart a pair of black holes and allow them to mimic a single black hole.

Black holes are astonishing objects that can pack the mass of Earth into a space the size of a pea. A remarkable attribute is their stunning simplicity, which is encapsulated in the celebrated uniqueness theorems [1]. Briefly stated, these theorems say that there is only one solution to Einstein’s equations of general relativity for a fully collapsed (nonevolving) system having fixed mass and angular momentum [2]. The implication is that all black holes that have settled down to equilibrium with the same mass and rotation are precisely the same: their entire behavior described by a single equation—the so-called Kerr solution—filling only a few lines of paper!
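
For reference, the single equation alluded to here, the Kerr solution, can be written in standard textbook (Boyer-Lindquist) form, in units where G = c = 1, for a black hole of mass M and spin parameter a = J/M:

```latex
ds^2 = -\left(1 - \frac{2Mr}{\Sigma}\right) dt^2
       - \frac{4 M a r \sin^2\theta}{\Sigma}\, dt\, d\phi
       + \frac{\Sigma}{\Delta}\, dr^2
       + \Sigma\, d\theta^2
       + \left(r^2 + a^2 + \frac{2 M a^2 r \sin^2\theta}{\Sigma}\right) \sin^2\theta\, d\phi^2,
\qquad
\Sigma = r^2 + a^2\cos^2\theta, \quad \Delta = r^2 - 2Mr + a^2.
```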

But there is a catch. The uniqueness theorems make a number of assumptions, the key one being that the space around the black hole is “empty”—in other words, there is no energy that might influence the black hole. Such energy can arise from fields, for example, those of the standard model, or from a “cosmological constant,” which is a form of dark energy that might be behind the accelerated expansion of our Universe today. In a fascinating study, Óscar Dias from the University of Southampton, UK, and colleagues demonstrate that uniqueness is violated in the presence of a positive cosmological constant [3]. Specifically, they show that a pair of black holes whose mutual attraction is balanced by the cosmic expansion would look the same to a distant observer as a single isolated black hole. The results may lead to a rethinking of how simple black holes really are.
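
A heuristic way to see how expansion can hold two masses apart (a weak-field sketch, not the full construction by Dias and colleagues): with a positive cosmological constant Λ, a test body at distance r from a mass M feels both the usual attraction and a repulsive term, and the two cancel at a finite radius:

```latex
\ddot{r} \simeq -\frac{GM}{r^2} + \frac{\Lambda c^2}{3}\, r ,
\qquad\text{which vanishes at}\qquad
r_\ast = \left(\frac{3GM}{\Lambda c^2}\right)^{1/3}.
```

Dias and colleagues work with the full Einstein equations rather than this Newtonian limit, but the same competition between attraction and expansion underlies the static two-black-hole configuration.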

Researchers at Shanghai Jiao Tong University have demonstrated a DNA computer system using DNA integrated circuits (DICs) that can solve quadratic equations with 30 logic gates.

Described in a paper published in Nature, the system integrates multiple layers of DNA-based programmable gate arrays (DPGAs). Using generic single-stranded oligonucleotides as a uniform transmission signal, it can reliably integrate large-scale DICs with minimal leakage and high fidelity for general-purpose computing.

To control the intrinsically random collision of molecules, the team designed DNA origami registers that provide directionality for the asynchronous execution of cascaded DPGAs. These registers were used to assemble a DIC that solves quadratic equations with three layers of cascaded DPGAs comprising 30 logic gates and around 500 DNA strands.
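
As a purely schematic picture of what "cascaded layers of gates" means (a Boolean toy in Python, with hypothetical layer and gate choices; it does not model DNA strand-displacement chemistry or the circuit in the Nature paper):

```python
from typing import Callable, List, Tuple

# Toy model of a cascaded programmable gate array: each layer maps the current
# list of binary "signals" to the next layer's signals. This is an abstract
# Boolean sketch, not a model of DNA strand-displacement chemistry.

Gate = Callable[[List[int]], int]

AND: Gate = lambda inputs: int(all(inputs))
OR: Gate = lambda inputs: int(any(inputs))
NOT: Gate = lambda inputs: 1 - inputs[0]

def run_layer(gates: List[Tuple[Gate, List[int]]], signals: List[int]) -> List[int]:
    """Apply each (gate, input_indices) pair; the outputs feed the next layer."""
    return [gate([signals[i] for i in idx]) for gate, idx in gates]

# A hypothetical three-layer cascade acting on four input signals.
layers = [
    [(AND, [0, 1]), (OR, [2, 3]), (NOT, [0])],  # layer 1: 4 signals -> 3 signals
    [(OR, [0, 2]), (AND, [1, 2])],              # layer 2: 3 signals -> 2 signals
    [(AND, [0, 1])],                            # layer 3: 2 signals -> 1 signal
]

signals = [1, 0, 1, 1]
for layer in layers:
    signals = run_layer(layer, signals)
print(signals)  # -> [0] for this input
```

In the actual device, the signals are single-stranded oligonucleotides and the gates are DNA reactions, but the layered, feed-forward cascade is the same idea.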

Researchers from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS) have developed an ultra-cold atom quantum simulator to study the relationship between the non-equilibrium thermalization process and quantum criticality in lattice gauge field theories. The research was led by Pan Jianwei and Yuan Zhensheng, in collaboration with Zhai Hui from Tsinghua University and Yao Zhiyuan from Lanzhou University.

Their findings reveal that many-body systems possessing gauge symmetry tend to thermalize to an equilibrium state more easily when situated in a critical region. The results were published in Physical Review Letters.

Gauge theory and statistical mechanics are two foundational frameworks of physics. From Maxwell’s equations of classical electromagnetism to the Standard Model, which describes the interactions of fundamental particles, physical theories adhere to specific gauge symmetries. On the other hand, statistical mechanics connects the microscopic states of large ensembles of particles (such as atoms and molecules) to their macroscopic statistical behaviors, based on the principle of maximum entropy proposed by Boltzmann and others. It elucidates, for instance, how the energy distribution of microscopic particles affects macroscopic quantities like pressure, volume, or temperature.
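
For orientation, the maximum-entropy principle mentioned here yields, for a system in thermal equilibrium at temperature T, the standard Boltzmann distribution over microscopic energy levels E_i, from which macroscopic averages follow:

```latex
p_i = \frac{e^{-E_i / k_B T}}{Z}, \qquad Z = \sum_j e^{-E_j / k_B T},
\qquad \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}, \quad \beta = \frac{1}{k_B T}.
```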

Engineering researchers at Lehigh University have discovered that sand can actually flow uphill.

The team’s findings were published today in the journal Nature Communications. A corresponding video shows what happens when torque and an attractive force are applied to each grain: the grains flow uphill, up walls, and up and down stairs.

“After using equations that describe the flow of granular materials,” says James Gilchrist, the Ruth H. and Sam Madrid Professor of Chemical and Biomolecular Engineering in Lehigh’s P.C. Rossin College of Engineering and Applied Science and one of the authors of the paper, “we were able to conclusively show that these particles were indeed moving like a granular material, except they were flowing uphill.”
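
As an illustrative toy force balance (assumed numbers, not the granular-flow equations used in the paper), a grain climbs when the traction generated by its applied torque exceeds the downslope pull of gravity plus cohesive resistance:

```python
import math

# Toy force balance for a single spinning grain on an incline.
# All parameters are illustrative assumptions, not values from the paper.

m = 1e-6        # grain mass in kg (assumed)
g = 9.81        # gravitational acceleration, m/s^2
theta = math.radians(20)   # slope angle (assumed)
F_traction = 2e-5    # uphill traction from the applied torque, N (assumed)
F_cohesion = 3e-6    # resisting force from cohesion/friction, N (assumed)

F_gravity_downslope = m * g * math.sin(theta)
net_uphill = F_traction - F_gravity_downslope - F_cohesion

print(f"Downslope gravity component: {F_gravity_downslope:.2e} N")
print(f"Net uphill force: {net_uphill:.2e} N")
print("Grain climbs uphill" if net_uphill > 0 else "Grain stays put or slides down")
```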

A team of researchers in Japan claims to have figured out a way to translate the clucking of chickens with the use of artificial intelligence.

As detailed in a yet-to-be-peer-reviewed preprint, the team led by University of Tokyo professor Adrian David Cheok, who has previously studied sex robots, came up with a “system capable of interpreting various emotional states in chickens, including hunger, fear, anger, contentment, excitement, and distress” by using a “cutting-edge AI technique we call Deep Emotional Analysis Learning.”

They say the technique is “rooted in complex mathematical algorithms” and can even be used to adapt to the ever-changing vocal patterns of chickens, meaning that it only gets better at deciphering “chicken vocalizations” over time.

Quantum behavior is a strange, fragile thing that hovers on the edge of reality, between a world of possibility and a Universe of absolutes. In that mathematical haze lies the potential of quantum computing: the promise of devices that could quickly solve problems that would take classical computers far too long to process.

For now, quantum computers are confined to chambers chilled to near absolute zero (−273 degrees Celsius), where particles are less likely to tumble out of their critical quantum states.

Breaking through this temperature barrier, by developing materials that still exhibit quantum properties at room temperature, has long been a goal of quantum computing. Though the low temperatures help keep the particles’ properties from collapsing out of their useful fog of possibility, the bulk and expense of the cooling equipment limit the technology’s potential and its ability to be scaled up for general use.
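
One rough way to see why such extreme cold is required (a comparison using a typical superconducting-qubit transition frequency as an assumed input, not a figure from this article) is to compare the thermal energy k_BT with the qubit’s level spacing hf:

```python
# Compare thermal energy k_B*T with the energy quantum h*f of a qubit.
# The 5 GHz transition frequency is a typical superconducting-qubit value,
# used here only as an illustrative assumption.

k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s

f_qubit = 5e9         # qubit transition frequency in Hz (assumed)
E_qubit = h * f_qubit

for T in (300.0, 4.0, 0.02):   # room temperature, liquid helium, dilution fridge
    ratio = (k_B * T) / E_qubit
    print(f"T = {T:6.2f} K  ->  k_B*T / (h*f) = {ratio:8.2f}")
```

Only in the millikelvin case is the thermal energy well below the qubit spacing, which is why today’s machines need dilution refrigerators.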

The API-AI nexus isn’t just for tech enthusiasts; its influence has widespread real-world implications. Consider the healthcare sector, where APIs can allow diagnostic AI algorithms to access patient medical records while adhering to privacy regulations. In the financial sector, advanced APIs can connect risk-assessment AIs to real-time market data. In education, APIs can provide the data backbone for AI algorithms designed to create personalized, adaptive learning paths.
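
A minimal sketch of the pattern described above, with a hypothetical endpoint, fields, and model stand-in (none of these correspond to a real service or library):

```python
import json
from urllib import request

# Minimal sketch of the API -> AI pattern described above. The endpoint URL,
# authorization header, response fields, and score_risk() model are all
# hypothetical placeholders, not a real service or library.

def fetch_market_data(symbol: str) -> dict:
    """Fetch governed, real-time data through an (imaginary) REST API."""
    req = request.Request(
        f"https://api.example.com/v1/markets/{symbol}",   # hypothetical endpoint
        headers={"Authorization": "Bearer <token>"},      # access controlled by the API
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def score_risk(market_data: dict) -> float:
    """Stand-in for a risk-assessment AI model that consumes the API's data."""
    return min(1.0, market_data.get("volatility", 0.0) * 2.0)

if __name__ == "__main__":
    # Use a stubbed response here instead of calling the imaginary endpoint.
    data = {"symbol": "ACME", "volatility": 0.35}
    print("risk score:", score_risk(data))
```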

However, this fusion of AI and APIs also raises critical questions about data privacy, ethical use and governance. As we continue to knit together more aspects of our digital world, these concerns will need to be addressed to foster a harmonious and responsible AI-API ecosystem.

We stand at the crossroads of a monumental technological paradigm shift. As AI continues to advance, APIs are evolving in parallel to unlock and amplify this potential. If you’re in the realm of digital products, the message is clear: The future is not just automated; it’s API-fied. Whether you’re a developer, a business leader or an end user, this new age promises unprecedented levels of interaction, personalization and efficiency, but it’s up to us to navigate it responsibly.

Kevin Slagle, Quantum 7, 1113 (2023). Although tensor networks are powerful tools for simulating low-dimensional quantum physics, tensor network algorithms are very computationally costly in higher spatial dimensions. We introduce $\textit{quantum gauge networks}$: a different kind of tensor network ansatz for which the computation cost of simulations does not explicitly increase for larger spatial dimensions. We take inspiration from the gauge picture of quantum dynamics, which consists of a local wavefunction for each patch of space, with neighboring patches related by unitary connections. A quantum gauge network (QGN) has a similar structure, except the Hilbert space dimensions of the local wavefunctions and connections are truncated. We describe how a QGN can be obtained from a generic wavefunction or matrix product state (MPS). All $2k$-point correlation functions of any wavefunction for $M$ many operators can be encoded exactly by a QGN with bond dimension $O(M^k)$. In comparison, for just $k=1$, an exponentially larger bond dimension of $2^{M/6}$ is generically required for an MPS of qubits. We provide a simple QGN algorithm for approximate simulations of quantum dynamics in any spatial dimension. The approximate dynamics can achieve exact energy conservation for time-independent Hamiltonians, and spatial symmetries can also be maintained exactly. We benchmark the algorithm by simulating the quantum quench of fermionic Hamiltonians in up to three spatial dimensions.

When people program new deep learning AI models, those that can focus on the right features of data by themselves, the vast majority rely on optimization algorithms, or optimizers, to ensure the models reach a high enough accuracy. But one of the most commonly used families of optimizers, derivative-based optimizers, runs into trouble handling real-world applications.

In a new paper, researchers from DeepMind propose a new way: Optimization by PROmpting (OPRO), a method that uses AI large language models (LLMs) as optimizers. The unique aspect of this approach is that the optimization task is defined in natural language rather than through formal mathematical definitions.

The researchers write, “Instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions.”
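
A minimal sketch of that loop, assuming hypothetical call_llm() and evaluate() helpers that the reader would supply (this follows the quoted description, not DeepMind’s actual implementation):

```python
# Sketch of Optimization by PROmpting (OPRO): the LLM is repeatedly shown the
# problem description plus previously scored solutions and asked for a better one.
# call_llm() and evaluate() are hypothetical stand-ins the reader must supply.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; returns the model's text response."""
    raise NotImplementedError

def evaluate(solution: str) -> float:
    """User-supplied objective: higher scores mean better solutions."""
    raise NotImplementedError

def opro(problem_description: str, steps: int = 20, keep_top: int = 8) -> list:
    scored = []  # list of (score, solution) pairs found so far
    for _ in range(steps):
        history = "\n".join(f"solution: {s}  score: {v:.3f}"
                            for v, s in sorted(scored, reverse=True)[:keep_top])
        prompt = (f"{problem_description}\n\n"
                  f"Previously found solutions and their scores:\n{history}\n\n"
                  "Propose a new solution that scores higher than all of the above.")
        candidate = call_llm(prompt)
        scored.append((evaluate(candidate), candidate))
    return sorted(scored, reverse=True)
```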