A machine-learning algorithm rapidly generates designs that can be simpler than those developed by humans.

Researchers in optics and photonics rely on devices that interact with light in order to transport it, amplify it, or change its frequency, and designing these devices can be painstaking work requiring human ingenuity. Now a research team has demonstrated that the discovery of the core design concepts can be automated using machine learning, which can rapidly provide efficient designs for a wide range of uses [1]. The team hopes the approach will streamline research and development for scientists and engineers who work with optical, mechanical, or electrical waves, or with combinations of these wave types.

When a researcher needs a transducer, an amplifier, or a similar element in their experimental setup, they draw on design concepts tested and proven in earlier experiments. “There are literally hundreds of articles that describe ideas for the design of devices,” says Florian Marquardt of the University of Erlangen-Nuremberg in Germany. Researchers often adapt an existing design to their specific needs. But there is no standard procedure to find the best design, and researchers could miss out on simpler designs that would be easier to implement.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel artificial intelligence (AI) model inspired by neural oscillations in the brain, with the goal of significantly advancing how machine learning algorithms handle long sequences of data.

AI often struggles with analyzing complex information that unfolds over long periods of time, such as climate trends, biological signals, or financial data. One newer class of AI models, called “state-space models,” is designed specifically to understand these sequential patterns more effectively. However, existing state-space models often face challenges: they can become unstable or require a significant amount of computational resources when processing long data sequences.

To address these issues, CSAIL researchers T. Konstantin Rusch and Daniela Rus have developed what they call “linear oscillatory state-space models” (LinOSS), which leverage principles of forced harmonic oscillators—a concept deeply rooted in physics and observed in biological neural networks.
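The core idea can be sketched in a few lines: each hidden unit behaves like a forced harmonic oscillator driven by the input sequence, and the model produces outputs by stepping those oscillators forward in time and reading out their positions. The snippet below is a minimal NumPy illustration of that description only; the parameter shapes, the symplectic-Euler time step, and the function name are our assumptions, not the authors' LinOSS implementation.

```python
import numpy as np

def oscillator_ssm(u, omega, B, C, dt=0.1):
    """Run a bank of forced harmonic oscillators over an input sequence.

    u: (T, d_in) input sequence; omega: (d_state,) natural frequencies;
    B: (d_state, d_in) input matrix; C: (d_out, d_state) readout matrix.
    Each state follows z'' = -omega^2 * z + B u, stepped with symplectic Euler.
    """
    z = np.zeros_like(omega)  # oscillator positions
    v = np.zeros_like(omega)  # oscillator velocities
    ys = []
    for t in range(u.shape[0]):
        v = v + dt * (-(omega ** 2) * z + B @ u[t])  # velocity update from restoring force plus input
        z = z + dt * v                               # position update using the new velocity
        ys.append(C @ z)                             # linear readout of the positions
    return np.stack(ys)

# Toy usage: three oscillators reading a 1-D signal, scalar readout.
rng = np.random.default_rng(0)
y = oscillator_ssm(rng.normal(size=(50, 1)),
                   omega=np.array([0.5, 1.0, 2.0]),
                   B=rng.normal(size=(3, 1)),
                   C=rng.normal(size=(1, 3)))
print(y.shape)  # (50, 1)
```

For a small enough time step the oscillator dynamics stay bounded, which hints at the stability property the paragraph above alludes to.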

Delivery robots made by companies such as Starship Technologies and Kiwibot autonomously make their way along city streets and through neighborhoods.

Under the hood, these robots—like most in use today—use a variety of sensors and software algorithms to navigate these environments.

Lidar sensors—which send out pulses of light to help calculate the distances of objects—have become a mainstay, enabling these robots to conduct simultaneous localization and mapping, otherwise known as SLAM.
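The physics behind those distance measurements is simple time-of-flight arithmetic: the range is half the round-trip travel time multiplied by the speed of light, and a sweep of such ranges at known bearings becomes the 2-D point set that a SLAM pipeline consumes. A small sketch of just that conversion (the function names are illustrative, not any vendor's API):

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds):
    """Distance to the reflecting surface: the pulse travels out and back, so divide by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def scan_to_points(ranges_m, bearings_rad):
    """Convert a planar lidar scan (range, bearing) into x/y points in the sensor frame."""
    return [(r * math.cos(a), r * math.sin(a)) for r, a in zip(ranges_m, bearings_rad)]

# A round trip of about 33.4 nanoseconds corresponds to roughly 5 meters.
print(round(range_from_time_of_flight(33.4e-9), 2))
```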

Most people’s experience with polynomial equations doesn’t extend much further than high school algebra and the quadratic formula. Still, these equations remain a foundational component of everything from calculating planetary orbits to computer programming. Although solving lower-order polynomials, where the variable x is raised to no more than the fourth power, is often straightforward, things get complicated once you reach powers of five or greater. For centuries, mathematicians accepted this as an inherent limit of their work, but not Norman Wildberger. According to his new approach, detailed in The American Mathematical Monthly, there is a more elegant way to handle higher-order polynomials: all you need to do is get rid of pesky notions like irrational numbers.

Babylonians first worked with degree-two polynomials around 1800 BCE, but it took until the 16th century for mathematicians to extend the concept to degree-three and degree-four equations using roots, also known as radicals. Progress stalled there for roughly another two centuries, with higher-degree examples stumping experts until 1832. That year, French mathematician Évariste Galois finally showed why this was such a problem: the underlying mathematical symmetry exploited by the established methods for lower-degree polynomials simply becomes too complicated at degree five or higher. For Galois, this meant there just wasn’t a general formula available for them.

Mathematicians have since developed approximate solutions, but they require integrating concepts like irrational numbers into the classical formula.
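In practice, that is exactly what happens for degree five and above: the roots are computed numerically to whatever precision is needed, even though they cannot be written exactly in radicals. A brief illustration using NumPy's standard root finder (the particular quintic is an arbitrary choice):

```python
import numpy as np

# x^5 - x - 1 = 0 is a degree-five polynomial with no solution in radicals.
coefficients = [1, 0, 0, 0, -1, -1]   # highest power first
roots = np.roots(coefficients)        # numerical approximation via companion-matrix eigenvalues

real_roots = roots[np.isclose(roots.imag, 0)].real
print(real_roots)  # approximately [1.1673], an irrational number
```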

For decades, neuroscientists have developed mathematical frameworks to explain how brain activity drives behavior in predictable, repetitive scenarios, such as while playing a game. These algorithms have not only described brain cell activity with remarkable precision but also helped develop artificial intelligence with superhuman achievements in specific tasks, such as playing Atari or Go.
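A prominent example of such a framework is temporal-difference learning, in which a reward-prediction error, the gap between the reward received and the reward expected, updates value estimates; versions of this rule appear both in models of dopamine signaling and in the agents that mastered Atari and Go. The sketch below is a minimal tabular TD(0) learner on a toy chain task; the environment and parameters are illustrative, not taken from the study.

```python
import random

def td0(episodes, n_states=5, alpha=0.1, gamma=0.95):
    """Tabular TD(0) on a toy chain: states 0..n_states-1, reward 1 on reaching the right end."""
    value = [0.0] * n_states
    for _ in range(episodes):
        state = 0
        while state < n_states - 1:
            next_state = state + random.choice([0, 1])    # either stay put or drift right
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Reward-prediction error: the quantity dopamine neurons are thought to signal.
            delta = reward + gamma * value[next_state] - value[state]
            value[state] += alpha * delta                  # nudge the estimate toward the target
            state = next_state
    return value

print([round(v, 2) for v in td0(2000)])  # values rise toward the rewarded end of the chain
```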

Yet these frameworks fall short of capturing the essence of human and animal behavior: our extraordinary ability to generalize, infer and adapt. Our study, published in Nature late last year, provides insights into how brain cells in mice enable this more complex, intelligent behavior.

Unlike machines, humans and animals can flexibly navigate new challenges. Every day, we solve new problems by generalizing from our knowledge or drawing from our experiences. We cook new recipes, meet new people, take a new path—and we can imagine the aftermath of entirely novel choices.

A UNSW Sydney mathematician has discovered a new method to tackle algebra’s oldest challenge—solving higher polynomial equations.

Polynomials are equations involving a variable raised to powers, such as the degree two polynomial: 1 + 4x – 3x² = 0.
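For that degree-two example the quadratic formula already gives exact closed-form roots; a quick worked check, with a = −3, b = 4, c = 1:

```latex
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}
  = \frac{-4 \pm \sqrt{16 + 12}}{-6}
  = \frac{2 \mp \sqrt{7}}{3}
  \approx -0.215 \ \text{or} \ 1.549
```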

The equations are fundamental to math as well as science, where they have broad applications, like helping describe the movement of planets or writing computer programs.

The quantum black hole with (almost) no equations, by Professor Gerard ’t Hooft.

How to reconcile Einstein’s theory of General Relativity with Quantum Mechanics is a notorious problem. Special relativity, on the other hand, was united completely with quantum mechanics when the Standard Model, including the Higgs mechanism, was formulated as a relativistic quantum field theory.

Ever since Stephen Hawking shed new light on quantum mechanical effects in black holes, it has been hoped that black holes might be used to obtain a more complete picture of Nature’s laws in that domain, but he arrived at claims that are difficult to use in this respect. Was he right? What happens to information sent into a black hole?

The discussion is not over. In this lecture it is shown that a mild conical singularity at the black hole horizon may be inevitable, and that it doubles the temperature of the quantum radiation emitted by a black hole. The situation is illustrated with only a few equations.
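For reference, the standard Hawking temperature of a black hole of mass M is the quantity that the conical-singularity argument would double:

```latex
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}},
\qquad \text{so the scenario described in the lecture corresponds to } 2\,T_{\mathrm{H}}.
```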

About the Higgs Lecture.

The Faculty of Natural, Mathematical & Engineering Sciences is delighted to present the Annual Higgs Lecture. The inaugural Annual Higgs Lecture was delivered in December 2012 by its name bearer, Professor Peter Higgs, who returned to King’s after graduating in 1950 with a first-class honours degree in Physics, and who famously predicted the Higgs Boson particle.

A quantum computer can solve certain optimization problems faster than classical supercomputers, a capability known as “quantum advantage” that a USC researcher has demonstrated in a paper recently published in Physical Review Letters.

The study shows how quantum annealing, a specialized form of quantum computing, outperforms the best current classical algorithms when searching for near-optimal solutions to complex problems.

“The way quantum annealing works is by finding low-energy states in quantum systems, which correspond to optimal or near-optimal solutions to the problems being solved,” said Daniel Lidar, corresponding author of the study and professor of electrical and computer engineering, chemistry, and physics and astronomy at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.
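The “low-energy states” in question are configurations of an Ising-type cost function: the lower the energy, the better the corresponding solution. The classical analogue below, simulated annealing on a small random Ising instance, is only a sketch of that correspondence; it is not the quantum algorithm or the benchmark used in the study.

```python
import math
import random

def ising_energy(spins, J, h):
    """E(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i; lower energy means a better solution."""
    n = len(spins)
    pair = -sum(J[i][j] * spins[i] * spins[j] for i in range(n) for j in range(i + 1, n))
    field = -sum(h[i] * spins[i] for i in range(n))
    return pair + field

def simulated_annealing(J, h, steps=5000, t_start=2.0, t_end=0.01):
    """Classical annealing: accept uphill moves with a probability that shrinks as the temperature cools."""
    n = len(h)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, J, h)
    for k in range(steps):
        temp = t_start * (t_end / t_start) ** (k / steps)  # geometric cooling schedule
        i = random.randrange(n)
        spins[i] *= -1                                     # propose flipping one spin
        new_energy = ising_energy(spins, J, h)
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy                            # accept the move
        else:
            spins[i] *= -1                                 # reject: undo the flip
    return spins, energy

random.seed(1)
n = 12
J = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
h = [random.uniform(-1, 1) for _ in range(n)]
print(simulated_annealing(J, h))
```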