The second law of thermodynamics delineates an asymmetry in how physical systems evolve over time, known as the arrow of time. In macroscopic systems, this asymmetry has a clear direction (e.g., one can easily notice if a video showing a system’s evolution over time is being played normally or backward).

In the microscopic world, however, this direction is not always apparent. In fact, fluctuations in microscopic systems can lead to clear violations of the second law, causing the arrow of time to become blurry and less defined. As a result, when watching a video of a microscopic process, it can be difficult, if not impossible, to determine whether it is being played normally or backward.

Researchers at the University of Maryland have developed an algorithm that can infer the direction of the thermodynamic arrow of time in both macroscopic and microscopic processes. The algorithm, presented in a paper published in Nature Physics, could ultimately help to uncover new physical principles related to thermodynamics.
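
As a rough illustration of the general idea, and not the authors' model, one can train an off-the-shelf classifier to tell forward from time-reversed trajectories of a simple driven system. In the sketch below, the trap stiffness, driving speed, noise level and classifier choice are all arbitrary demo assumptions.

```python
# Toy illustration (not the Nature Physics model): classify whether a
# trajectory of a driven overdamped particle is played forward or backward.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def forward_trajectory(steps=100, dt=0.01, k=1.0, v_trap=2.0, noise=0.5):
    """Overdamped particle in a harmonic trap whose centre moves at v_trap."""
    x = np.zeros(steps)
    for t in range(1, steps):
        trap = v_trap * t * dt
        drift = -k * (x[t - 1] - trap) * dt
        x[t] = x[t - 1] + drift + noise * np.sqrt(dt) * rng.standard_normal()
    return x

# Build a dataset: label 1 = played forward, label 0 = played backward.
forward = np.array([forward_trajectory() for _ in range(2000)])
backward = forward[:, ::-1]                     # time-reverse the same "movies"
X = np.vstack([forward, backward])
y = np.concatenate([np.ones(len(forward)), np.zeros(len(backward))])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("arrow-of-time accuracy:", clf.score(X_test, y_test))
```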

So now, there are AI doctors.

Machine learning is taking medical diagnosis by storm. From eye disease to breast and other cancers to more amorphous neurological disorders, AI is routinely matching physicians' performance, if not beating them outright.

Yet how much can we take those results at face value? When it comes to life-and-death decisions, when can we put our full trust in enigmatic algorithms, "black boxes" that even their creators cannot fully explain or understand? The problem gets more complex as medical AI crosses multiple disciplines and developers, including both academic and industry powerhouses such as Google, Amazon, and Apple, with disparate incentives.

This week, the two sides battled it out in a heated duel in one of the most prestigious science journals, Nature. On one side are prominent AI researchers at the Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard, MIT, and others. On the other side is the titan Google Health.

Now Karlin, Klein and Oveis Gharan have proved that an algorithm devised a decade ago beats Christofides’ 50 percent factor, though they were only able to subtract 0.2 billionth of a trillionth of a trillionth of a percent. Yet this minuscule improvement breaks through both a theoretical logjam and a psychological one. Researchers hope that it will open the floodgates to further improvements.

“This is a result I have wanted all my career,” said David Williamson of Cornell University, who has been studying the traveling salesperson problem since the 1980s.

The traveling salesperson problem is one of a handful of foundational problems that theoretical computer scientists turn to again and again to test the limits of efficient computation. The new result “is the first step towards showing that the frontiers of efficient computation are in fact better than what we thought,” Williamson said.
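
For context, the Christofides algorithm behind that 50 percent guarantee is short enough to sketch. The snippet below is a minimal illustration, assuming a complete weighted graph whose edge weights satisfy the triangle inequality; `christofides_tour` is an illustrative helper name, and only standard networkx calls are used.

```python
# Minimal sketch of Christofides' 3/2-approximation (the "50 percent factor"
# mentioned above), assuming a complete graph with metric edge weights.
import random
import networkx as nx

def christofides_tour(G, weight="weight"):
    # 1. Minimum spanning tree of the metric graph.
    mst = nx.minimum_spanning_tree(G, weight=weight)

    # 2. Minimum-weight perfect matching on the MST's odd-degree vertices.
    odd = [v for v, d in mst.degree() if d % 2 == 1]
    matching = nx.min_weight_matching(G.subgraph(odd), weight=weight)

    # 3. Combine MST and matching; every vertex now has even degree.
    multigraph = nx.MultiGraph(mst)
    multigraph.add_edges_from(matching)

    # 4. Shortcut an Eulerian circuit into a Hamiltonian tour.
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(multigraph):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    tour.append(tour[0])
    return tour

# Example: complete graph on random points in the plane (Euclidean = metric).
pts = [(random.random(), random.random()) for _ in range(10)]
G = nx.complete_graph(len(pts))
for u, v in G.edges():
    G[u][v]["weight"] = ((pts[u][0] - pts[v][0]) ** 2
                         + (pts[u][1] - pts[v][1]) ** 2) ** 0.5
print(christofides_tour(G))
```

The triangle-inequality assumption is what makes the shortcutting step safe: skipping already-visited vertices can never lengthen the tour.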

Recently, researchers from the Institute of Intelligent Machines developed a new wavelength selection algorithm that combines a combined moving window (CMW) approach with variable-dimension particle swarm optimization (VDPSO).

CMW retains the advantages of the moving window algorithm while allowing different windows to overlap, so the width and number of spectral intervals can be optimized automatically. VDPSO improves on the traditional particle swarm optimization (PSO) algorithm.

The new algorithm, called VDPSO-CMW, can search the data space across different dimensions, reducing the risk of becoming trapped in local extrema and of overfitting.
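
As background for readers unfamiliar with PSO, the sketch below shows the standard velocity and position update that VDPSO builds on; it is not the VDPSO-CMW algorithm itself, and `pso_minimize` and its parameter values are illustrative choices.

```python
# Standard particle swarm optimization (PSO) update, shown only as background
# for the VDPSO variant described above; this is NOT the VDPSO-CMW algorithm.
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # per-particle best
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()           # swarm-wide best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity blends inertia, attraction to personal best, and to swarm best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize a simple quadratic in 5 dimensions.
best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=5)
print(best_val)
```

VDPSO's twist, as described above, is to let the particles also vary the dimensionality of the search, rather than fixing `dim` in advance.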

A team of researchers affiliated with a host of institutions in Korea and one in Estonia has found a way to use math to study paintings to learn more about the evolution of art history in the western world. In their paper published in Proceedings of the National Academy of Sciences, the group describes how they scanned thousands of paintings and then used mathematical algorithms to find commonalities between them over time.

Beauty, as the saying goes, is in the eye of the beholder, and so it is also with art. Two people looking at the same work of art can walk away with vastly different impressions. But art also serves, the researchers contend, as a barometer for visualizing the emotional tone of a given society. This suggests that the study of art history can serve as a channel of sorts, illuminating societal trends over time. The researchers further note that, to date, most studies of art history have been qualitatively based, which has led to interpretive results. To overcome such bias, the researchers behind this new effort looked to mathematics to see if it might be useful in uncovering features of paintings that have been overlooked by human scholars.

The work involved digitally scanning 14,912 paintings, all of which (except for two) were painted by Western artists. The data for each of the paintings was then sent through a mathematical algorithm that drew partitions on the painting based on contrasting colors. The researchers ran the algorithm on each painting multiple times, each time creating more partitions. As an example, the first run of the algorithm might have simply created two partitions on a painting: everything on land and everything in the sky. The second might have split the land into buildings in one partition and farmland in another.
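
The sketch below is a toy version of that kind of contrast-based partitioning, not the authors' published algorithm: each level of recursion splits a region at the horizontal or vertical cut that maximizes the difference in mean color between the two halves. The helper names and the random test image are assumptions for the demo.

```python
# Toy sketch (not the authors' published method): recursively split an image
# region at the cut that maximizes colour contrast between the two halves.
import numpy as np

def best_split(region):
    """Return (axis, index) of the cut maximizing mean-colour contrast."""
    best = (0, 1, -1.0)
    for axis in (0, 1):                       # 0 = horizontal cut, 1 = vertical cut
        for i in range(1, region.shape[axis]):
            a, b = np.split(region, [i], axis=axis)
            score = np.linalg.norm(a.mean(axis=(0, 1)) - b.mean(axis=(0, 1)))
            if score > best[2]:
                best = (axis, i, score)
    return best[0], best[1]

def partition(region, depth):
    """Split recursively; each extra level of depth creates more partitions."""
    if depth == 0 or min(region.shape[:2]) < 2:
        return [region]
    axis, i = best_split(region)
    a, b = np.split(region, [i], axis=axis)
    return partition(a, depth - 1) + partition(b, depth - 1)

# Example: a random 64x64 RGB "painting", partitioned to depth 2.
image = np.random.default_rng(0).random((64, 64, 3))
pieces = partition(image, depth=2)
print(len(pieces), "partitions")
```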

Humans are innately able to adapt their behavior and actions according to the movements of other humans in their surroundings. For instance, human drivers may suddenly stop, slow down, steer or start their car based on the actions of other drivers, pedestrians or cyclists, as they have a sense of which maneuvers are risky in specific scenarios.

However, developing robots and autonomous vehicles that can similarly predict movements and assess the risk of performing different actions in a given scenario has so far proved highly challenging. This has resulted in a number of accidents, including the tragic death of a pedestrian who was struck by a self-driving Uber vehicle in March 2018.

Researchers at Stanford University and the Toyota Research Institute (TRI) have recently developed a framework that could help prevent such accidents in the future, increasing the safety of autonomous vehicles and other robotic systems operating in crowded environments. This framework, presented in a paper pre-published on arXiv, combines two tools to achieve risk-sensitive control.
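
As general background rather than the paper's specific formulation, risk-sensitive control commonly replaces an expected cost with a risk measure that weights rare bad outcomes more heavily. The sketch below scores candidate actions by the conditional value-at-risk (CVaR) of sampled trajectory costs; `cvar`, `pick_action` and the cost samplers are hypothetical names and numbers chosen for illustration.

```python
# General background, not the Stanford/TRI formulation: choose the action whose
# sampled trajectory costs have the lowest conditional value-at-risk (CVaR),
# i.e. the lowest average cost over the worst (1 - alpha) fraction of outcomes.
import numpy as np

def cvar(costs, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of sampled costs."""
    costs = np.sort(np.asarray(costs))
    tail_start = int(np.floor(alpha * len(costs)))
    return costs[tail_start:].mean()

def pick_action(candidate_actions, sample_costs, alpha=0.95):
    """sample_costs(action) -> array of costs under sampled pedestrian futures."""
    risks = [cvar(sample_costs(a), alpha) for a in candidate_actions]
    return candidate_actions[int(np.argmin(risks))]

# Hypothetical cost samplers: 'brake' is usually slower but never catastrophic,
# 'go' is usually cheaper but occasionally very costly.
rng = np.random.default_rng(0)
samplers = {
    "brake": lambda: rng.normal(2.0, 0.2, size=1000),
    "go": lambda: np.where(rng.random(1000) < 0.02, 30.0,
                           rng.normal(1.0, 0.2, 1000)),
}
action = pick_action(["brake", "go"], lambda a: samplers[a]())
print("risk-sensitive choice:", action)   # a plain expected-cost comparison
                                          # would typically favour 'go'
```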