Archive for the ‘information science’ category: Page 149

Dec 25, 2020

How ‘spooky’ is quantum physics? The answer could be incalculable

Posted by in categories: information science, mathematics, quantum physics

Proof at the nexus of pure mathematics and algorithms puts ‘quantum weirdness’ on a whole new level.

Dec 22, 2020

LUCIDGames: A technique to plan adaptive trajectories for autonomous vehicles

Posted by in categories: information science, robotics/AI, transportation

While many self-driving vehicles have achieved remarkable performance in simulations or initial trials, when tested on real streets, they are often unable to adapt their trajectories or movements based on those of other vehicles or agents in their surroundings. This is particularly true in situations that require a certain degree of negotiation, for instance, at intersections or on streets with multiple lanes.

Researchers at Stanford University recently created LUCIDGames, a technique that can predict and plan adaptive trajectories for autonomous vehicles. This technique, presented in a paper pre-published on arXiv, integrates a game-theory-based algorithm with an estimation method.
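The negotiation behavior described above is often handled with an iterative best-response scheme: each agent repeatedly re-optimizes its own trajectory while holding the other agents' current plans fixed, until the plans stop changing (an approximate Nash equilibrium). The toy sketch below is not the LUCIDGames algorithm itself; the one-dimensional cost terms, weights, and function names are all illustrative.

```python
import numpy as np

def best_response(traj, others, goal, safety=1.0, w_avoid=4.0, steps=300, lr=0.1):
    """Gradient-descend one agent's 1-D lateral trajectory: reach `goal`
    by the final step, stay smooth, and keep `safety` distance from the
    other agents' current trajectories."""
    traj = traj.copy()
    for _ in range(steps):
        grad = np.zeros_like(traj)
        grad[-1] += 2.0 * (traj[-1] - goal)        # terminal goal cost
        grad[1:] += 2.0 * (traj[1:] - traj[:-1])   # smoothness penalty
        grad[:-1] -= 2.0 * (traj[1:] - traj[:-1])
        for other in others:                       # hinge collision penalty
            d = traj - other
            close = np.abs(d) < safety
            grad[close] -= w_avoid * np.sign(d[close])
        traj -= lr * grad
    return traj

# Two agents start overlapped and negotiate toward opposite lanes.
T = 20
trajs = [np.zeros(T), np.zeros(T)]
goals = [1.0, -1.0]
for _ in range(10):                                # iterative best response
    for i in range(2):
        others = [trajs[j] for j in range(2) if j != i]
        trajs[i] = best_response(trajs[i], others, goals[i])
```

After a few rounds the two plans settle into separated trajectories; LUCIDGames additionally estimates the other agents' unknown objectives online, which this sketch omits.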


Dec 21, 2020

Artificial intelligence solves Schrödinger’s equation

Posted by in categories: chemistry, information science, mathematics, particle physics, quantum physics, robotics/AI, space

A team of scientists at Freie Universität Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrödinger equation in quantum chemistry. The goal of quantum chemistry is to predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space, avoiding the need for resource-intensive and time-consuming laboratory experiments. In principle, this can be achieved by solving the Schrödinger equation, but in practice this is extremely difficult.

Up to now, it has been impossible to find an exact solution for arbitrary molecules that can be efficiently computed. But the team at Freie Universität has developed a deep learning method that can achieve an unprecedented combination of accuracy and computational efficiency. AI has transformed many technological and scientific areas, from computer vision to materials science. “We believe that our approach may significantly impact the future of quantum chemistry,” says Professor Frank Noé, who led the team effort. The results were published in the journal Nature Chemistry.

Central to both quantum chemistry and the Schrödinger equation is the wave function—a mathematical object that completely specifies the behavior of the electrons in a molecule. The wave function is a high-dimensional entity, and it is therefore extremely difficult to capture all the nuances that encode how the individual electrons affect each other. Many methods of quantum chemistry in fact give up on expressing the wave function altogether, instead attempting only to determine the energy of a given molecule. This however requires approximations to be made, limiting the prediction quality of such methods.
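The approach described above can be made concrete with the variational principle that underlies neural wave-function methods: a trial wave function with adjustable parameters is optimized so that its energy expectation approaches the true ground-state energy from above. In the sketch below (notation mine, not taken from the paper), $\hat{H}$ is the molecular Hamiltonian, $\psi_\theta$ a neural-network ansatz with parameters $\theta$, and $E_0$ the ground-state energy:

```latex
E[\psi_\theta] \;=\; \frac{\langle \psi_\theta \,|\, \hat{H} \,|\, \psi_\theta \rangle}{\langle \psi_\theta \,|\, \psi_\theta \rangle} \;\ge\; E_0,
\qquad
\theta^{*} \;=\; \arg\min_\theta \, E[\psi_\theta]
```

Because the inequality holds for any trial function, lowering $E[\psi_\theta]$ by gradient descent can only move the estimate toward the true ground state.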

Dec 20, 2020

Wall Street’s latest shiny new thing: quantum computing

Posted by in categories: economics, finance, information science, quantum physics, robotics/AI

THE FINANCE industry has had a long and profitable relationship with computing. It was an early adopter of everything from mainframe computers to artificial intelligence (see timeline). For most of the past decade more trades have been done at high frequency by complex algorithms than by humans. Now big banks have their eyes on quantum computing, another cutting-edge technology.


A fundamentally new kind of computing will shake up finance—the question is when.

Finance & economics Dec 19th 2020 edition.


Dec 19, 2020

Left of Launch: Artificial Intelligence at the Nuclear Nexus

Posted by in categories: information science, military, policy, robotics/AI, space, surveillance

Popular media and policy-oriented discussions on the incorporation of artificial intelligence (AI) into nuclear weapons systems frequently focus on matters of launch authority—that is, whether AI, especially machine learning (ML) capabilities, should be incorporated into the decision to use nuclear weapons and thereby reduce the role of human control in the decisionmaking process. This is a future we should avoid. Yet while the extreme case of automating nuclear weapons use is high stakes, and thus existential to get right, there are many other areas of potential AI adoption into the nuclear enterprise that require assessment. Moreover, as the conventional military moves rapidly to adopt AI tools in a host of mission areas, the overlapping consequences for the nuclear mission space, including in nuclear command, control, and communications (NC3), may be underappreciated.

AI may be used in ways that do not directly involve or are not immediately recognizable to senior decisionmakers. These areas of AI application are far left of an operational decision or decision to launch and include four priority sectors: security and defense; intelligence activities and indications and warning; modeling and simulation, optimization, and data analytics; and logistics and maintenance. Given the rapid pace of development, even if algorithms are not used to launch nuclear weapons, ML could shape the design of the next-generation ballistic missile or be embedded in the underlying logistics infrastructure. ML vision models may undergird the intelligence process that detects the movement of adversary mobile missile launchers and optimize the tipping and cueing of overhead surveillance assets, even as a human decisionmaker remains firmly in the loop in any ultimate decisions about nuclear use. Understanding and navigating these developments in the context of nuclear deterrence and the understanding of escalation risks will require the analytical attention of the nuclear community and likely the adoption of risk management approaches, especially where the exclusion of AI is not reasonable or feasible.

Dec 16, 2020

Ultracold atoms reveal a new type of quantum magnetic behavior

Posted by in categories: information science, particle physics, quantum physics

A new study illuminates surprising choreography among spinning atoms. In a paper appearing in the journal Nature, researchers from MIT and Harvard University reveal how magnetic forces at the quantum, atomic scale affect how atoms orient their spins.

In experiments with ultracold lithium atoms, the researchers observed different ways in which the spins of the atoms evolve. Like tippy ballerinas pirouetting back to upright positions, the spinning atoms return to an equilibrium orientation in a way that depends on the magnetic forces between individual atoms. For example, the atoms can spin into equilibrium in an extremely fast, “ballistic” fashion or in a slower, more diffuse pattern.

The researchers found that these behaviors, which had not been observed until now, could be described mathematically by the Heisenberg model, a set of equations commonly used to predict magnetic behavior. Their results address the fundamental nature of magnetism, revealing a diversity of behavior in one of the simplest magnetic materials.
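For reference, the Heisenberg model mentioned above is usually written as a sum of pairwise spin couplings (sign conventions and which pairs are summed vary by author):

```latex
\hat{H} \;=\; -J \sum_{\langle i,j \rangle} \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j
```

where $\hat{\mathbf{S}}_i$ is the spin operator on site $i$, the sum runs over neighboring pairs, and the coupling constant $J$ sets the strength of the magnetic interaction (ferromagnetic for $J > 0$ in this convention).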

Dec 16, 2020

Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification

Posted by in categories: information science, neuroscience

This work develops PSID, a dynamic modeling method to dissociate and prioritize neural dynamics relevant to a given behavior.

Dec 14, 2020

Software developers: How plans to automate coding could mean big changes ahead

Posted by in categories: information science, robotics/AI

A team of researchers from MIT and Intel has created an algorithm that can create algorithms. In the long term, that could radically change the role of software developers.

Dec 14, 2020

Improving portraits

Posted by in categories: information science, mobile phones

Recently, Google introduced Portrait Light, a feature on its Pixel phones that can be used to enhance portraits by adding an external light source not present at the time the photo was taken. In a new blog post, Google explains how they made this possible.

In their post, engineers at Google Research note that professional photographers discovered long ago that the best way to make people look their best in portraits is by using secondary flash devices that are not attached to the camera. Such flash devices can be positioned by the photographer prior to photographing a subject, taking into account the direction the subject’s face is pointing, the other light available, skin tone and other factors. Google has attempted to capture those factors with its new portrait-enhancing feature. The system does not require the camera operator to use another light source. Instead, the software simply pretends that there was another light source all along, and then allows the user to determine the most flattering configuration for the subject.

The engineers explain they achieved this feat using two algorithms. The first, which they call automatic directional light placement, places synthetic light into the scene as a professional photographer would. The second algorithm is called synthetic post-capture relighting. It allows for repositioning the light after the fact in a realistic and natural-looking way.
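The relighting idea can be illustrated, in a highly simplified form, with Lambertian shading: given per-pixel surface normals and a base-color (albedo) estimate, a synthetic directional light contributes a diffuse term proportional to the cosine between each normal and the light direction. This is a toy sketch under those assumptions, not Google’s actual learned pipeline; the function and parameter names are illustrative.

```python
import numpy as np

def add_synthetic_light(albedo, normals, light_dir, intensity=0.5):
    """Add a diffuse (Lambertian) contribution from a synthetic
    directional light to an image.

    albedo:    (H, W, 3) base color in [0, 1]
    normals:   (H, W, 3) unit surface normals
    light_dir: (3,) direction pointing toward the light
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)                   # normalize the light direction
    ndotl = np.clip(normals @ l, 0.0, None)  # per-pixel cosine, back-faces clamped
    lit = albedo * (1.0 + intensity * ndotl[..., None])
    return np.clip(lit, 0.0, 1.0)

# Toy usage: a flat gray patch facing the camera, lit head-on.
albedo = np.full((2, 2, 3), 0.5)
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                        # all normals point at +z
out = add_synthetic_light(albedo, normals, light_dir=[0.0, 0.0, 1.0])
```

“Repositioning the light after the fact” then amounts to re-rendering with a different `light_dir`, which is what makes the post-capture step interactive.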

Dec 14, 2020

New Deep Learning Method Helps Robots Become Jacks-of-all-Trades

Posted by in categories: information science, robotics/AI, transportation

Put a robot in a tightly-controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curve ball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.

The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.
