
AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation

Researchers in Italy have melded the emerging science of convolutional neural networks (CNNs) with deep learning, a discipline within artificial intelligence, to achieve a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios. The team, led by Prof. Silvio Barra at the University of Cagliari, published its findings in the IEEE/CAA Journal of Automatica Sinica.

The University of Cagliari-based team set out to create an AI-managed “buy and hold” (B&H) strategy, a system that decides which of three possible actions to take on a given day: a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (deciding not to invest in a stock that day). At the heart of their proposed system is an automated cycle that analyzes layered images generated from current and past market data. Older B&H systems based their decisions on machine learning, a discipline that leans heavily on predictions drawn from past performance.

By letting their proposed network analyze current data layered over past data, the researchers take market forecasting a step further, allowing for a type of learning that more closely mirrors the intuition of a seasoned investor than the fixed rules of a robot. The network can adjust its buy/sell thresholds based on what is happening both in the present moment and in the past. Taking present-day factors into account increases the yield over both random guessing and trading algorithms that are not capable of real-time learning.
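The paper's exact network and image encoding are not reproduced here, but the basic pattern is a small image classifier with three outputs. Below is a minimal sketch, assuming the layered market history arrives as a multi-channel image and that the long/short/hold decision is made by thresholding the classifier's probabilities; the architecture, channel count, and thresholds are illustrative assumptions, not the authors' published design.

```python
# Illustrative sketch only: a small convolutional classifier that maps a
# multi-channel "market image" (layered current and past data) to one of
# three actions. Architecture, shapes, and thresholds are assumptions,
# not the published network.
import torch
import torch.nn as nn

class ActionCNN(nn.Module):
    def __init__(self, in_channels=4, image_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * (image_size // 4) ** 2, 3)  # long / short / hold

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

def decide(model, image, long_thr=0.5, short_thr=0.5):
    """Pick an action; the thresholds can be tuned as market conditions change."""
    probs = torch.softmax(model(image.unsqueeze(0)), dim=1)[0]
    p_long, p_short, _ = probs.tolist()
    if p_long >= long_thr and p_long >= p_short:
        return "long"
    if p_short >= short_thr:
        return "short"
    return "hold"

model = ActionCNN()
action = decide(model, torch.randn(4, 32, 32))    # random placeholder "market image"
print(action)
```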

100-Year-Old Physics Problem Finally Solved – Accurately Predicts Transmission of Infectious Diseases

A Bristol academic has achieved a milestone in statistical/mathematical physics by solving a 100-year-old physics problem – the discrete diffusion equation in finite space.

The long-sought-after solution could be used to accurately predict encounter and transmission probability between individuals in a closed environment, without the need for time-consuming computer simulations.

In his paper, published in Physical Review X, Dr. Luca Giuggioli from the Department of Engineering Mathematics at the University of Bristol describes how to analytically calculate the probability of occupation (in discrete time and discrete space) of a diffusing particle or entity in a confined space – something that until now was only possible computationally.
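The closed-form expression itself is not reproduced here, but the quantity it describes is easy to state: the probability that a walker confined to a finite lattice occupies each site after a given number of discrete time steps. The sketch below computes that occupation probability numerically for a simple one-dimensional lazy random walk with reflecting walls, the kind of brute-force propagation the analytic solution makes unnecessary; the lattice size, hop probability, and boundary rule are illustrative assumptions.

```python
# Numerical baseline only: propagate the occupation probability of a simple
# discrete-time random walker on a finite 1D lattice with reflecting walls.
# The paper gives this probability in closed form; that formula is not
# reproduced here. Lattice size, hop probability, and boundary rule are
# illustrative assumptions.
import numpy as np

def occupation_probability(n_sites=20, start=5, steps=50, hop=0.5):
    """P(site, t) for a walker that hops left or right with probability hop/2
    each and otherwise stays put; reflecting boundaries keep it confined."""
    p = np.zeros(n_sites)
    p[start] = 1.0
    history = [p.copy()]
    for _ in range(steps):
        new = (1.0 - hop) * p
        new[1:] += 0.5 * hop * p[:-1]      # mass hopping right
        new[:-1] += 0.5 * hop * p[1:]      # mass hopping left
        new[0] += 0.5 * hop * p[0]         # reflected at the left wall
        new[-1] += 0.5 * hop * p[-1]       # reflected at the right wall
        p = new
        history.append(p.copy())
    return np.array(history)               # shape (steps + 1, n_sites); rows sum to 1

probs = occupation_probability()
print(probs[-1].round(3))                  # long-time distribution approaches uniform
```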

Self-driving laboratory for accelerated discovery of thin-film materials

Discovering and optimizing commercially viable materials for clean energy applications typically takes more than a decade. Self-driving laboratories that iteratively design, execute, and learn from materials science experiments in a fully autonomous loop present an opportunity to accelerate this research process. We report here a modular robotic platform driven by a model-based optimization algorithm capable of autonomously optimizing the optical and electronic properties of thin-film materials by modifying the film composition and processing conditions. We demonstrate the power of this platform by using it to maximize the hole mobility of organic hole transport materials commonly used in perovskite solar cells and consumer electronics. This demonstration highlights the possibilities of using autonomous laboratories to discover organic and inorganic materials relevant to materials sciences and clean energy technologies.
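As a rough picture of what such a design-execute-learn loop looks like in code, here is a minimal sketch assuming a Gaussian-process surrogate and an upper-confidence-bound rule pick the next processing condition to try; the measurement function, parameter range, and acquisition rule are placeholders rather than the platform's actual implementation.

```python
# Minimal sketch of a design-execute-learn loop, assuming a Gaussian-process
# surrogate and a simple upper-confidence-bound rule choose the next film
# processing condition. measure_mobility() stands in for the robotic
# synthesis and characterization step and is entirely hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def measure_mobility(annealing_temp_c):
    """Placeholder 'experiment': pretend mobility peaks near 150 C."""
    return -((annealing_temp_c - 150.0) / 60.0) ** 2 + np.random.normal(0, 0.02)

candidates = np.linspace(100.0, 250.0, 151).reshape(-1, 1)   # search space (deg C)
X = [[120.0], [200.0]]                                       # two seed experiments
y = [measure_mobility(x[0]) for x in X]

for _ in range(20):                                          # autonomous loop
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    nxt = float(candidates[np.argmax(mean + 1.0 * std)][0])  # UCB acquisition
    X.append([nxt])
    y.append(measure_mobility(nxt))                          # "run" the experiment

best = X[int(np.argmax(y))][0]
print(f"Best annealing temperature found: {best:.1f} C")
```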

Optimizing the properties of thin films is time intensive because of the large number of compositional, deposition, and processing parameters available (1, 2). These parameters are often correlated and can have a profound effect on the structure and physical properties of the film and any adjacent layers present in a device. There exist few computational tools for predicting the properties of materials with compositional and structural disorder, and thus, the materials discovery process still relies heavily on empirical data. High-throughput experimentation (HTE) is an established method for sampling a large parameter space (4, 5), but it is still nearly impossible to sample the full set of combinatorial parameters available for thin films. Parallelized methodologies are also constrained by the experimental techniques that can be used effectively in practice.

Scientists Use Artificial Intelligence and Computer Vision to Study Lithium-Ion Batteries

New machine learning methods bring insights into how lithium-ion batteries degrade, and show the process is more complicated than many thought.

Lithium-ion batteries lose their juice over time, causing scientists and engineers to work hard to understand that process in detail. Now, scientists at the Department of Energy’s SLAC National Accelerator Laboratory have combined sophisticated machine learning algorithms with X-ray tomography data to produce a detailed picture of how one battery component, the cathode, degrades with use.

The new study, published this month in Nature Communications, focused on how to better visualize what’s going on in cathodes made of nickel-manganese-cobalt, or NMC. In these cathodes, NMC particles are held together by a conductive carbon matrix, and researchers have speculated that one cause of performance decline could be particles breaking away from that matrix. The team’s goal was to combine cutting-edge capabilities at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and the European Synchrotron Radiation Facility (ESRF) to develop a comprehensive picture of how NMC particles break apart and break away from the matrix and how that might contribute to performance losses.
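The study's learned segmentation pipeline is considerably more sophisticated, but the basic computer-vision step it builds on, separating particle regions from the surrounding matrix in a tomography slice and counting them, can be sketched simply; the synthetic slice and threshold below are illustrative assumptions.

```python
# Illustrative only: segment bright particle regions from a single grayscale
# tomography slice with a threshold plus connected-component labeling, then
# report per-particle sizes. The published work uses learned, far more
# sophisticated segmentation; the slice and threshold here are synthetic.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
slice_img = rng.random((256, 256)) * 0.5            # stand-in for the carbon matrix
slice_img[60:90, 60:90] += 1.0                       # fake "particles" brighter than matrix
slice_img[150:170, 180:210] += 1.0

mask = slice_img > 0.8                               # intensity threshold (assumed)
labels, n_particles = ndimage.label(mask)            # connected components = candidate particles
sizes = ndimage.sum(mask, labels, index=range(1, n_particles + 1))

print(f"{n_particles} candidate particles; pixel areas: {sizes}")
```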

What Do the Quark Oddities at the Large Hadron Collider Mean?

In their latest analysis, first presented at a seminar in March, the LHCb physicists found that several measurements involving the decay of B mesons conflict slightly with the predictions of the standard model of particle physics—the reigning set of equations describing the subatomic world. Taken alone, each oddity looks like a statistical fluctuation, and they may all evaporate with additional data, as has happened before. But their collective drift suggests that the aberrations may be breadcrumbs leading beyond the standard model to a more complete theory.

“For the first time in certainly my working life, there are a confluence of different decays that are showing anomalies that match up,” said Mitesh Patel, a particle physicist at Imperial College London who is part of LHCb.

The B meson is so named because it contains a bottom quark, one of six fundamental quark particles that account for most of the universe’s visible matter. For unknown reasons, the quarks come in three generations (heavy, medium, and light), each pairing two quarks of opposite electric charge. Heavier quarks decay into their lighter variations, almost always switching their charge, too. For instance, when the negatively charged heavy bottom quark in a B meson drops a generation, it usually becomes a middleweight, positively charged “charm” quark.
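For reference, the generation and charge structure described above can be written down compactly; the values are standard textbook quark charges, not anything specific to the LHCb analysis.

```python
# Quark generations and electric charges (in units of the proton charge).
# Each generation pairs one +2/3 quark with one -1/3 quark; a bottom quark
# (charge -1/3) dropping a generation typically becomes a charm quark (+2/3).
GENERATIONS = [
    {"up": +2 / 3, "down": -1 / 3},        # light
    {"charm": +2 / 3, "strange": -1 / 3},  # medium
    {"top": +2 / 3, "bottom": -1 / 3},     # heavy
]

bottom_charge = GENERATIONS[2]["bottom"]
charm_charge = GENERATIONS[1]["charm"]
print(f"b -> c decay changes charge from {bottom_charge:+.2f} to {charm_charge:+.2f}")
```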

Algorithm quickly simulates a roll of loaded dice

The fast and efficient generation of random numbers has long been an important challenge. For centuries, games of chance have relied on the roll of a die, the flip of a coin, or the shuffling of cards to bring some randomness into the proceedings. In the second half of the 20th century, computers started taking over that role, for applications in cryptography, statistics, and artificial intelligence, as well as for various simulations—climatic, epidemiological, financial, and so forth.
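The new algorithm itself is designed to be fast and frugal with random bits and is not reproduced here; for comparison, the sketch below samples a loaded die the straightforward way, by inverting the cumulative distribution, with weights chosen purely for illustration.

```python
# Baseline sampler only: draw from a loaded die by inverting the cumulative
# distribution. The paper's algorithm achieves the same result far more
# efficiently in time and random bits; these weights are just an example.
import random
from bisect import bisect

weights = [1, 1, 1, 1, 1, 5]                 # face 6 is five times as likely as the others
cumulative = []
total = 0
for w in weights:
    total += w
    cumulative.append(total)

def roll():
    """Return a face in 1..6 with probability proportional to its weight."""
    r = random.random() * total
    return bisect(cumulative, r) + 1

counts = [0] * 6
for _ in range(10_000):
    counts[roll() - 1] += 1
print(counts)                                # face 6 appears roughly five times as often
```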

OpenAI Finds Machine Learning Efficiency Is Outpacing Moore’s Law

Eight years ago a machine learning algorithm learned to identify a cat, and it stunned the world. A few years later AI could accurately translate languages and take down world champion Go players. Now, machine learning has begun to excel at complex multiplayer video games like Starcraft and Dota 2 and subtle games like poker. AI, it would appear, is improving fast.

But how fast is fast, and what’s driving the pace? While better computer chips are key, AI research organization OpenAI thinks we should measure the pace of improvement of the actual machine learning algorithms too.

In a blog post and paper, authored by OpenAI’s Danny Hernandez and Tom Brown and published on the arXiv, an open repository for preprint (not-yet-peer-reviewed) studies, the researchers say they’ve begun tracking a new measure of machine learning efficiency (that is, doing more with less). Using this measure, they show AI has been getting more efficient at a wicked pace.
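To make “doing more with less” concrete, the arithmetic behind such an efficiency measure is simple: if the compute needed to reach a fixed benchmark falls from one value to another over some number of years, the ratio is the efficiency gain and its logarithm gives a doubling time. The numbers in the sketch below are placeholders, not the figures reported in the paper.

```python
# Illustrative arithmetic only: if reaching a fixed benchmark once took
# compute_then and now takes compute_now, the efficiency gain is their ratio
# and the doubling time follows from log2 of that ratio. Numbers are
# placeholders, not the paper's measurements.
import math

compute_then = 1.0      # compute to hit a fixed score in the earlier year (arbitrary units)
compute_now = 0.025     # compute to hit the same score today (placeholder)
years_elapsed = 7.0

efficiency_gain = compute_then / compute_now            # "doing more with less"
doublings = math.log2(efficiency_gain)
doubling_time_months = 12.0 * years_elapsed / doublings

print(f"{efficiency_gain:.0f}x more efficient; doubles every "
      f"{doubling_time_months:.1f} months")
```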