Scientists have developed a new machine-learning platform that makes the algorithms that control particle beams and lasers smarter than ever before. Their work could help lead to the development of new and improved particle accelerators that will help scientists unlock the secrets of the subatomic world.

Daniele Filippetto and colleagues at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) developed the setup to automatically compensate in real time for changes to the beam and to accelerator components such as magnets. Their machine-learning approach is also better than contemporary beam-control systems at understanding why things fail and at using physics to formulate a response. A paper describing the research was published late last year in Scientific Reports.
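
The control idea is easier to see in miniature: fit a fast surrogate model that maps actuator settings (say, magnet currents) to measured beam parameters, then invert it to cancel an observed drift. The sketch below does this with a toy linear machine and least squares; the real system uses neural networks and physics-informed models, so everything here (the fake beam response, the drift and noise levels) is an illustrative assumption, not Berkeley Lab's actual setup.

```python
# Minimal sketch (illustrative only): learn a surrogate mapping from actuator
# settings (e.g. magnet currents) to beam observables, then invert it to
# compensate drift. A linear least-squares surrogate stands in for the
# neural-network and physics-informed models described in the article.
import numpy as np

rng = np.random.default_rng(0)

def measure_beam(settings, drift):
    """Stand-in for the real machine: hidden linear response plus drift/noise."""
    response = np.array([[1.2, -0.3], [0.4, 0.9]])  # assumed, unknown to us
    return response @ settings + drift + rng.normal(0, 0.01, 2)

# 1) System identification: probe the machine with random settings.
X = rng.normal(0, 1, (200, 2))
drift0 = np.array([0.05, -0.02])
Y = np.array([measure_beam(x, drift0) for x in X])
A, *_ = np.linalg.lstsq(X, Y, rcond=None)   # surrogate model: Y ≈ X @ A

# 2) Feedback: when the drift changes, solve for settings that restore target.
target = np.array([0.0, 0.0])
new_drift = np.array([0.12, 0.07])
baseline = measure_beam(np.zeros(2), new_drift)       # observe the deviation
correction = np.linalg.solve(A.T, target - baseline)  # invert the surrogate
print("corrected beam:", measure_beam(correction, new_drift))
```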

“We are trying to teach physics to a chip, while at the same time providing it with the wisdom and experience of a senior scientist operating the machine,” said Filippetto, a staff scientist in the Accelerator Technology & Applied Physics (ATAP) Division at Berkeley Lab and deputy director of the Berkeley Accelerator Controls and Instrumentation (BACI) program.

Circa 2021

Astrophysicist at Göttingen University discovers new theoretical hyper-fast soliton solutions.

If travel to distant stars within an individual’s lifetime is going to be possible, a means of faster-than-light propulsion will have to be found. To date, even recent research about superluminal (faster-than-light) transport based on Einstein’s theory of general relativity would require vast amounts of hypothetical particles and states of matter that have “exotic” physical properties such as negative energy density. This type of matter either cannot currently be found or cannot be manufactured in viable quantities. In contrast, new research carried out at the University of Göttingen gets around this problem by constructing a new class of hyper-fast ‘solitons’ using sources with only positive energies that can enable travel at any speed. This reignites debate about the possibility of faster-than-light travel based on conventional physics. The research is published in the journal Classical and Quantum Gravity.

The author of the paper, Dr. Erik Lentz, analyzed existing research and discovered gaps in previous ‘warp drive’ studies. Lentz noticed that there existed yet-to-be-explored configurations of space-time curvature organized into ‘solitons’ that have the potential to solve the puzzle while being physically viable. A soliton – in this context also informally referred to as a ‘warp bubble’ – is a compact wave that maintains its shape and moves at constant velocity. Lentz derived the Einstein equations for unexplored soliton configurations (where the space-time metric’s shift vector components obey a hyperbolic relation), finding that the altered space-time geometries could be formed in a way that worked even with conventional energy sources. In essence, the new method uses the very structure of space and time arranged in a soliton to provide a solution to faster-than-light travel, which – unlike other research – would only need sources with positive energy densities.
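
For readers who want the shape of the mathematics, a common way to write warp-drive geometries, and a simplified stand-in for the class Lentz studies (the exact ansatz in his paper differs), is the Alcubierre-type metric, in which a “shift vector” drags a bubble of flat space through the surrounding spacetime:

```latex
% Alcubierre-type warp metric (simplified illustration; Lentz's solitons use a
% related ansatz in which the shift-vector components satisfy a hyperbolic
% relation rather than this spherically symmetric profile).
\[
  ds^2 = -c^2\,dt^2 + \bigl(dx - v_s\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2
\]
% Here $v_s$ is the bubble's speed, $r_s$ the distance from the bubble centre,
% and $f(r_s)$ a smooth profile equal to 1 inside the bubble and 0 far away.
% The claimed advance is a soliton geometry whose sourcing stress-energy has
% $\rho \ge 0$ everywhere, avoiding the usual exotic-matter requirement.
```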

A machine-learning algorithm that includes a quantum circuit generates realistic handwritten digits and performs better than its classical counterpart.

Machine learning allows computers to recognize complex patterns such as faces and also to create new and realistic-looking examples of such patterns. Working toward improving these techniques, researchers have now given the first clear demonstration of a quantum algorithm performing well when generating these realistic examples, in this case, creating authentic-looking handwritten digits [1]. The researchers see the result as an important step toward building quantum devices able to go beyond the capabilities of classical machine learning.

The most common use of neural networks is classification—recognizing handwritten letters, for example. But researchers increasingly aim to use algorithms on more creative tasks such as generating new and realistic artworks, pieces of music, or human faces. These so-called generative neural networks can also be used in automated editing of photos—to remove unwanted details, such as rain.
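
As a concrete, heavily simplified illustration of what a quantum generator can look like, the sketch below uses the open-source PennyLane library to build a parameterized quantum circuit whose measurement statistics serve as the generator's output distribution; in a quantum GAN these parameters would be trained against a classical discriminator. The circuit layout, qubit count, and parameter choices are illustrative assumptions, not the architecture from the paper.

```python
# Toy parameterized quantum circuit acting as a "generator" (illustrative
# sketch only; the paper's quantum GAN architecture is more elaborate).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def generator(params):
    # Layers of trainable single-qubit rotations, each followed by an
    # entangling ring of CNOTs.
    for layer in params:
        for wire in range(n_qubits):
            qml.RY(layer[wire], wires=wire)
        for wire in range(n_qubits):
            qml.CNOT(wires=[wire, (wire + 1) % n_qubits])
    # The distribution over 2^n bitstrings plays the role of the generator's
    # samples (e.g. coarse pixel patterns of a digit).
    return qml.probs(wires=range(n_qubits))

params = np.random.uniform(0, 2 * np.pi, size=(2, n_qubits))
print(generator(params))  # 16 probabilities; training would push these
                          # toward the data distribution via a discriminator
```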

The bottomless bucket is Karl Marx’s utopian creed: “From each according to his ability, to each according to his needs.” In this idyllic world, everyone works for the good of society, with the fruits of their labor distributed freely — everyone taking what they need, and only what they need. We know how that worked out. When rewards are unrelated to effort, being a slacker is more appealing than being a worker. With more slackers than workers, not nearly enough is produced to satisfy everyone’s needs. A common joke in the Soviet Union was, “They pretend to pay us, and we pretend to work.”

In addition to helping those who in the great lottery of life have drawn blanks, governments should adopt myriad policies that expand the economic pie, including education, infrastructure, and the enforcement of laws and contracts. Public safety, national defense, and dealing with externalities are also important. There are many legitimate government activities, and there are inevitably tradeoffs. Governing a country is completely different from playing a simple, rigged distribution game.

I love computers. I use them every day — not just for word processing but for mathematical calculations, statistical analyses, and Monte Carlo simulations that would literally take me several lifetimes to do by hand. Computers have benefited and entertained all of us. However, AI is nowhere near ready to rule the world because computer algorithms do not have the intelligence, wisdom, or common sense required to make rational decisions.

A DeepMind research group conducted a comprehensive generalization study on neural network architectures in the paper ‘Neural Networks and the Chomsky Hierarchy’, which investigates whether insights from the theory of computation and the Chomsky hierarchy can predict the actual limitations of neural network generalization.

Developing powerful machine-learning models requires accurate generalization to out-of-distribution inputs. However, how and why neural networks can generalize on algorithmic sequence-prediction tasks remains unclear.

The research group performed a thorough generalization study of cutting-edge neural network architectures and memory-augmented neural networks, training more than 2,000 individual models on a battery of 16 sequence-prediction tasks that span all tiers of the Chomsky hierarchy which can be practically evaluated with finite-time computation.
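
To give a flavor of what such tasks look like, the sketch below generates two classic task types used in this kind of study: a regular-language task (parity of a bit string, solvable by a finite-state machine) and a context-free task (string reversal, which requires stack-like memory). Length generalization means training on short sequences and testing on much longer ones; the task names and length cutoffs here are illustrative, not the paper's exact benchmark.

```python
# Illustrative generators for sequence-prediction tasks at two Chomsky tiers.
# Train on short sequences, test on longer ones to probe length generalization.
import random

def parity_task(length):
    """Regular language: label is the XOR of all bits (finite-state solvable)."""
    bits = [random.randint(0, 1) for _ in range(length)]
    return bits, sum(bits) % 2

def reverse_task(length):
    """Context-free: target is the input reversed (needs stack-like memory)."""
    seq = [random.randint(0, 1) for _ in range(length)]
    return seq, seq[::-1]

train = [parity_task(random.randint(1, 40)) for _ in range(3)]   # short
test = [parity_task(random.randint(41, 500)) for _ in range(3)]  # much longer

for bits, label in train + test:
    print(f"len={len(bits):3d}  parity={label}")
print(reverse_task(5))  # e.g. ([1, 0, 0, 1, 1], [1, 1, 0, 0, 1])
```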

Using reinforcement learning (RL) to train robots directly in real-world environments has been considered impractical due to the huge number of trial-and-error interactions typically required before the agent finally gets it right. The use of deep RL in simulated environments has thus become the go-to alternative, but this approach is far from ideal: it requires designing simulated tasks and collecting expert demonstrations, simulations can fail to capture the complexities of real-world environments and are prone to inaccuracies, and the resulting robot behaviours will not adapt to real-world environmental changes.

The Dreamer algorithm proposed by Hafner et al. at ICLR 2020 introduced an RL agent capable of solving long-horizon tasks purely via latent imagination. Although Dreamer has demonstrated its potential for learning from small amounts of interaction in the compact state space of a learned world model, learning accurate real-world models remains challenging, and it was unknown whether Dreamer could enable faster learning on physical robots.
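
The core loop behind Dreamer-style agents can be sketched compactly: learn a latent dynamics model from real interaction, then act by rolling out trajectories entirely inside that model. The toy below substitutes a hand-coded 1-D environment, a linear one-step model, and a random-shooting planner for Dreamer's recurrent state-space model and actor-critic, so treat it as a structural sketch under strong simplifying assumptions, not the DayDreamer algorithm itself.

```python
# Structural sketch of "learning in imagination" (greatly simplified; Dreamer
# uses a recurrent state-space model and an actor-critic, not random shooting).
import numpy as np

rng = np.random.default_rng(0)

def env_step(state, action):
    """Stand-in real environment: noisy 1-D point mass, reward near origin."""
    next_state = state + 0.1 * action + rng.normal(0, 0.01)
    return next_state, -abs(next_state)

# 1) Collect real experience and fit a one-step world model (here: linear).
states, actions, next_states = [], [], []
s = 0.5
for _ in range(200):
    a = rng.uniform(-1, 1)
    s2, _ = env_step(s, a)
    states.append(s)
    actions.append(a)
    next_states.append(s2)
    s = s2
X = np.column_stack([states, actions])
w, *_ = np.linalg.lstsq(X, np.array(next_states), rcond=None)

def imagine(state, action):          # learned model, used instead of the robot
    return w[0] * state + w[1] * action

# 2) Act by imagining rollouts inside the model (no extra real trials needed).
def plan(state, horizon=5, candidates=64):
    best_a, best_r = 0.0, -np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, horizon)
        sim, total = state, 0.0
        for a in seq:
            sim = imagine(sim, a)
            total += -abs(sim)       # imagined reward
        if total > best_r:
            best_a, best_r = seq[0], total
    return best_a

s = 0.5
for t in range(10):
    s, r = env_step(s, plan(s))
    print(f"step {t}: state={s:+.3f} reward={r:+.3f}")
```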

In the new paper DayDreamer: World Models for Physical Robot Learning, Hafner and a research team from the University of California, Berkeley leverage recent advances in the Dreamer world model to enable online RL for robot training without simulators or demonstrations. The novel approach achieves promising results and establishes a strong baseline for efficient real-world robot training.

In today’s business world, machine-learning algorithms are increasingly being applied to decision-making processes that affect employment, education, and access to credit. But firms usually keep their algorithms secret, citing concerns that gaming by users can harm the algorithms’ predictive power. Amid growing calls to require firms to make their algorithms transparent, a new study developed an analytical model to compare the profit of firms with and without such transparency. The study concluded that algorithmic transparency carries benefits but also risks.
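
A stylized version of the trade-off can be simulated directly: when the scoring rule is disclosed, applicants shift effort into the features the rule rewards, which can degrade the score's predictive power. The payoffs, feature weights, and gaming behaviour below are invented for illustration and are not the study's analytical model.

```python
# Stylized simulation of algorithmic transparency vs. gaming (illustrative
# assumptions throughout; not the CMU/Michigan analytical model).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
quality = rng.normal(0, 1, n)              # true applicant quality
feature = quality + rng.normal(0, 1, n)    # observable proxy the firm scores

def firm_profit(score, quality, cutoff=0.5):
    accepted = score > cutoff
    return quality[accepted].sum()          # profit = total quality accepted

# Opaque algorithm: applicants cannot target the proxy.
opaque = firm_profit(feature, quality)

# Transparent algorithm: everyone inflates the proxy by a gaming effort g
# that adds no real quality (the pure-gaming case).
g = 0.8
gamed = feature + g + rng.normal(0, 0.3, n)
transparent = firm_profit(gamed, quality)

print(f"profit, opaque:      {opaque:10.1f}")
print(f"profit, transparent: {transparent:10.1f}")
```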

Conducted by researchers at Carnegie Mellon University (CMU) and the University of Michigan, the study appears in Management Science.

“As managers face calls to boost transparency, our findings can help them make decisions to benefit their firms,” says Param Vir Singh, Professor of Business Technologies and Marketing at CMU’s Tepper School of Business, who coauthored the study.

Text-to-image generation is the hot algorithmic process right now, with Craiyon (formerly DALL-E mini) and Google’s Imagen unleashing tidal waves of wonderfully weird procedurally generated art synthesized from human and computer imaginations. On Tuesday, Meta revealed that it too has developed an AI image generation engine, one that it hopes will help to build immersive worlds in the Metaverse and create high-quality digital art.

A lot of work goes into creating an image from just a phrase such as “there’s a horse in the hospital” with a generative AI. First, the phrase is fed through a transformer model, a neural network that parses the words of the sentence and develops a contextual understanding of their relationship to one another. Once it gets the gist of what the user is describing, the AI synthesizes a new image using a set of GANs (generative adversarial networks).
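
As a structural sketch of that two-stage pipeline (text encoder feeding a conditional image generator), the toy below wires a tiny transformer-style text encoder into a generator network in PyTorch. Every dimension, module, and the use of untrained random weights are illustrative assumptions; real systems train both stages on enormous image-text datasets.

```python
# Toy text-to-image pipeline skeleton (untrained, illustrative only).
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Transformer-style encoder: token ids -> one conditioning vector."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return h.mean(dim=1)            # pool tokens into a single embedding

class Generator(nn.Module):
    """Maps noise + text embedding to an image (the 'G' of a conditional GAN)."""
    def __init__(self, dim=64, img=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2, 256), nn.ReLU(),
            nn.Linear(256, img * img * 3), nn.Tanh())
        self.img = img

    def forward(self, noise, text_emb):
        x = self.net(torch.cat([noise, text_emb], dim=-1))
        return x.view(-1, 3, self.img, self.img)

tokens = torch.randint(0, 1000, (1, 7))       # stand-in for a tokenized phrase
emb = TextEncoder()(tokens)
image = Generator()(torch.randn(1, 64), emb)
print(image.shape)                            # torch.Size([1, 3, 32, 32])
```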

Thanks to efforts in recent years to train ML models on increasingly expansive, high-definition image sets with well-curated text descriptions, today’s state-of-the-art AIs can create photorealistic images of almost whatever nonsense you feed them. The specific creation process differs between AIs.