
And just like a unicorn, it doesn’t currently exist.


Never mind buying a robot dog for your kids — you might just get them a mythical creature instead. Chinese EV maker Xpeng has teased a robot unicorn meant for children to ride. As SCMP notes, the quadruped will draw on Xpeng’s experience with autonomous driving and other AI tasks to navigate multiple terrain types, recognize objects and provide “emotional interaction.”

The company is shy on most other details, although the design looks and trots like a cuter, more kid-friendly version of Boston Dynamics’ Spot. It’s appropriately about as tall as a child. Sorry, folks, you won’t prance your way to work.



When you know you’re being watched by somebody, it’s hard to pretend they’re not there. It can be difficult to block them out and keep your focus while feeling their gaze bear down on you.

Strangely enough, it doesn’t even seem to really matter whether they’re alive or not.

It’s not just salespeople, traders, compliance professionals and people formatting pitchbooks who risk losing their banking jobs to technology. It turns out that private equity professionals do too. A new study by a professor at one of France’s top finance universities explains how.

Professor Thomas Åstebro at Paris-based HEC says private equity firms are using artificial intelligence (AI) to push the limits of human cognition and to support decision-making. Åstebro says the sorts of people employed by private equity funds are changing as a result.

Åstebro looked at the use of AI systems across various private equity and venture capital firms. He found that funds that have embraced AI are using decision support systems (DSS) across the investment decision-making process, including to source potential targets for investments before rivals.

CERN Courier


Jennifer Ngadiuba and Maurizio Pierini describe how ‘unsupervised’ machine learning could keep watch for signs of new physics at the LHC that have not yet been dreamt up by physicists.

In the 1970s, the robust mathematical framework of the Standard Model (SM) replaced data observation as the dominant starting point for scientific inquiry in particle physics. Decades-long physics programmes were put together based on its predictions. Physicists built complex and highly successful experiments at particle colliders, culminating in the discovery of the Higgs boson at the LHC in 2012.

Along this journey, particle physicists adapted their methods to deal with ever-growing data volumes and rates. To handle the large amount of data generated in collisions, they had to optimise real-time selection algorithms, or triggers. The field became an early adopter of artificial intelligence (AI) techniques, especially those falling under the umbrella of “supervised” machine learning. Verifying the SM’s predictions or exposing its shortcomings became the main goal of particle physics. But with the SM now apparently complete, and supervised studies incrementally excluding favoured models of new physics, “unsupervised” learning has the potential to lead the field into the uncharted waters beyond the SM.
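The core idea behind unsupervised new-physics searches is to model only the known “background” events, without labels, and flag anything that deviates from them. A minimal sketch of that idea, using synthetic numbers as a stand-in for collision data and a PCA reconstruction error in place of the autoencoders typically used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Background" events: correlated features standing in for SM-like collisions.
background = rng.normal(0, 1, size=(5000, 4))
background[:, 1] += 0.5 * background[:, 0]

# Learn the background's principal subspace -- no labels are used anywhere.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
components = vt[:2]  # keep the 2 leading directions

def anomaly_score(events):
    """Reconstruction error after projecting onto the background subspace."""
    centered = events - mean
    reconstructed = centered @ components.T @ components
    return np.linalg.norm(centered - reconstructed, axis=1)

# An event far from the background manifold scores much higher than average.
typical = anomaly_score(background).mean()
outlier = anomaly_score(np.array([[8.0, -8.0, 8.0, -8.0]]))[0]
print(outlier > typical)  # True
```

Because nothing in the training step encodes a specific signal hypothesis, the same detector can flag departures from the SM that no physicist has explicitly modelled.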

Reinforcement learning (RL) is one of the major machine learning paradigms, alongside supervised and unsupervised learning and the less common self-supervised and semi-supervised approaches. RL frames learning as a control problem: the algorithm is provided with a set of actions, parameters, and end values, and it teaches the machine through trial and error.
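The trial-and-error loop can be made concrete with tabular Q-learning on a toy environment (the corridor task below is an illustrative invention, not from the original article):

```python
import random

random.seed(0)

# A toy 5-state corridor: start at state 0, reward only for reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)  # actions: step left or step right
GOAL = 4

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):  # episodes of trial and error
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the learned policy steps right (+1) from every state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

No labelled examples are ever provided; the action values are learned purely from the rewards the agent stumbles into while exploring.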

From a data-efficiency perspective, several methods have been proposed, including online training and replay buffers that store experience in a transition memory. In recent years, off-policy actor-critic algorithms have gained prominence, and RL algorithms can now learn from limited, fixed data sets entirely without further interaction (offline RL).
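The transition memory mentioned above is commonly implemented as a fixed-size replay buffer; a minimal sketch (the class name and toy transitions are illustrative, not a specific library's API):

```python
import random
from collections import deque

random.seed(0)

class ReplayBuffer:
    """Fixed-size transition memory for data-efficient, off-policy learning."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of consecutive steps.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(250):  # more pushes than capacity: only the last 100 survive
    buf.push(t, 0, 0.0, t + 1, False)

batch = buf.sample(4)
print(len(buf))  # 100
print(all(s >= 150 for s, *_ in batch))  # True
```

Offline RL takes this a step further: the buffer is filled once from logged data, and the agent trains against it without ever touching the environment again.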

Summary: Findings could advance the development of deep learning networks based on real neurons that will enable them to perform more complex and more efficient learning processes.

Source: Hebrew University of Jerusalem.

We are in the midst of a scientific and technological revolution. The computers of today use artificial intelligence to learn from example and to execute sophisticated functions that, until recently, were thought impossible. These smart algorithms can recognize faces and even drive autonomous vehicles.

In Hawaii, project partners, including Saab, a world leader in electric underwater robotics, the National Oceanic and Atmospheric Administration (NOAA), and BioSonics, will pair the SeaRAY AOPS with their electronics, which collect data on methane and carbon levels, fish activity, and more. Normally, autonomous underwater vehicles like Saab’s need power from a topside support ship, which can emit as much carbon dioxide in a year as about 7,000 cars.

“With Saab,” Lesemann said, “we’re looking to show that you can avoid that carbon dioxide production and, at the same time, reduce costs and operational complexity while enabling autonomous operations that are not possible today.”

The SeaRAY autonomous offshore power system has about 70 sensors that collect massive amounts of data. SeaRAY’s wave energy converter uses two floats, one on each side, that roll with the ocean waves and connect to a power take-off system – a mechanical system that transforms that motion into energy. This system then runs a generator that connects to the seabed batteries, a storage system that NREL will also test before the sea trial.