
It’s not just salespeople, traders, compliance professionals and people formatting pitchbooks who risk losing their banking jobs to technology. It turns out that private equity professionals do too. A new study by a professor at one of France’s top finance universities explains how.

Professor Thomas Åstebro of Paris-based HEC says private equity firms are using artificial intelligence (AI) to push the limits of human cognition and to support decision-making. As a result, Åstebro says, the sorts of people employed by private equity funds are changing.

Åstebro looked at the use of AI systems across various private equity and venture capital firms. He found that funds that have embraced AI are using decision support systems (DSS) across the investment decision-making process, including to source potential investment targets before rivals do.

CERN Courier


Jennifer Ngadiuba and Maurizio Pierini describe how ‘unsupervised’ machine learning could keep watch for signs of new physics at the LHC that have not yet been dreamt up by physicists.

In the 1970s, the robust mathematical framework of the Standard Model (SM) replaced data observation as the dominant starting point for scientific inquiry in particle physics. Decades-long physics programmes were put together based on its predictions. Physicists built complex and highly successful experiments at particle colliders, culminating in the discovery of the Higgs boson at the LHC in 2012.

Along this journey, particle physicists adapted their methods to deal with ever-growing data volumes and rates. To handle the large amount of data generated in collisions, they had to optimise real-time selection algorithms, or triggers. The field became an early adopter of artificial intelligence (AI) techniques, especially those falling under the umbrella of “supervised” machine learning. Verifying the SM’s predictions or exposing its shortcomings became the main goal of particle physics. But with the SM now apparently complete, and supervised studies incrementally excluding favoured models of new physics, “unsupervised” learning has the potential to lead the field into the uncharted waters beyond the SM.
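
The unsupervised strategy described here is often prototyped as an autoencoder trained only on ordinary events: whatever it fails to reconstruct well is flagged as a candidate anomaly. The sketch below illustrates that principle with a linear autoencoder (PCA) on synthetic data; it is a toy illustration under our own assumptions, not actual LHC trigger code, and every name and number in it is invented.

```python
# Minimal sketch of unsupervised anomaly detection via reconstruction error.
# Synthetic "events" stand in for collision data; this is an illustration
# of the principle, not an actual trigger implementation.
import numpy as np

rng = np.random.default_rng(0)

# "Standard Model" background: correlated Gaussian features.
background = rng.normal(size=(10_000, 8)) @ rng.normal(size=(8, 8))

# A handful of hypothetical "new physics" events from a shifted distribution.
signal = rng.normal(loc=4.0, size=(20, 8))

# Fit a linear autoencoder (PCA) on background only -- no labels needed.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
components = vt[:4]  # keep a 4-dimensional latent space

def reconstruction_error(events: np.ndarray) -> np.ndarray:
    """Squared error after projecting onto the learned subspace."""
    z = (events - mean) @ components.T   # encode
    recon = z @ components + mean        # decode
    return ((events - recon) ** 2).sum(axis=1)

# Events the model reconstructs poorly are flagged as anomalous.
threshold = np.quantile(reconstruction_error(background), 0.999)
flagged = reconstruction_error(signal) > threshold
print(f"flagged {flagged.sum()} of {len(signal)} injected signal events")
```

A deployed trigger version would use a deep autoencoder under a far tighter latency budget, but the logic is the same: train on background only, then cut on reconstruction error.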

Reinforcement learning (RL) is one of the main machine learning paradigms, alongside supervised and unsupervised learning and the less common self-supervised and semi-supervised approaches. RL frames learning as a controlled process: the algorithm is given a set of possible actions, parameters, and end values, and the machine learns by trial and error which actions lead to reward.
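
As a concrete picture of that trial-and-error loop, here is a minimal tabular Q-learning sketch on a toy five-state corridor; the environment, reward, and hyperparameters are invented for illustration and are not from any system mentioned in this article.

```python
# Trial-and-error learning: tabular Q-learning on a toy 5-state corridor
# (actions: move left or right; reward only at the right end).
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
q = np.zeros((n_states, n_actions))  # action-value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for _ in range(2_000):               # episodes of trial and error
    s = 0
    while s != n_states - 1:
        # epsilon-greedy: mostly exploit, sometimes explore
        a = rng.integers(n_actions) if rng.random() < epsilon else int(q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update from the observed (s, a, r, s') transition
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next

print(q.round(2))  # learned values come to favour "right" in every state
```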

From a data-efficiency perspective, several methods have been proposed, including online training and replay buffers that store experience in a transition memory. In recent years, off-policy actor-critic algorithms have been gaining prominence, and RL algorithms can now learn from limited, fixed data sets entirely without environment interaction (offline RL).
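
The replay buffer (the “transition memory” above) is straightforward to sketch. The minimal class below, with names of our own choosing, shows the structure off-policy methods sample from; in offline RL the same buffer is simply pre-filled with a fixed dataset and never appended to during training.

```python
# Sketch of an experience replay buffer for off-policy RL.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        """Uniformly sample a minibatch, breaking temporal correlation."""
        batch = random.sample(list(self.buffer), batch_size)
        return list(zip(*batch))  # columns: states, actions, rewards, ...

buf = ReplayBuffer(capacity=10_000)
buf.push(0, 1, 0.0, 1, False)
buf.push(1, 1, 1.0, 2, True)
states, actions, rewards, next_states, dones = buf.sample(batch_size=2)
```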

Summary: Findings could advance the development of deep learning networks based on real neurons, enabling them to perform more complex and more efficient learning processes.

Source: Hebrew University of Jerusalem.

We are in the midst of a scientific and technological revolution. The computers of today use artificial intelligence to learn from examples and to execute sophisticated functions that, until recently, were thought impossible. These smart algorithms can recognize faces and even drive autonomous vehicles.

In Hawaii, project partners, including Saab, a world leader in electric underwater robotics, the National Oceanic and Atmospheric Administration (NOAA), and BioSonics, will pair the SeaRAY AOPS with their electronics, which collect data on methane and carbon levels, fish activity, and more. Normally, autonomous underwater vehicles like Saab’s draw power from a topside ship that emits about as much carbon dioxide per year as 7,000 cars.

“With Saab,” Lesemann said, “we’re looking to show that you can avoid that carbon dioxide production and, at the same time, reduce costs and operational complexity while enabling autonomous operations that are not possible today.”

The SeaRAY autonomous offshore power system has about 70 sensors that collect massive amounts of data. SeaRAY’s wave energy converter uses two floats, one on each side, which roll with the ocean waves and connect to a power take-off system – a mechanical system that transforms that motion into energy. This system then runs a generator connected to seabed batteries, a storage system that NREL will also test before the sea trial.

Apple Inc. and Tesla Inc. have a lot in common, but there’s much to be desired — oddly enough — when it comes to how their products work together.

Both companies are known for design, advanced technology and a controlling approach to their ecosystems. Tesla’s cars use a giant iPad-like screen instead of physical controls, and customers can use a smartphone as their key. It’s also steadily moving toward autonomous driving. That’s led people to call Tesla the Apple of carmakers. Elon Musk even tried to sell Tesla to Apple, and consumers frequently say that a Tesla is an “iPhone on wheels.”

But for Apple users, the experience of owning a Tesla can be frustrating.

A more general definition of entropy was proposed by Boltzmann (1877) as \(S = k \ln W\), where k is Boltzmann’s constant and W is the number of possible states of a system, in units of J⋅K⁻¹, tying entropy to statistical mechanics. Szilard (1929) suggested that entropy is fundamentally a measure of the information content of a system. Shannon (1948) defined informational entropy as \(S = -\sum_{i} p_i \log_b p_i\), where \(p_i\) is the probability of finding message number i in the defined message space and b is the base of the logarithm used (typically 2, resulting in units of bits). Landauer (1961) proposed that informational entropy is interconvertible with thermodynamic entropy, such that for a computational operation in which 1 bit of information is erased, the amount of thermodynamic entropy generated is at least \(k \ln 2\). This prediction has recently been experimentally verified in several independent studies (Bérut et al. 2012; Jun et al. 2014; Hong et al. 2016; Gaudenzi et al. 2018).
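
A short worked example ties these formulas together; the message distribution below is arbitrary, and only the physical constants are standard.

```python
# Worked check of the formulas above: Shannon entropy of a message space
# and Landauer's minimum entropy cost for erasing one bit.
import math

# Shannon entropy S = -sum_i p_i log_b p_i, with b = 2 -> units of bits.
p = [0.5, 0.25, 0.25]  # arbitrary example distribution
S_bits = -sum(pi * math.log2(pi) for pi in p)
print(f"Shannon entropy: {S_bits} bits")  # 1.5 bits

# Landauer limit: erasing 1 bit generates at least k ln 2 of entropy,
# i.e. dissipates at least kT ln 2 of heat at temperature T.
k = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0         # room temperature, K
print(f"k ln 2  = {k * math.log(2):.3e} J/K")
print(f"kT ln 2 = {k * T * math.log(2):.3e} J at {T:.0f} K")
```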

The equivalency of thermodynamic and informational entropy suggests that critical points of instability and subsequent self-organization observed in thermodynamic systems may be observable in computational systems as well. Indeed, this agrees with observations in cellular automata (e.g., Langton 1986; 1990) and neural networks (e.g., Wang et al. 1990; Inoue and Kashima 1994), which self-organize to maximize informational entropy production (e.g., Solé and Miramontes 1995). The source of additional information used for self-organization has been identified as bifurcation and deterministic chaos (Langton 1990; Inoue and Kashima 1994; Solé and Miramontes 1995; Bahi et al. 2012) as defined by Devaney (1986). This may provide an explanation for the phenomenon termed emergence, known since classical antiquity (Aristotle, c. 330 BCE) but lacking a satisfactory explanation (refer to Appendix A for discussion on deterministic chaos, and Appendix B for discussion on emergence). It is also in full agreement with extensive observations of deterministic chaos in chemical (e.g., Nicolis 1990; Györgyi and Field 1992), physical (e.g., Maurer and Libchaber 1979; Mandelbrot 1983; Shaw 1984; Barnsley et al. 1988) and biological (e.g., May 1975; Chay et al. 1995; Jia et al. 2012) dissipative structures and systems.
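
For a concrete picture of the deterministic chaos and bifurcation these authors invoke, the logistic map is the textbook example: a one-line deterministic rule that settles into periodic cycles at low parameter values and becomes chaotic, with sensitive dependence on initial conditions, at higher ones. The parameters in the sketch below are chosen purely for illustration.

```python
# The logistic map x -> r * x * (1 - x): periodic below the chaotic
# threshold (r ~ 3.57), aperiodic and sensitive to initial conditions above.
def logistic_orbit(r: float, x0: float, n: int, skip: int = 500):
    x = x0
    for _ in range(skip):          # discard the transient
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(3.2, 0.2, 4))         # period-2 cycle after bifurcation
print(logistic_orbit(3.9, 0.2, 4))         # aperiodic, chaotic regime
print(logistic_orbit(3.9, 0.2 + 1e-9, 4))  # tiny perturbation, diverging orbit
```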

This theoretical framework establishes a deep fundamental connection between cybernetic and biological systems, and implicitly predicts that as more work is put into cybernetic systems composed of hierarchical dissipative structures, their complexity increases, allowing for more possibilities of coupled feedback and emergence at increasingly higher levels. Such high-level self-organization is routinely exploited in machine learning, where artificial neural networks (ANNs) self-organize in response to inputs from the environment similarly to neurons in the brain (e.g., Lake et al. 2017; Fong et al. 2018). The recent development of a highly organized (low entropy) immutable information carrier, in conjunction with ANN-based artificial intelligence (AI) and distributed computing systems, presents new possibilities for self-organization and emergence.

The robots can tumble up slopes.


A new study investigates tiny tumbling soft robots that can be controlled using rotating magnetic fields. The technology could be useful for delivering drugs to the nervous system. In this latest study, researchers put the robots through their paces and showed that they can climb slopes, tumble upstream against fluid flow and deliver substances at precise locations to neural tissue.

Would you let a tiny MANiAC travel around your nervous system to treat you with drugs? You may be inclined to say no, but in the future, “magnetically aligned nanorods in alginate capsules” (MANiACs) may be part of an advanced arsenal of drug delivery technologies at doctors’ disposal. A recent study in Frontiers in Robotics and AI is the first to investigate how such tiny robots might perform as drug delivery vehicles in neural tissue. The study finds that when controlled using a magnetic field, the tiny tumbling soft robots can move against fluid flow, climb slopes and move about neural tissues, such as the spinal cord, and deposit substances at precise locations.

Diseases in the central nervous system can be difficult to treat. “Delivering drugs orally or intravenously, for example, to target cancers or neurologic diseases, may affect regions of the body and nervous system that are unrelated to the disease,” explained Lamar Mair of Weinberg Medical Physics, a medical device company based in the US and an industrial partner on the study. “Targeted drug delivery may lead to improved efficacy and reduced side-effects due to lower off-target dosing.”