
This AI system only needs a small amount of data to predict molecular properties

Discovering new materials and drugs typically involves a manual, trial-and-error process that can take decades and cost millions of dollars. To streamline this process, scientists often use machine learning to predict molecular properties and narrow down the molecules they need to synthesize and test in the lab.

Researchers from MIT and the MIT-Watson AI Lab have developed a new, unified framework that can simultaneously predict molecular properties and generate new molecules, and it does so much more efficiently than these popular deep-learning approaches.

To teach a machine-learning model to predict a molecule’s biological or electrical properties, researchers must show it millions of labeled molecular structures—a process known as training. Due to the expense of discovering molecules and the challenges of hand-labeling millions of structures, large training datasets are often hard to come by, which limits the effectiveness of machine-learning approaches.
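The article does not include the MIT framework itself, so the snippet below is only a minimal sketch of the conventional supervised workflow described above: featurize a small set of labeled molecules and fit a property regressor. It assumes RDKit and scikit-learn are available, and the SMILES strings and property values are placeholder examples, not real training data.

```python
# Minimal sketch of the conventional supervised workflow: featurize molecules
# and fit a property regressor on a small labeled set. The data below are
# placeholders, not real measurements.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

labeled = [                      # (SMILES, hypothetical measured property)
    ("CCO", -0.77),              # ethanol
    ("c1ccccc1", -2.13),         # benzene
    ("CC(=O)O", -0.17),          # acetic acid
    ("CCCCCC", -3.84),           # hexane
]

def featurize(smiles):
    """Morgan fingerprint (radius 2, 1024 bits) as a numpy array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(fp)

X = np.stack([featurize(s) for s, _ in labeled])
y = np.array([v for _, v in labeled])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Predict the property for an unseen molecule (methanol).
print(model.predict(featurize("CO").reshape(1, -1)))
```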

Humans to Achieve Immortality by 2030, Google Engineer Claims

Immortality has been a dream of human beings since the dawn of time. Mankind's fascination with cheating death is reflected in scientific records, mythology, and folklore dating back at least to ancient Egypt.

Now, Ray Kurzweil, a former Google engineer, claims that humans will achieve immortality by 2030. Of his 147 predictions, 86 percent have been correct.

Kurzweil spoke with the YouTube channel Adagio, discussing the expansion in genetics, nanotechnology, and robotics, which he believes will lead to age-reversing “nanobots.”

AI Singularity realistically by 2029: year-by-year milestones

This existential threat could arrive as early as, say, 2026, or it might even turn out to be a good thing. Whatever the Singularity exactly is, its nature remains uncertain, but its timing is becoming clearer, and it is much closer than most predicted.

AI progress is nevertheless hard to predict, but many agree with me that, with GPT-4, we are already close to AGI (artificial general intelligence).

DARPA’S New SHIELD Program Plans to Purge Your Blood of Pathogens, Roomba-Style

Welcome to this week’s installment of The Intelligence Brief… in recent days, DARPA has announced a new program that aims to protect warfighters from bloodstream infections caused by bacterial and fungal agents. This week, we’ll be examining 1) the announcement of the agency’s new SHIELD program, 2) past challenges that inspired the new DARPA initiative, and 3) how they say SHIELD will manage to clean your bloodstream, similar to a Roomba.

Quote of the Week

“If an alien visited Earth, they would take some note of humans, but probably spend most of their time trying to understand the dominant form of life on our planet – microorganisms like bacteria and viruses.”

DIAS turns to advanced AI to better predict space weather

The ARCAFF project aims to use deep learning AI to make better predictions of space weather events and calculate how probable these predictions are, to help protect vital technology and infrastructure.

A new project led by the Dublin Institute for Advanced Studies (DIAS) is using AI as a way of getting faster and more accurate warnings about space weather events like solar flares.

These solar flares have the potential to disrupt vital technologies and infrastructure, including radio communications, electrical power grids and navigation systems. They can also present risks to spacecraft and astronauts.
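The ARCAFF models themselves are not described here, so the following is only a toy sketch of the general idea: a classifier that outputs a probability of a flare event rather than a hard yes/no. It assumes PyTorch, and the feature names and data are invented placeholders, not real solar observations.

```python
# Toy probabilistic flare classifier (NOT the ARCAFF pipeline).
import torch
import torch.nn as nn

# Hypothetical, normalized per-active-region features:
# [sunspot area, magnetic complexity, recent flare count].
# Labels: 1 = flare within 24 hours, 0 = no flare. All values are invented.
X = torch.tensor([[0.12, 0.3, 0.0],
                  [0.90, 0.8, 2.0],
                  [0.05, 0.1, 0.0],
                  [0.70, 0.9, 3.0]])
y = torch.tensor([[0.0], [1.0], [0.0], [1.0]])

model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):                      # tiny training loop on the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Probability that a new active region produces a flare in the next 24 hours.
with torch.no_grad():
    p = torch.sigmoid(model(torch.tensor([[0.80, 0.7, 1.0]])))
print(f"flare probability: {p.item():.2f}")
```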

UN tech agency rolls out human-looking robots for questions at a Geneva news conference

BERLIN — A United Nations technology agency assembled a group of robots that physically resembled humans at a news conference Friday, inviting reporters to ask them questions in an event meant to spark discussion about the future of artificial intelligence.

The nine robots were seated and posed upright along with some of the people who helped make them at a podium in a Geneva conference center for what the U.N.’s International Telecommunication Union billed as the world’s first news conference featuring humanoid social robots.

Among them: Sophia, the first robot innovation ambassador for the U.N. Development Program, or UNDP; Grace, described as a health care robot; and Desdemona, a rock star robot. Two, Geminoid and Nadine, resembled their makers.

Big robot bugs reveal force-sensing secrets of insect locomotion

Researchers have combined research with real and robotic insects to better understand how insects sense forces in their limbs while walking, providing new insights into the biomechanics and neural dynamics of insects and informing new applications for large legged robots. They presented their findings at the SEB Centenary Conference 2023.

Campaniform sensilla (CS) are force receptors found in the limbs of insects that respond to stress and strain, providing important information for controlling locomotion. Similar force receptors, known as Golgi tendon organs, exist in mammals, suggesting that understanding the role of force sensors in insects may also provide new insights into their functions in vertebrates such as humans.

“I study the role of force sensors in walking insects because these sensors are critical for successful locomotion,” says Dr. Szczecinski, an assistant professor in the Department of Mechanical and Aerospace Engineering in the Statler College of Engineering and Mineral Resources at West Virginia University, U.S. “The feedback they provide is critical for proper posture and coordination.”
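To make the feedback idea concrete, here is an illustrative sketch, not the researchers' model, of how a load signal like the one campaniform sensilla provide could modulate a simple stance controller: more sensed strain leads to stronger extensor drive. All thresholds and gains are made-up values.

```python
# Illustrative strain-feedback stance controller (not the researchers' model):
# higher measured limb strain -> stronger extensor drive, saturating at 1.0.
def extensor_command(strain, baseline=0.2, gain=1.5, threshold=0.05):
    """Map sensed limb strain to extensor muscle drive in the range 0..1."""
    if strain < threshold:          # below threshold the sensor stays silent
        return baseline
    drive = baseline + gain * (strain - threshold)
    return min(drive, 1.0)          # saturate at full activation

# Simulate a stance phase where load rises and then falls.
for step, strain in enumerate([0.0, 0.04, 0.10, 0.25, 0.40, 0.20, 0.02]):
    print(f"step {step}: strain={strain:.2f} -> drive={extensor_command(strain):.2f}")
```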

Encoding integers and rationals on neuromorphic computers using virtual neuron

Neuromorphic computers perform computations by emulating the human brain [1]. Akin to the human brain, they are extremely energy efficient in performing computations [2]. For instance, while CPUs and GPUs consume around 70–250 W of power, a neuromorphic computer such as IBM’s TrueNorth consumes around 65 mW of power (i.e., 4–5 orders of magnitude less power than CPUs and GPUs) [3]. The structural and functional units of neuromorphic computation are neurons and synapses, which can be implemented on digital or analog hardware and can have different architectures, devices, and materials in their implementations [4]. Although there are a wide variety of neuromorphic computing systems, we focus our attention on spiking neuromorphic systems composed of these neurons and synapses. Spiking neuromorphic hardware implementations include Intel’s Loihi [5], SpiNNaker2 [6], BrainScaleS-2 [7], TrueNorth [3], and DYNAPs [8]. These characteristics are crucial for the energy efficiency of neuromorphic computers. For the purposes of this paper, we define neuromorphic computing as any computing paradigm (theoretical, simulated, or hardware) that performs computations by emulating the human brain by using neurons and synapses to communicate with binary-valued signals (also known as spikes).
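As a concrete illustration of the definition above, the snippet below simulates a single leaky integrate-and-fire neuron, a generic textbook spiking model rather than the neuron circuit of any particular chip: it integrates weighted input, leaks over time, and emits a binary spike when it crosses threshold. All parameters are illustrative.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, a generic textbook model of
# "neurons communicating with binary-valued spikes". Parameters are illustrative.
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.3):
    """Return a list of binary spikes (0/1) for a sequence of input currents."""
    v = v_rest
    spikes = []
    for i in input_current:
        v = leak * v + weight * i        # leak, then integrate the weighted input
        if v >= v_thresh:                # threshold crossing emits a spike
            spikes.append(1)
            v = v_rest                   # reset the membrane after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_simulate([1, 1, 1, 0, 0, 1, 1, 1, 1]))   # e.g. [0,0,0,0,0,0,1,0,0]
```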

Neuromorphic computing is primarily used in machine learning applications, almost exclusively by leveraging spiking neural networks (SNNs) [9]. In recent years, however, it has also been used in non-machine learning applications such as graph algorithms, Boolean linear algebra, and neuromorphic simulations [10,11,12]. Researchers have also shown that neuromorphic computing is Turing-complete (i.e., capable of general-purpose computation) [13]. This ability to perform general-purpose computations and potentially use orders of magnitude less energy in doing so is why neuromorphic computing is poised to be an indispensable part of the energy-efficient computing landscape in the future.

Neuromorphic computers are seen as accelerators for machine learning tasks by using SNNs. To perform any other operation (e.g., arithmetic, logical, relational), we still resort to CPUs and GPUs because no good neuromorphic methods exist for these operations. These general-purpose operations are important for preprocessing data before it is transferred to a neuromorphic processor. In the current neuromorphic workflow (preprocessing on CPU/GPU and inferencing on the neuromorphic processor), more than 99% of the time is spent in data transfer (see Table 7). This is highly inefficient and can be avoided if we do the preprocessing on the neuromorphic processor. Devising neuromorphic approaches for performing these preprocessing operations would drastically reduce the cost of transferring data between a neuromorphic computer and CPU/GPU. This would enable performing all types of computation (preprocessing as well as inferencing) efficiently on low-power neuromorphic computers deployed on the edge.
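To show the flavor of spike-based data encoding that such on-device preprocessing relies on, here is a generic place-value illustration: one neuron per bit, where a spike encodes 1 and silence encodes 0. This is not the paper's virtual-neuron construction, just a simple stand-in to make the idea concrete.

```python
# Generic illustration of representing an integer as binary spikes across a
# group of neurons (one neuron per bit of place value). This is NOT the
# paper's virtual-neuron encoding, only a stand-in for spike-based data.
def int_to_spikes(value, n_neurons=8):
    """Encode a non-negative integer as a spike (1) / no-spike (0) pattern."""
    if not 0 <= value < 2 ** n_neurons:
        raise ValueError("value out of range for the given number of neurons")
    return [(value >> bit) & 1 for bit in range(n_neurons)]   # LSB first

def spikes_to_int(spikes):
    """Decode the spike pattern back into the integer it represents."""
    return sum(s << bit for bit, s in enumerate(spikes))

pattern = int_to_spikes(42)
print(pattern, spikes_to_int(pattern))   # [0, 1, 0, 1, 0, 1, 0, 0] 42
```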
