
Brains Could Help Solve a Fundamental Problem in Computer Engineering

In recent years, the technological limitations of conventional computing have become far more pressing. Deep neural networks have radically expanded the limits of artificial intelligence—but they have also created a monstrous demand for computational resources, and these resources present an enormous financial and environmental burden. Training GPT-3, a text predictor so accurate that it easily tricks people into thinking its words were written by a human, cost $4.6 million and emitted a sobering volume of carbon dioxide—as much as 1,300 cars produce, according to Boahen.

With the free time afforded by the pandemic, Boahen, who is a faculty affiliate at the Wu Tsai Neurosciences Institute at Stanford and the Stanford Institute for Human-Centered AI (HAI), applied himself single-mindedly to this problem. “Every 10 years, I realize some blind spot that I have or some dogma that I’ve accepted,” he says. “I call it ‘raising my consciousness.’”

This time around, raising his consciousness meant looking toward dendrites, the spindly protrusions that neurons use to detect signals, for a completely novel way of thinking about computer chips. And, as he writes in Nature, he thinks he’s figured out how to make chips so efficient that the enormous GPT-3 language prediction neural network could one day be run on a cell phone. Just as Feynman posited the “quantum supremacy” of quantum computers over traditional computers, Boahen wants to work toward a “neural supremacy.”

Researchers build AI model database to find new alloys for nuclear fusion facilities

A study led by the Department of Energy’s Oak Ridge National Laboratory details how artificial intelligence researchers have created an AI model to help identify new alloys for use in the shielding that houses fusion components in a nuclear fusion reactor. The findings mark a major step toward improving nuclear fusion facilities.

A quantum neural network can see optical illusions like humans do. Could it be the future of AI?

Optical illusions, quantum mechanics and neural networks might seem to be quite unrelated topics at first glance. However, in new research published in APL Machine Learning, I have used a phenomenon called “quantum tunneling” to design a neural network that can “see” optical illusions in much the same way humans do.

My neural network did well at simulating human perception of the famous Necker cube and Rubin’s vase illusions—and in fact better than some much larger conventional neural networks used in computer vision.

This work may also shed light on the question of whether artificial intelligence (AI) systems can ever truly achieve something like human cognition.

Assessing China’s AI development and forecasting its future tech priorities

SUBJECT: Assessing China’s current AI development and forecasting its future technology priorities.

In July 2024, the Atlantic Council Global China Hub (AC GCH) and the Special Competitive Studies Project (SCSP) convened experts and policymakers in the second of a two-part private workshop series to gather insights into China’s technology priorities today and in the future. Participants discussed Beijing’s posture on artificial intelligence (AI) development and deployment today, including the hurdles China’s AI industry faces amid US-China technology competition, as well as Beijing’s policy priorities over the next decade. This memo summarizes insights gathered during the workshop.

In today’s strategic competition between the United States and China, both countries seek to bolster their nations’ innovation ecosystems and enhance their ability to develop and deploy breakthrough technologies. The United States is committed to maintaining US technological leadership in the long term, as Secretary of Commerce Gina Raimondo demonstrated at the Reagan National Defense Forum in December 2023, when she stated that “America leads the world in artificial intelligence. America leads the world in advanced semiconductor design, period… We’re a couple years ahead of China. No way are we going to let them catch up. We cannot let them catch up.”

Physicists reveal evolution of shell structure using machine learning

A research team has used a machine learning approach to investigate the evolution of shell structure for nuclei far from the stability valley. The study, published in Physics Letters B and conducted by researchers from the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences, Huzhou University, and the University of Paris-Saclay, reveals the double-magic nature of tin-100 and the disappearance of the magic number 20 in oxygen-28.

The path to more general artificial intelligence

A small-N comparative analysis of six different areas of applied artificial intelligence (AI) suggests that the next period of development will require a merging of narrow-AI and strong-AI approaches. This will be necessary as programmers seek to move beyond developing narrowly defined tools to developing software agents capable of acting independently in complex environments. The present stage of artificial intelligence development is propitious for this because of the exponential increases in computer power and in available data streams over the last 25 years, and because of better understanding of the complex logic of intelligence. The applied areas chosen for examination were heart pacemakers, socialist economic planning, computer-based trading, self-driving automobiles, surveillance and sousveillance, and artificial intelligence in medicine.

Neuromorphic platform presents significant leap forward in computing efficiency

Researchers at the Indian Institute of Science (IISc) have developed a brain-inspired analog computing platform capable of storing and processing data in an astonishing 16,500 conductance states within a molecular film. Published today in the journal Nature, this breakthrough represents a huge step forward over traditional digital computers in which data storage and processing are limited to just two states.

Such a platform could potentially bring complex AI tasks, like training Large Language Models (LLMs), to personal devices like laptops and smartphones, taking us closer to democratizing the development of AI tools. Today, such training is confined to resource-heavy data centers, largely due to a lack of energy-efficient hardware. With silicon electronics nearing saturation, designing brain-inspired accelerators that can work alongside silicon chips to deliver faster, more efficient AI is also becoming crucial.
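To get an intuition for why 16,500 conductance states matter, consider how finely a continuous quantity (such as a neural-network weight) can be represented on a device. The sketch below is not the IISc team's method; it is a minimal, hypothetical illustration that quantizes random weights onto a grid of evenly spaced states and compares the average rounding error for a two-state (binary) device versus one with 16,500 states.

```python
import numpy as np

def quantize(weights, n_levels):
    """Snap continuous weights onto n_levels evenly spaced 'conductance states'."""
    lo, hi = weights.min(), weights.max()
    levels = np.linspace(lo, hi, n_levels)
    idx = np.round((weights - lo) / (hi - lo) * (n_levels - 1)).astype(int)
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)  # stand-in for one layer of neural-network weights

for n in (2, 16_500):
    err = np.abs(w - quantize(w, n)).mean()
    print(f"{n:>6} states -> mean absolute quantization error {err:.2e}")
```

With only two states the stored weights lose almost all of their precision, while 16,500 states yields a per-weight error on the order of the spacing between adjacent levels; roughly log2(16,500) ≈ 14 bits of resolution in a single analog element. This is the sense in which a many-state molecular film could compress storage and computation that digital hardware spreads across many binary cells.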

“Neuromorphic computing has had its fair share of unsolved challenges for over a decade,” explains Sreetosh Goswami, Assistant Professor at the Centre for Nano Science and Engineering (CeNSE), IISc, who led the research team. “With this discovery, we have almost nailed the perfect system—a rare feat.”
