
Striking parallels between biological brains and AI during social interaction suggest fundamental principles

UCLA researchers have discovered that biological brains and artificial intelligence systems develop remarkably similar neural patterns during social interaction. This first-of-its-kind study reveals that when mice interact socially, specific brain cell types synchronize in “shared neural spaces,” and that AI agents develop analogous patterns when engaging in social behaviors.

The study, “Inter-brain neural dynamics in biological and artificial intelligence systems,” appears in the journal Nature.

This new research represents a striking convergence of neuroscience and artificial intelligence, two of today’s most rapidly advancing fields. By directly comparing how biological brains and AI systems process social information, the scientists reveal fundamental principles that hold across different types of intelligent systems.
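As a rough illustration of the kind of analysis involved, the sketch below uses canonical correlation analysis (CCA) to quantify a “shared neural space” between two interacting agents. The synthetic data and the choice of CCA are assumptions made purely for illustration; the study’s actual methods are described in the Nature paper.

```python
# A minimal sketch of one way to quantify a "shared neural space" between two
# interacting agents. The synthetic data and the use of CCA are illustrative
# assumptions, not the UCLA study's actual analysis pipeline.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Synthetic activity: 500 time points x 30 units per agent, with a common
# low-dimensional latent signal standing in for socially driven co-activity.
latent = rng.normal(size=(500, 3))
act_a = latent @ rng.normal(size=(3, 30)) + 0.5 * rng.normal(size=(500, 30))
act_b = latent @ rng.normal(size=(3, 30)) + 0.5 * rng.normal(size=(500, 30))

# CCA finds paired projections of the two agents' activity that are maximally
# correlated, one simple notion of a shared space.
cca = CCA(n_components=3)
proj_a, proj_b = cca.fit_transform(act_a, act_b)

# Correlation along each canonical dimension; values near 1 indicate strongly
# shared dynamics, values near 0 indicate largely private activity.
shared = [np.corrcoef(proj_a[:, k], proj_b[:, k])[0, 1] for k in range(3)]
print("canonical correlations:", np.round(shared, 2))
```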

AI designs new underwater gliders with shapes inspired by marine animals

Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having very different shapes. Their bodies are optimized for efficient aquatic navigation (hydrodynamics), so they expend minimal energy when traveling long distances.

Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life—the go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic. Plus, testing new builds requires lots of real-world trial-and-error.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin-Madison propose that AI could help us explore uncharted glider designs more conveniently. The research is published on the arXiv preprint server.
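To give a flavor of how AI can stand in for slow physical testing, here is a minimal surrogate-modeling sketch: a cheap learned model is trained on a handful of “expensive” simulations, then screens thousands of candidate shapes in milliseconds. The two shape parameters and the toy simulator are invented for illustration and should not be read as the CSAIL/UW-Madison pipeline, which is detailed in the arXiv paper.

```python
# Illustrative sketch of surrogate-assisted shape search. The shape
# parametrization and the stand-in "simulator" below are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def simulate_lift_to_drag(params):
    """Stand-in for an expensive fluid simulation: maps shape parameters
    (here, body elongation and taper) to a lift-to-drag score."""
    elongation, taper = params
    return -(elongation - 2.5) ** 2 - (taper - 0.6) ** 2

# A small budget of "expensive" simulations to train the surrogate.
train_x = rng.uniform([1.0, 0.1], [4.0, 1.0], size=(20, 2))
train_y = np.array([simulate_lift_to_drag(p) for p in train_x])

surrogate = GaussianProcessRegressor().fit(train_x, train_y)

# The cheap surrogate then screens many candidate shapes without new simulations.
candidates = rng.uniform([1.0, 0.1], [4.0, 1.0], size=(5000, 2))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("promising shape (elongation, taper):", np.round(best, 2))
```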

Playing games with robots makes people see them as more humanlike

The more we interact with robots, the more human we perceive them to become—according to new research from the University of East Anglia, published in the Journal of Experimental Psychology: Human Perception and Performance.

It may sound like a scene from Blade Runner, but psychologists have been investigating exactly what makes interactions feel more human.

The paper reveals that playing games with robots to “break the ice” can help bring out their human side.

AI and biophysics unite to forecast high-risk viral variants before outbreaks

When the first reports of a new COVID-19 variant emerge, scientists worldwide scramble to answer a critical question: Will this new strain be more contagious or more severe than its predecessors? By the time answers arrive, it’s frequently too late to inform immediate public policy decisions or adjust vaccine strategies, costing public health officials valuable time, effort, and resources.

In a pair of recent publications in Proceedings of the National Academy of Sciences, a research team in the Department of Chemistry and Chemical Biology combined biophysics with artificial intelligence to identify high-risk viral variants in record time—offering a transformative approach for handling pandemics. Their goal: to get ahead of a virus by forecasting its evolutionary leaps before it threatens public health.

“As a society, we are often very unprepared for the emergence of new viruses and pandemics, so our lab has been working on ways to be more proactive,” said senior author Eugene Shakhnovich, Roy G. Gordon Professor of Chemistry. “We used fundamental principles of physics and chemistry to develop a multiscale model to predict the course of evolution of a particular variant and to predict which variants will become dominant in populations.”
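As a heavily simplified illustration of the general “biophysics plus AI” idea, the toy ranking below combines a biophysical term (the predicted effect of mutations on receptor binding) with an immune-escape term into a single fitness score. The variant names, numbers, and weights are all hypothetical; the team’s actual multiscale model is laid out in the PNAS papers.

```python
# Toy sketch: rank candidate variants by an illustrative fitness score that
# rewards both tighter receptor binding and greater antibody escape.
# All values and weights below are invented for illustration.
variants = {
    # name: (ddG of binding in kcal/mol, fraction of antibody escape)
    "variant_A": (-0.8, 0.10),
    "variant_B": (-0.2, 0.55),
    "variant_C": (+0.9, 0.70),  # escapes antibodies but binds poorly
}

def fitness_score(ddg, escape, w_bind=1.0, w_escape=2.0):
    """Lower (more negative) ddG means tighter binding; higher escape means
    more antibody evasion. Both are rewarded in this illustrative score."""
    return w_bind * (-ddg) + w_escape * escape

ranked = sorted(variants, key=lambda v: fitness_score(*variants[v]), reverse=True)
for name in ranked:
    print(name, round(fitness_score(*variants[name]), 2))
```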

OpenAI co-founder Sutskever sets up new AI company devoted to ‘safe superintelligence’

(AP) — Ilya Sutskever, one of the founders of OpenAI who was involved in a failed effort to push out CEO Sam Altman, said he’s starting a safety-focused artificial intelligence company.

Sutskever, a respected AI researcher who left the ChatGPT maker last month, said in a social media post Wednesday that he’s created Safe Superintelligence Inc. with two co-founders. The company’s only goal and focus is safely developing “superintelligence” — a reference to AI systems that are smarter than humans.

The company vowed not to be distracted by “management overhead or product cycles,” and under its business model, work on safety and security would be “insulated from short-term commercial pressures,” Sutskever and his co-founders Daniel Gross and Daniel Levy said in a prepared statement.

AI helps discover optimal new material for removing radioactive iodine contamination

Managing radioactive waste is one of the core challenges in the use of nuclear energy. In particular, radioactive iodine poses serious environmental and health risks due to its long half-life (15.7 million years in the case of I-129), high mobility, and toxicity to living organisms.

A Korean research team has successfully used artificial intelligence to discover a new material that removes radioactive iodine for nuclear environmental remediation. The team plans to pursue commercialization through various industry–academia collaborations, spanning applications from iodine-adsorbing powders to contaminated-water treatment filters.

Professor Ho Jin Ryu’s research team from the Department of Nuclear and Quantum Engineering, in collaboration with Dr. Juhwan Noh of the Digital Chemistry Research Center at the Korea Research Institute of Chemical Technology, developed a technique using AI to discover new materials that effectively remove contaminants. Their research is published in the Journal of Hazardous Materials.
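The general workflow behind such AI-driven materials discovery, training a model on known measurements and then ranking a large pool of untested candidates, can be sketched as follows. The descriptors and data here are synthetic placeholders, not the team’s actual features or results, which are reported in the Journal of Hazardous Materials.

```python
# Schematic sketch of AI-driven materials screening: fit a model on known
# adsorption data, then rank untested candidates by predicted iodine uptake.
# Features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Pretend descriptors: e.g., surface area, pore size, binding-site density.
known_features = rng.normal(size=(200, 3))
known_uptake = known_features @ np.array([0.6, 0.3, 0.8]) + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(known_features, known_uptake)

# Screen a much larger pool of hypothetical materials without new experiments.
candidate_features = rng.normal(size=(10_000, 3))
predicted = model.predict(candidate_features)
top = np.argsort(predicted)[::-1][:5]
print("top candidate indices:", top)
print("predicted uptake:", np.round(predicted[top], 2))
```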
