A system called Coscientist scours the Internet for instructions, then designs and executes experiments to synthesize molecules.

Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.
Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.
Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors could not function outside of cryogenic temperatures. The new device, by contrast, is stable at room temperature. It also operates at high speed, consumes very little energy and retains stored information even when power is removed, making it well suited to real-world applications.
Researchers have discovered that AI memory consolidation processes resemble those in the human brain, specifically in the hippocampus, offering potential for advancements in AI and a deeper understanding of human memory mechanisms.
An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.
Advancing AI through understanding human intelligence.
Researchers at MIT, the Broad Institute of MIT and Harvard, Integrated Biosciences, the Wyss Institute for Biologically Inspired Engineering, and the Leibniz Institute of Polymer Research have identified a new structural class of antibiotics.
Scientists have discovered one of the first new classes of antibiotics identified in the past 60 years, and the first discovered leveraging an AI-powered platform built around explainable deep learning.
Published in Nature today, December 20, the peer-reviewed paper, entitled “Discovery of a structural class of antibiotics with explainable deep learning,” was co-authored by a team of 21 researchers, led by Felix Wong, Ph.D., co-founder of Integrated Biosciences, and James J. Collins, Ph.D., Termeer Professor of Medical Engineering and Science at MIT and founding chair of the Integrated Biosciences Scientific Advisory Board.
A non-organic intelligent system has for the first time designed, planned and executed a chemistry experiment, Carnegie Mellon University researchers report in the Dec. 21 issue of the journal Nature.
“We anticipate that intelligent agent systems for autonomous scientific experimentation will bring tremendous discoveries, unforeseen therapies and new materials. While we cannot predict what those discoveries will be, we hope to see a new way of conducting research given by the synergetic partnership between humans and machines,” the Carnegie Mellon research team wrote in their paper.
The system, called Coscientist, was designed by Assistant Professor of Chemistry and Chemical Engineering Gabe Gomes and chemical engineering doctoral students Daniil Boiko and Robert MacKnight. It uses large language models (LLMs), including OpenAI’s GPT-4 and Anthropic’s Claude, to execute the full range of the experimental process with a simple, plain language prompt.
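To make the idea of an LLM driving an experiment from a plain-language prompt concrete, here is a minimal, self-contained sketch of such an agent loop. The planner function is a hypothetical stand-in for a real model call (GPT-4 or Claude); it is stubbed with a fixed lookup here so the sketch runs on its own, and the step names are illustrative, not Coscientist's actual protocol.

```python
# Minimal sketch of an LLM-driven experiment loop in the spirit of
# Coscientist. llm_plan is a hypothetical stand-in for a real LLM call.

def llm_plan(prompt: str) -> list[str]:
    """Stand-in for an LLM that turns a plain-language goal into steps."""
    canned_plans = {
        "perform a suzuki coupling": [
            "search literature for reaction conditions",
            "select reagents and catalyst",
            "write liquid-handler protocol",
            "execute protocol on robotic platform",
            "analyze product",
        ],
    }
    return canned_plans.get(prompt.lower(), ["no plan found"])

def run_experiment(goal: str) -> list[str]:
    """Drive the plan step by step, logging each executed action."""
    log = []
    for step in llm_plan(goal):
        # In a real system each step would dispatch to web search,
        # code execution, or lab hardware; here we only record it.
        log.append(f"executed: {step}")
    return log
```

In the actual system, the dispatch inside the loop is where the LLM's output is grounded in tools: documentation search, Python execution, and the robotic liquid handler.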
Claim: a reinforcement-learning AI beats humans in a physical domain, playing the "labyrinth" marble game.
After six hours of training, it solves the maze faster than any human.
The video is fun to watch.
We have created an AI robot named CyberRunner whose task is to learn how to play the popular and widely accessible labyrinth marble game. The labyrinth is a game of physical skill whose goal is to steer a marble from a given start point to the end point. In doing so, the player must prevent the ball from falling into any of the holes that are present on the labyrinth board.
CyberRunner applies recent advances in model-based reinforcement learning to the physical world, exploiting the ability to make informed decisions about potentially successful behaviors by planning its real-world actions into the future.
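The planning idea behind model-based reinforcement learning can be sketched briefly: learn a dynamics model of the world, then choose each action by imagining candidate action sequences into the future and keeping the one whose imagined outcome is best. The toy one-dimensional "marble on a tilting line" dynamics and all constants below are illustrative assumptions, not CyberRunner's actual learned model.

```python
# Hedged sketch of planning with a dynamics model (random-shooting MPC).
# The dynamics here are a toy assumption: tilt accelerates the marble.
import random

def dynamics(pos, vel, tilt):
    """Toy model: tilt accelerates the marble; returns the next state."""
    vel = vel + 0.1 * tilt
    pos = pos + vel
    return pos, vel

def plan(pos, vel, goal, horizon=10, candidates=200):
    """Sample candidate action sequences, simulate each with the model,
    and return the first action of the best imagined sequence."""
    best_action, best_cost = 0.0, float("inf")
    for _ in range(candidates):
        seq = [random.uniform(-1, 1) for _ in range(horizon)]
        p, v = pos, vel
        for a in seq:
            p, v = dynamics(p, v, a)
        cost = abs(goal - p)  # distance to goal after the imagined rollout
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action
```

In a control loop, `plan` is called anew at every step with the freshly observed state, so planning errors are continually corrected by real-world feedback.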
The robot is blind and cannot see its environment but can continue to balance and walk, even if an object is hurled at it.
UC Berkeley researchers Ilija Radosavovic and Bike Zhang wondered if reinforcement learning, a technique that gained mainstream attention last year through its role in training large language models (LLMs), could also teach a robot to adapt to changing conditions. To test their theory, the duo started with one of the most basic functions humans can perform — walking.
Transformer model for learning
The researchers started in the simulation world, running billions of scenarios in Isaac Gym, a high-performance GPU-based physics simulation environment. The algorithm in the simulator rewarded actions that mimicked human-like walking while punishing the ones that didn't. Once the policy had mastered the task in simulation, it was transferred to a real-world humanoid robot without further fine-tuning.
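The reward-shaping idea described above can be sketched as a simple scoring function: each simulated step of a candidate gait earns reward for human-like behavior and loses reward for deviations. The specific terms and weights below are assumptions for illustration, not the Berkeley team's actual reward function.

```python
# Illustrative sketch of reward shaping for simulated walking.
# Terms and weights are hypothetical, chosen only to show the pattern.

def walking_reward(forward_vel, target_vel, torso_tilt, energy_used):
    """Score one simulation step of a candidate gait (higher is better)."""
    # Reward tracking the desired forward speed.
    velocity_term = -abs(forward_vel - target_vel)
    # Penalize leaning: an upright torso looks human-like.
    posture_term = -abs(torso_tilt)
    # Penalize wasted actuation energy.
    energy_term = -0.01 * energy_used
    return velocity_term + posture_term + energy_term
```

Summed over billions of simulated steps, a score like this steers the learning algorithm toward gaits that walk at the right speed, stay upright, and waste little energy.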
Google, the internet giant now a subsidiary of Alphabet, announced on Tuesday that it will limit the kinds of queries related to elections that its chatbot Bard and search generative experience can answer ahead of the 2024 U.S. Presidential election.
The company said that the new restrictions will be implemented by early 2024. Separately, Google recently made a landmark change to its handling of location data, making it difficult for law enforcement agencies to obtain the private location data of people near a crime scene through geofence warrants.
According to Reuters, the U.S. is not the only country that will witness crucial elections in 2024. India, the world’s largest democracy, and South Africa, among others, will also hold national elections in the same year.
Marvin Minsky is often called the Father of Artificial Intelligence, and I had been looking for an opportunity to interview him for years. I was hoping I would finally get my chance at the GF2045 conference in New York City. Unfortunately, Prof. Minsky had bronchitis and consequently had to speak via video. A week later, though still recovering, Marvin generously gave me a 30-minute interview while attending the ISTAS13 Veillance conference in Toronto. I hope you enjoy this brief but rare opportunity as much as I did!