
Genetically engineered bacteria solve computational problems

Researchers have developed a groundbreaking system that uses bacteria to mimic the problem-solving capabilities of artificial neural networks.

Cell-based biocomputing is a novel technique that uses cellular processes to perform computations. Such micron-scale biocomputers could overcome many of the energy, cost and technological limitations of conventional microprocessor-based computers, but the technology is still very much in its infancy. One of the key challenges is the creation of cell-based systems that can solve complex computational problems.

Now a research team from the Saha Institute of Nuclear Physics in India has used genetically modified bacteria to create a cell-based biocomputer with problem-solving capabilities. The researchers created 14 engineered bacterial cells, each of which functioned as a modular and configurable system. They demonstrated that by mixing and matching appropriate modules, the resulting multicellular system could solve nine yes/no computational decision problems and one optimization problem.

The cellular system, described in Nature Chemical Biology, can identify prime numbers, check whether a given letter is a vowel, and even determine the maximum number of pizza or pie slices obtained from a specific number of straight cuts. Here, senior author Sangram Bagh explains the study’s aims and findings.
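For a sense of what these tasks involve, here is a minimal software analogue of the three examples in ordinary Python. It is illustrative only, not the gene-circuit logic the team actually built: the pizza-slice question is the classic "lazy caterer" problem, whose answer for n straight cuts is n(n+1)/2 + 1.

```python
# Software analogues of the decision/optimization problems the
# bacterial system was shown to solve. This is plain Python, not
# the engineered gene-circuit implementation from the paper.

def is_prime(n: int) -> bool:
    """Yes/no decision: is n a prime number?"""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_vowel(letter: str) -> bool:
    """Yes/no decision: is the given letter a vowel?"""
    return letter.lower() in "aeiou"

def max_pie_pieces(cuts: int) -> int:
    """Optimization: maximum pieces from `cuts` straight cuts,
    via the lazy-caterer formula n*(n+1)//2 + 1."""
    return cuts * (cuts + 1) // 2 + 1

print(is_prime(7))        # True
print(is_vowel("e"))      # True
print(max_pie_pieces(4))  # 11
```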

Ethics, AI, and Neuroscience Converge at Mental Health, Brain, and Behavioral Science Research Day

Mental health issues are among the most common causes of disability, affecting more than a billion people worldwide. Addressing mental health difficulties can present extraordinarily tough problems: what can providers do to help people in the most precarious situations? How do changes in the physical brain affect our thoughts and experiences? And at the end of the day, how can everyone get the care they need?

Answering those questions was the shared goal of the researchers who attended the Mental Health, Brain, and Behavioral Science Research Day in September. While the problems they faced were serious, the new solutions they started to build could ultimately help improve mental health care at individual and societal levels.

“We’re building something that there’s no blueprint for,” said Mark Rapaport, MD, CEO of Huntsman Mental Health Institute at the University of Utah. “We’re developing new and durable ways of addressing some of the most difficult issues we face in society.”

World’s first AI art museum to explore ‘creative potential of machines’ in LA

Dataland co-founder Refik Anadol, 38, is a media artist whose “crowd-pleasing – and controversial” works using artificial intelligence have been displayed around the world, including at the Museum of Modern Art in New York, the Serpentine and, most recently, the United Nations headquarters.

In the past two years, Anadol has found himself at the center of debates over the value of AI-generated art, as crowds have reportedly been “transfixed” by his massive interactive digital canvases, while some art critics have panned them as over-hyped and mediocre.

Now Anadol is looking to build artists like himself a permanent exhibition space among some of LA’s most prominent high-culture venues, and he is pledging that the AI art museum will promote “ethical AI” and use renewable energy sources.

Shrinking augmented reality displays into eyeglasses to expand their use

Augmented reality (AR) takes digital images and superimposes them onto real-world views. But AR is more than a new way to play video games; it could transform surgery and self-driving cars. To make the technology easier to integrate into common personal devices, researchers report in ACS Photonics how to combine two optical technologies into a single, high-resolution AR display. In an eyeglasses prototype, the researchers enhanced image quality with a computer algorithm that removed distortions.
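The report does not detail the correction algorithm, but distortion removal in near-eye displays is often done by pre-warping the rendered image with an inverse radial model. The sketch below, including the coefficient k1 and the nearest-neighbour resampling, is an assumption-laden illustration of that general idea, not the authors' method.

```python
import numpy as np

def predistort(image: np.ndarray, k1: float = -0.15) -> np.ndarray:
    """Pre-warp an image with a single-coefficient radial model so a
    lens with the opposite (barrel) distortion shows it undistorted.
    Illustrative only; k1 and the model are assumptions."""
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    # Normalize pixel coordinates to [-1, 1] about the image center.
    x = (xs - w / 2) / (w / 2)
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    scale = 1 + k1 * r2  # radial scaling grows with distance from center
    src_x = ((x * scale) * (w / 2) + w / 2).astype(int).clip(0, w - 1)
    src_y = ((y * scale) * (h / 2) + h / 2).astype(int).clip(0, h - 1)
    return image[src_y, src_x]  # nearest-neighbour resample
```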

OpenAI Pitched White House on Unprecedented Data Center Buildout

OpenAI pitched the White House on building US data centers with as much as 5 GW of capacity each, enough to power roughly 3 million homes.

OpenAI's analysis says the plan could add tens of thousands of jobs, boost GDP, and keep the US ahead of China on AI.

Altman has spent much of this year trying to form a…

OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China.
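As a sanity check on the headline figure, the quoted equivalence works out to roughly 1.7 kW of continuous draw per home, a bit above the ~1.2 kW average load of a US household (an assumed typical estimate, not a figure from the pitch), so the 3-million-home framing is, if anything, conservative.

```python
# Back-of-envelope check of "5 GW is enough to power 3 million homes".
# The 1.2 kW average household load is an assumed figure, not from OpenAI.
capacity_kw = 5e6   # 5 GW expressed in kW
homes_cited = 3e6   # homes cited in the pitch

implied_kw_per_home = capacity_kw / homes_cited
print(f"Implied draw per home: {implied_kw_per_home:.2f} kW")    # ~1.67 kW
print(f"Homes at 1.2 kW each: {capacity_kw / 1.2 / 1e6:.1f} M")  # ~4.2 M
```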

Language agents help large language models ‘think’ better and cheaper

The large language models that have increasingly taken over the tech world are not “cheap” by any measure. The most prominent LLMs, such as GPT-4, cost some $100 million to build: the legal costs of accessing training data, the computing power for training what may be billions or trillions of parameters, the energy and water needed to fuel that computation, and the many coders developing training algorithms that must run cycle after cycle so the machine will “learn.”

But if a researcher needs to do a specialized task that a machine could handle more efficiently, and they don't have access to a large institution offering generative AI tools, what other options are available? Say a parent wants to prep their child for a difficult test and needs to show many examples of how to solve complicated math problems.

Building their own LLM is an onerous prospect given the costs mentioned above, and direct use of big models like GPT-4 and Llama 3.1 might not be immediately suited to the complex logic and math their task requires.
