
Xenobots: Scientists create a new generation of living bots

“These are novel living machines. They are not a traditional robot or a known species of animals. It is a new class of artifacts: a living and programmable organism,” says Joshua Bongard, an expert in computer science and robotics at the University of Vermont (UVM) and one of the leaders of the research.

As the scientist explains, these living bots do not look like traditional robots: they have no shiny gears or robotic arms. Rather, they look more like a tiny blob of pink meat in motion, a biological machine that researchers say can accomplish things traditional robots cannot.

Xenobots are synthetic organisms designed automatically by a supercomputer to perform a specific task, using a process of trial and error (an evolutionary algorithm), and are built by a combination of different biological tissues.
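The trial-and-error process described here can be sketched as a minimal evolutionary algorithm: score a population of candidate designs, keep the fittest, and mutate them. The genome encoding and fitness function below are purely illustrative stand-ins; the real pipeline evaluates simulated xenobot designs on a supercomputer.

```python
import random

random.seed(0)  # reproducibility for this sketch

def evolve(fitness, genome_len=8, pop_size=20, generations=50):
    """Minimal evolutionary loop: score, select the fittest, mutate."""
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = [[g + random.gauss(0, 0.1) for g in p] for p in parents]
        pop = parents + children  # elitism: parents survive unchanged
    return max(pop, key=fitness)

# Toy fitness, a stand-in for "distance the simulated bot walks":
# designs whose genes are all closest to 0.5 score best.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
```

In the actual research, each "genome" describes an arrangement of passive and contractile frog cells, and fitness is measured in a physics simulation before the best designs are built from living tissue.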

Google researchers show artificial intelligence can design microchips better and faster than humans

THIS is the kind of upward inflection on the exponential curve that heralds the arrival of the Technological Singularity.


This is an Inside Science story.

Artificial intelligence can design computer microchips that perform at least as well as those designed by human experts, devising such blueprints thousands of times faster. This new research from Google is already helping with the design of microchips for the company’s next generation of AI computer systems.

The process of designing the physical layout of a chip’s parts, known as floorplanning, is key to a device’s ultimate performance. This complex task often requires months of intense effort from experts, and despite five decades of research, no automated floorplanning technique had reached human-level performance until now.
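Floorplanning’s combinatorial difficulty shows up even in one dimension. The sketch below, with a hypothetical netlist, exhaustively searches block orderings for the placement minimizing total wire length; Google’s actual system instead uses deep reinforcement learning on a 2-D canvas, because brute force explodes factorially.

```python
import itertools

# Toy "floorplanning": order five blocks along a row so total wire
# length between connected pairs is minimized. Block names and the
# netlist are hypothetical.
blocks = ["cpu", "cache", "dram", "io", "phy"]
nets = [("cpu", "cache"), ("cache", "dram"), ("cpu", "io"), ("io", "phy")]

def wirelength(order):
    pos = {b: i for i, b in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

best = min(itertools.permutations(blocks), key=wirelength)  # 5! = 120 layouts
```

Five blocks mean only 120 layouts; a real chip has thousands of blocks on a two-dimensional canvas, which is why a learned policy that generalizes across chips is such a significant result.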

Machine learning aids in materials design

A long-held goal of chemists across many industries, including energy, pharmaceuticals, energetics, food additives and organic semiconductors, is to imagine the chemical structure of a new molecule and be able to predict how it will function for a desired application. In practice, this vision is difficult, often requiring extensive laboratory work to synthesize, isolate, purify and characterize newly designed molecules to obtain the desired information.

Recently, a team of Lawrence Livermore National Laboratory (LLNL) materials and computer scientists has brought this vision to fruition for energetic molecules by creating machine learning (ML) models that can predict molecules’ crystalline properties, such as molecular density, from their chemical structures alone. Predicting crystal structure descriptors (rather than the entire crystal structure) offers an efficient method to infer a material’s properties, thus expediting materials design and discovery. The research appears in the Journal of Chemical Information and Modeling.

“One of the team’s most prominent ML models is capable of predicting the crystalline density of energetic and energetic-like molecules with a high degree of accuracy compared to previous ML-based methods,” said Phan Nguyen, LLNL applied mathematician and co-first author of the paper.
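As an illustration of inferring a crystal property from structure-derived descriptors alone, here is a minimal one-descriptor regression. The descriptor choice, data values, and model are hypothetical toys, not the LLNL team’s actual ML models or data.

```python
# Hypothetical training data: (molecular weight, molecular volume,
# measured crystal density). All values are illustrative only.
data = [
    (46.07, 58.4, 0.789),
    (78.11, 89.0, 0.876),
    (227.13, 137.0, 1.654),
]

# One descriptor computable from structure alone: mass per molecular volume.
xs = [mw / vol for mw, vol, _ in data]
ys = [rho for _, _, rho in data]

# One-parameter least-squares fit: density ≈ a * (MW / volume).
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict_density(mw, vol):
    return a * mw / vol
```

The real models use far richer structural descriptors and nonlinear learners, but the workflow is the same: featurize the molecule, fit on known crystals, then predict properties for candidates before any synthesis.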

Nano Robots Walk Inside Blood When Hit With Lasers

Circa 2020.


Every robot is, at its heart, a computer that can move. That is true from the largest plane-sized flying machines down to the smallest of controllable nanomachines, small enough to someday even navigate through blood vessels.

New research, published August 26 in Nature, shows that it is possible to build legs into robots mere microns in length. When powered by lasers, these tiny machines can move, and some day, they may save lives in operating rooms or even, possibly, on the battlefield.

This project, funded in part by the Army Research Office and the Air Force Office of Scientific Research, demonstrated that, by adapting principles from origami, nano-scale legged robots can be printed and then directed.

Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power

But what struck me about his essay is that last clause: “if we as a society manage it responsibly.” Because, as Altman also admits, if he is right then A.I. will generate phenomenal wealth largely by destroying countless jobs — that’s a big part of how everything gets cheaper — and shifting huge amounts of wealth from labor to capital. And whether that world becomes a post-scarcity utopia or a feudal dystopia hinges on how wealth, power and dignity are then distributed — it hinges, in other words, on politics.


Will A.I. give us the lives of leisure we long for — or usher in a feudal dystopia? It depends.

A bio-inspired technique to mitigate catastrophic forgetting in binarized neural networks

Deep neural networks have achieved highly promising results on several tasks, including image and text classification. Nonetheless, many of these computational methods are prone to what is known as catastrophic forgetting, which essentially means that when they are trained on a new task, they tend to rapidly forget how to complete tasks they were trained to complete in the past.

Researchers at Université Paris-Saclay-CNRS recently introduced a new technique to alleviate forgetting in binarized neural networks. This technique, presented in a paper published in Nature Communications, is inspired by the idea of synaptic metaplasticity, the process through which synapses (junctions between two neurons) adapt and change over time in response to experiences.

“My group had been working on binarized neural networks for a few years,” Damien Querlioz, one of the researchers who carried out the study, told TechXplore. “These are a highly simplified form of deep neural networks, the flagship method of modern artificial intelligence, which can perform complex tasks with reduced memory requirements and energy consumption. In parallel, Axel, then a first-year Ph.D. student in our group, started to work on the synaptic metaplasticity models introduced in 2005 by Stefano Fusi.”
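One common way to formalize metaplasticity for binarized networks, consistent with the idea described above, is to keep a hidden real-valued weight behind each binary weight (the binary weight is its sign) and damp any update that pushes the hidden value back toward zero, i.e., toward a sign flip. The sech² damping factor below is one choice from the metaplasticity literature; treat this as an illustrative sketch, not necessarily the paper’s exact rule.

```python
import math

def metaplastic_update(w_hidden, grad, lr=0.1, m=1.0):
    """Update a hidden weight; damp steps that threaten a sign flip."""
    step = -lr * grad
    if step * w_hidden < 0:  # step opposes the current sign
        step *= 1.0 / math.cosh(m * w_hidden) ** 2  # damp consolidated weights
    return w_hidden + step

consolidated = metaplastic_update(2.0, grad=1.0)   # far from zero: barely moves
fragile = metaplastic_update(0.05, grad=1.0)       # near zero: flips easily
```

The effect is that weights that drifted far from zero while learning an old task become hard to overwrite when a new task is trained, which is exactly the mechanism that mitigates catastrophic forgetting.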

Mythic launches analog AI processor that consumes 10 times less power

Analog AI processor company Mythic launched its M1076 Analog Matrix Processor today to provide low-power AI processing.

The company uses analog circuits rather than digital to create its processor, making it easier to integrate memory into the processor and operate its device with 10 times less power than a typical system-on-chip or graphics processing unit (GPU).

The M1076 AMP can support up to 25 trillion operations per second (TOPS) of AI compute in a 3-watt power envelope. It is targeted at edge AI applications, but the company said it can scale from the edge to server applications, addressing multiple vertical markets including smart cities, industrial applications, enterprise applications, and consumer devices.
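The quoted figures imply an efficiency of roughly 8 TOPS per watt, which is the number to compare against a GPU’s throughput-per-watt:

```python
# Efficiency implied by the quoted figures: 25 TOPS in a 3 W envelope.
tops, watts = 25.0, 3.0
tops_per_watt = tops / watts  # roughly 8.3 TOPS per watt
```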

Building a Garden that Cares for Itself

Designing an autonomous, learning smart garden.


In the first episode of Build Out, Colt and Reto — tasked with designing the architecture for a “Smart Garden” — supplied two very different concepts that nevertheless featured many overlapping elements. Take a look at the video to see what they came up with, then continue reading to see how you can learn from their explorations to build your very own Smart Garden.

Both solutions aim to optimize plant care using sensors, weather forecasts, and machine learning. Watering and fertilizing routines for the plants are updated regularly to guarantee the best growth, health, and fruit yield possible.

Colt’s solution is optimized for small-scale home farming, using a modified CNC machine to care for a fruit or vegetable patch. The drill bit is replaced with a liquid spout, UV light, and camera, while the cutting area is replaced with a plant bed that includes sensors to track moisture, nutrient levels, and weight.
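At its core, each design reduces to a control decision fed by the sensors described above. A minimal, hypothetical rule combining a soil-moisture reading with the rain forecast might look like this; the thresholds are illustrative, and a learned model would tune them per plant.

```python
# Hypothetical watering rule: water only when the soil is dry and
# rain is unlikely. Thresholds are illustrative stand-ins.
def should_water(soil_moisture, rain_prob, dry_threshold=0.35, rain_cutoff=0.6):
    """Return True when irrigation is warranted."""
    return soil_moisture < dry_threshold and rain_prob < rain_cutoff

decision_dry_day = should_water(0.20, 0.10)  # dry soil, clear skies
decision_rainy = should_water(0.20, 0.90)    # dry soil, but rain expected
```

The machine-learning layer in both concepts effectively replaces these fixed thresholds with values learned from each plant’s observed growth and yield.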

Amazing New Chinese A.I.-Powered Language Model Wu Dao 2.0 Unveiled

It’s ten times more powerful than the current U.S. effort.


Earlier this month, Chinese artificial intelligence (A.I.) researchers at the Beijing Academy of Artificial Intelligence (BAAI) unveiled Wu Dao 2.0, the world’s biggest natural language processing (NLP) model. And it’s a big deal.

NLP is a branch of A.I. research that aims to give computers the ability to understand text and spoken words and respond to them in much the same way human beings can.

Last year, the San Francisco–based nonprofit A.I. research laboratory OpenAI wowed the world when it released its GPT-3 (Generative Pre-trained Transformer 3) language model. GPT-3 is a 175 billion–parameter deep learning model trained on text datasets with hundreds of billions of words. A parameter is a value in a neural network, adjusted during training, that gives each chunk of data a greater or lesser weighting, thus providing the neural network a learned perspective on the data.