‘Self-replicating Robotic Systems’ published in ‘Encyclopedia of Complexity and Systems Science’

A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann, Konrad Zuse and, in more recent times, by K. Eric Drexler in his book on nanotechnology, Engines of Creation (which coined the term clanking replicator for such machines), and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, von Neumann’s self-reproducing automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and of how it is separately translated and replicated in the cell. [9]

A self-replicating machine is thus an artificial self-replicating system that relies on conventional large-scale technology and automation. The concept, first proposed by von Neumann no later than the 1940s, has attracted a range of different approaches involving various types of technology. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler [10] to distinguish macroscale replicating systems from the microscopic nanorobots or “assemblers” that nanotechnology may make possible, but the term is informal and rarely used by others in popular or technical discussions. Replicators have also been called “von Neumann machines” after John von Neumann, who first rigorously studied the idea.
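Von Neumann’s actual universal constructor is far too elaborate to reproduce here (his cellular-automaton formulation uses a 29-state rule and a tape of inherited instructions), but the flavor of self-replication in a cellular automaton can be shown with a much simpler, well-known toy: Fredkin’s parity rule, under which any starting pattern copies itself. The sketch below illustrates that toy rule only, not von Neumann’s design; the grid size and seed pattern are arbitrary choices.

```python
# Toy demonstration of pattern self-replication in a cellular automaton.
# This is NOT von Neumann's universal constructor; it is Fredkin's parity
# rule: each cell's next state is the XOR of its four von Neumann
# neighbors. Because the rule is linear over GF(2), after 2^k steps the
# grid holds four translated copies of whatever seed pattern it started with.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One update of the parity (XOR) rule on a toroidal grid."""
    return (np.roll(grid, 1, axis=0) ^ np.roll(grid, -1, axis=0) ^
            np.roll(grid, 1, axis=1) ^ np.roll(grid, -1, axis=1))

grid = np.zeros((64, 64), dtype=np.uint8)
grid[30:33, 30:33] = [[0, 1, 0],   # an arbitrary 5-cell seed pattern
                      [1, 1, 1],
                      [0, 1, 0]]

for _ in range(8):                 # 8 = 2^3 steps
    grid = step(grid)

print(int(grid.sum()))             # prints 20: four disjoint copies of the seed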
Google DeepMind has announced an impressive grab bag of new products and prototypes that may just let it seize back its lead in the race to turn generative artificial intelligence into a mass-market concern.
Top billing goes to Gemini 2.0—the latest iteration of Google DeepMind’s family of multimodal large language models, now redesigned around the ability to control agents—and a new version of Project Astra, the experimental everything app that the company teased at Google I/O in May.
MIT Technology Review got to try out Astra in a closed-door live demo last week. It was a stunning experience, but there’s a gulf between polished promo and live demo.
Large-scale protein and gene profiling has massively expanded the landscape of cancer-associated proteins and gene mutations, but it has been difficult to discern whether these play an active role in the disease or are innocent bystanders.
In a study published in Nature Cancer, researchers at Baylor College of Medicine revealed a powerful and unbiased machine learning-based approach called FunMap for assessing the role of cancer-associated mutations and understudied proteins, with broad implications for advancing cancer biology and informing therapeutic strategies.
“Gaining functional information on the genes and proteins associated with cancer is an important step toward better understanding the disease and identifying potential therapeutic targets,” said corresponding author Dr. Bing Zhang, professor of molecular and human genetics and part of the Lester and Sue Smith Breast Center at Baylor.
Endiatx’s PillBot, a swallowable camera propelled by pumpjet thrusters, makes remote stomach diagnostics a reality, replacing invasive endoscopy procedures and advancing telemedicine.
Anyone who has operated a 3D printer before, especially those new to these specialized tools, has likely had problems with the print bed. The bed might not be at the correct temperature, leading to poor adhesion; it might be uncalibrated or dirty; or it might cause any number of other issues that ultimately lead to a failed print. Most of us work these problems out through trial and error and eventually get settled in, but this novel 3D printer instead does away with the bed entirely and prints on whatever surface happens to be nearby.
The printer is the product of [Daniel Campos Zamora] at the University of Washington and is called MobiPrint. It uses a fairly standard, commercially available 3D printer head but attaches it to the base of a modified robotic vacuum cleaner. The vacuum cleaner is modified with open-source software that allows it to map its environment without the need for the manufacturer’s cloud services, which in turn lets the 3D printer print on whichever surface the robot finds in its travels. The goal isn’t necessarily to eliminate printer bed problems; a robot with this capability could have many more applications in the realm of accessibility or even, in the future, printing while on the move.
There were a few surprising discoveries along the way, mentioned in an IEEE Spectrum article: while testing various household surfaces, [Campos Zamora] found that carpet adheres to these prints surprisingly well, so well that prints made on it are almost impossible to remove. We’ve seen a few other incredibly mobile 3D printers, but none that interact with their environment in quite this way.
Originally published on Towards AI.
In its most basic form, Bayesian inference is a technique for statistical inference that states how likely a hypothesis is given new evidence. The method comes from Bayes’ Theorem, which provides a way to calculate the probability that an event will happen, or has happened, given prior knowledge of conditions that may be related to the event:
Here’s a somewhat rough rendering of Bayes’ Theorem:
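In standard notation, Bayes’ Theorem is:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```

where P(A|B) is the posterior probability of hypothesis A given evidence B, P(B|A) is the likelihood of observing B when A holds, P(A) is the prior probability of A, and P(B) is the overall probability of the evidence. As a hypothetical worked example (the numbers are invented for illustration): if a condition affects 1% of a population, a test detects it 90% of the time, and the test also fires on 5% of unaffected people, then the posterior probability of the condition given a positive result is 0.9 × 0.01 / (0.9 × 0.01 + 0.05 × 0.99) ≈ 0.15, far below what the 90% detection rate alone might suggest.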
UNIVERSITY PARK, Pa. — A recently developed electronic tongue is capable of identifying differences in similar liquids, such as milk with varying water content; diverse products, including soda types and coffee blends; signs of spoilage in fruit juices; and instances of food safety concerns. The team, led by researchers at Penn State, also found that results were even more accurate when artificial intelligence (AI) used its own assessment parameters to interpret the data generated by the electronic tongue.
The tongue comprises a graphene-based ion-sensitive field-effect transistor, a conductive device that can detect chemical ions, linked to an artificial neural network trained on various datasets. Critically, Das noted, the sensors are non-functionalized, meaning that one sensor can detect different types of chemicals rather than having a specific sensor dedicated to each potential chemical. The researchers provided the neural network with 20 specific parameters to assess, all of which relate to how a sample liquid interacts with the sensor’s electrical properties. Based on these researcher-specified parameters, the AI could accurately detect samples, including watered-down milks, different types of sodas, blends of coffee and multiple fruit juices at several levels of freshness, and report on their content with greater than 80% accuracy in about a minute.
“After achieving a reasonable accuracy with human-selected parameters, we decided to let the neural network define its own figures of merit by providing it with the raw sensor data. We found that the neural network reached a near ideal inference accuracy of more than 95% when utilizing the machine-derived figures of merit rather than the ones provided by humans,” said co-author Andrew Pannone, a doctoral student in engineering science and mechanics advised by Das. “So, we used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it makes a decision.”
This approach uses game theory, a decision-making framework that considers the choices of others to predict the outcome for a single participant, to assign values to the data under consideration. With these explanations, the researchers could reverse engineer an understanding of how the neural network weighed various components of the sample to make a final determination, giving the team a glimpse into the neural network’s decision-making process, which has remained largely opaque in the field of AI, according to the researchers. They found that, instead of simply assessing individual human-assigned parameters, the neural network considered together the data it determined were most important, with the Shapley additive explanations revealing how heavily it weighted each input.
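The press release includes no code, but the general workflow it describes (train a neural network on sensor-derived features, then use Shapley additive explanations to ask which inputs drove its decisions) can be sketched with the open-source shap library. Everything below, the synthetic data, the tiny classifier, and the feature count, is an invented stand-in, not the Penn State team’s actual pipeline.

```python
# Minimal sketch of the analysis pattern described above: fit a small
# neural network on tabular "sensor" features, then compute Shapley
# additive explanations (SHAP) to estimate how much each feature
# contributed to the predictions. Data here are synthetic placeholders.
import numpy as np
import shap                                      # pip install shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # 200 samples, 5 "sensor" features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # labels depend on features 0 and 2

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# KernelExplainer is model-agnostic: it perturbs inputs against a background
# sample and fits Shapley values to the resulting changes in the output.
predict_pos = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.KernelExplainer(predict_pos, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:10])      # one row of values per sample

# Mean absolute SHAP value per feature serves as a global importance score;
# features 0 and 2 should dominate, mirroring how the labels were built.
print(np.abs(shap_values).mean(axis=0))
```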
ATLANTA — An innovative approach to public safety is taking shape on Cleveland Avenue, where Atlanta City Councilman Antonio Lewis has partnered with the 445 Cleveland apartment complex to deploy AI-powered robotic dogs to deter crime.
The robotic dog, named “Beth,” is equipped with 360-degree cameras, a siren, and stair-climbing capabilities. Unlike other artificial intelligence robots like “Spunky” on Boulevard, Beth is monitored in real time by a human operator located in Bogotá, Colombia.
“Our operator who is physically watching these cameras needs to deploy the dog. It’s all in one system, and they are just controlling it, like a video game at home, except it’s not a video game—it’s Beth,” said Avi Wolf, the owner of 445 Cleveland.