
The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce — and in a way not seen in plants and animals.

Formed from the stem cells of the African clawed frog (Xenopus laevis), from which they take their name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal.

Now the scientists who developed them at the University of Vermont, Tufts University and Harvard University’s Wyss Institute for Biologically Inspired Engineering say they have discovered an entirely new form of biological reproduction, different from any animal or plant known to science.

Joshua Bongard, a computer scientist and robotics expert at the University of Vermont, said they found that the xenobots, which were initially sphere-shaped and made from around 3,000 cells, could replicate. But it happened rarely and only in specific circumstances. The xenobots used “kinematic replication” — a process that is known to occur at the molecular level but has never been observed before at the scale of whole cells or organisms, Bongard said.

Circa 2018 #artificialintelligence #doctor


Abstract: Online symptom checkers have significant potential to improve patient care; however, their reliability and accuracy remain variable. We hypothesised that an artificial intelligence (AI) powered triage and diagnostic system would compare favourably with human doctors with respect to triage and diagnostic accuracy. We performed a prospective validation study of the accuracy and safety of an AI powered triage and diagnostic system. Identical cases were evaluated by both an AI system and human doctors. Differential diagnoses and triage outcomes were evaluated by an independent judge, who was blinded to the source (AI system or human doctor) of the outcomes. Independently of these cases, vignettes from publicly available resources were also assessed to provide a benchmark against previous studies and the diagnostic component of the MRCGP exam. Overall, we found that the Babylon AI powered Triage and Diagnostic System was able to identify the condition modelled by a clinical vignette with accuracy comparable to human doctors (in terms of precision and recall). In addition, we found that the triage advice recommended by the AI System was, on average, safer than that of human doctors, when compared to the ranges of acceptable triage provided by independent expert judges, with only a minimal reduction in appropriateness.
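
For readers unfamiliar with the metrics mentioned above, the short Python sketch below illustrates one way precision and recall can be computed when a differential diagnosis is scored against the conditions judged relevant for a vignette. It is purely illustrative: the case data, condition names and scoring choices are hypothetical and are not taken from the Babylon study.

```python
# Illustrative sketch (not the study's evaluation pipeline): precision and
# recall of a differential diagnosis against a vignette's relevant conditions.

def precision_recall(predicted: set[str], relevant: set[str]) -> tuple[float, float]:
    """Precision = TP / |predicted|; recall = TP / |relevant|."""
    if not predicted or not relevant:
        return 0.0, 0.0
    true_positives = len(predicted & relevant)
    return true_positives / len(predicted), true_positives / len(relevant)

# Hypothetical vignette: conditions judged relevant vs. the differentials returned.
relevant_conditions = {"acute appendicitis", "mesenteric adenitis"}
ai_differential = {"acute appendicitis", "gastroenteritis", "mesenteric adenitis"}
doctor_differential = {"acute appendicitis", "urinary tract infection"}

for label, differential in [("AI system", ai_differential), ("doctor", doctor_differential)]:
    p, r = precision_recall(differential, relevant_conditions)
    print(f"{label}: precision={p:.2f}, recall={r:.2f}")
```

In this framing, precision rewards a differential that avoids spurious conditions, while recall rewards one that covers everything the judges deemed relevant.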


Bridging Technology And Medicine For The Modern Healthcare Ecosystem — Dr. Mona G. Flores, MD, Global Head of Medical AI, NVIDIA.


Dr. Mona Flores, MD, is the Global Head of Medical AI at NVIDIA (https://blogs.nvidia.com/blog/author/monaflores/), the American multinational technology company, where she oversees the company’s AI initiatives in medicine and healthcare to bridge the chasm between technology and medicine.

Dr. Flores first joined NVIDIA in 2018 with a focus on developing their healthcare ecosystem. Before joining NVIDIA, she served as the chief medical officer of digital health company Human-Resolution Technologies after a 25+ year career in medicine and cardiothoracic surgery.

To say we’re at an inflection point of the technological era may be an obvious declaration to some. The opportunities at hand, and how various technologies and markets will advance, are nuanced, though a common theme is emerging. The pace of innovation is moving at a rate seen only at rare points in human history. The invention of the printing press and the ascension of the internet come to mind as similar inflection points, but current innovation trends are being driven aggressively by machine learning and artificial intelligence (AI). In fact, AI is empowering rapid technology advances in virtually all areas, from the edge and personal devices to the data center and even chip design itself.

There is also a self-perpetuating effect at play: demand for intelligent machines and automation everywhere is ramping up, whether you consider driver-assist technologies in the automotive industry, recommenders and speech recognition in phones, or smart home technologies and the IoT. What’s spurring this voracious demand for tech is that leading-edge OEMs, from big names like Tesla and Apple to scrappy start-ups, are now beginning to realize great gains in silicon and system-level development beyond the confines of Moore’s Law alone.

Do you know what the Earth’s atmosphere is made of? You’d probably remember it’s oxygen, and maybe nitrogen. And with a little help from Google you can easily reach a more precise answer: 78% nitrogen, 21% oxygen and 1% argon. However, when it comes to the composition of exo-atmospheres—the atmospheres of planets outside our solar system—the answer is not known. This is a shame, as atmospheres can indicate the nature of planets, and whether they can host life.

As exoplanets are so far away, it has proven extremely difficult to probe their atmospheres. Research suggests that artificial intelligence (AI) may be our best bet to explore them—but only if we can show that these algorithms think in reliable, scientific ways, rather than cheating the system. Now our new paper, published in The Astrophysical Journal, has provided reassuring insight into their mysterious logic.

Astronomers typically investigate exoplanets using the transit method, which involves measuring dips in light from a star as a planet passes in front of it. If the planet has an atmosphere, it absorbs a very tiny bit of light, too. By observing this event at different wavelengths—colors of light—the fingerprints of molecules can be seen in the absorbed starlight, forming recognizable patterns in what we call a spectrum. A typical signal produced by the atmosphere of a Jupiter-sized planet only reduces the stellar light by ~0.01% if the star is Sun-like. Earth-sized planets produce signals 10–100 times weaker. It’s a bit like spotting the eye color of a cat from an aircraft.
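
To put those percentages in context, the rough sketch below estimates the size of a transit and of an atmospheric signal using the common scale-height approximation (signal ≈ 2·n·Rp·H / Rs², with scale height H = kT/μg). All parameter values are illustrative assumptions for a hot-Jupiter-like and an Earth-like case; they are not taken from the paper discussed here.

```python
# Rough, back-of-the-envelope estimate of transit and atmospheric signal sizes.
# All numbers below are illustrative assumptions, not values from the paper.

K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66054e-27    # atomic mass unit, kg
R_SUN = 6.957e8      # solar radius, m

def transit_depth(r_planet: float, r_star: float = R_SUN) -> float:
    """Fraction of starlight blocked by the opaque planetary disc: (Rp/Rs)^2."""
    return (r_planet / r_star) ** 2

def atmosphere_signal(r_planet: float, temp_k: float, mu_amu: float, gravity: float,
                      n_scale_heights: float = 2.0, r_star: float = R_SUN) -> float:
    """Extra dimming from an atmosphere spanning a few scale heights:
    roughly 2 * n * Rp * H / Rs^2, with H = k*T / (mu*g)."""
    scale_height = K_B * temp_k / (mu_amu * AMU * gravity)
    return 2.0 * n_scale_heights * r_planet * scale_height / r_star ** 2

# Assumed hot-Jupiter-like values: R ~ 7.0e7 m, T ~ 1300 K, H2-rich (mu ~ 2.3), g ~ 25 m/s^2
print(f"Jupiter-sized transit depth:     {transit_depth(7.0e7):.1%}")
print(f"Jupiter-sized atmosphere signal: {atmosphere_signal(7.0e7, 1300, 2.3, 25):.3%}")

# Assumed Earth-like values: R ~ 6.4e6 m, T ~ 288 K, N2/O2 air (mu ~ 29), g ~ 9.8 m/s^2
print(f"Earth-sized atmosphere signal:   {atmosphere_signal(6.4e6, 288, 29, 9.8):.5%}")
```

With these assumed numbers the Jupiter-sized case lands near the ~0.01% level quoted above (against a ~1% dip from the opaque disc itself); the exact figures shift considerably with the assumed temperature, composition, stellar radius and number of observable scale heights.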

Tesla’s head of AI has released new footage of the automaker’s auto labeling tool for its self-driving effort.

It’s expected to be an important accelerator in improving Tesla’s Full Self-Driving Beta.

Tesla is often said to have a massive lead in self-driving data thanks to equipping all its cars with sensors early on and collecting real-world data from a fleet that now includes over a million vehicles.