
That radio DJ you hear might already be a robot

OAKLAND/LOS ANGELES, Calif., Dec 2 – Andy Chanley, the afternoon drive host at Southern California’s public radio station 88.5 KCSN, has been a radio DJ for over 32 years. And now, thanks to artificial intelligence technology, his voice will live on simultaneously in many places.

“I may be a robot, but I still love to rock,” says the robot DJ named ANDY, short for Artificial Neural Disk-JockeY, speaking in Chanley’s voice during a demonstration for Reuters in which it was hard to distinguish from a human DJ.

Our phones, speakers and rice cookers have been talking to us for years, but their voices have been robotic. Seattle-based AI startup WellSaid Labs says it has finessed the technology to create more than 50 real human voice avatars like ANDY so far; a producer just types in text to create the narration.
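WellSaid’s pipeline is proprietary, but the workflow described above, typing text and getting spoken narration back, is ordinary text-to-speech. A minimal Python sketch using the open-source pyttsx3 library purely as a generic stand-in (the rate setting and script line are illustrative assumptions, not WellSaid’s API):

    # Generic text-to-speech sketch; pyttsx3 stands in for WellSaid's
    # proprietary voice-avatar service, which is not publicly documented here.
    import pyttsx3

    engine = pyttsx3.init()            # use the system's default TTS voice
    engine.setProperty("rate", 170)    # words per minute; illustrative DJ cadence
    engine.say("I may be a robot, but I still love to rock.")
    engine.runAndWait()                # block until playback finishes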

World’s first living robots can now reproduce, scientists say

The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce, and in a way not seen in plants and animals.

Formed from the stem cells of the African clawed frog (Xenopus laevis), from which they take their name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal.

Now the scientists who developed them at the University of Vermont, Tufts University and Harvard University’s Wyss Institute for Biologically Inspired Engineering say they have discovered an entirely new form of biological reproduction, different from any animal or plant known to science.

Josh Bongard, a computer science professor at the University of Vermont and a lead author of the study, said the team found that the xenobots, which were initially sphere-shaped and made from around 3,000 cells, could replicate, but that this happened rarely and only in specific circumstances. The xenobots used “kinetic replication,” a process known to occur at the molecular level but never before observed at the scale of whole cells or organisms, Bongard said.

A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis

Abstract: Online symptom checkers have significant potential to improve patient care; however, their reliability and accuracy remain variable. We hypothesised that an artificial intelligence (AI) powered triage and diagnostic system would compare favourably with human doctors with respect to triage and diagnostic accuracy. We performed a prospective validation study of the accuracy and safety of an AI-powered triage and diagnostic system. Identical cases were evaluated by both the AI system and human doctors. Differential diagnoses and triage outcomes were evaluated by an independent judge, who was blinded to the source (AI system or human doctor) of each outcome. Independently of these cases, vignettes from publicly available resources were also assessed to provide a benchmark against previous studies and the diagnostic component of the MRCGP exam. Overall, we found that the Babylon AI-powered Triage and Diagnostic System was able to identify the condition modelled by a clinical vignette with accuracy comparable to that of human doctors (in terms of precision and recall). In addition, we found that the triage advice recommended by the AI system was, on average, safer than that of human doctors, when compared against the ranges of acceptable triage provided by independent expert judges, with only a minimal reduction in appropriateness.
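The headline metrics here, precision and recall over ranked differentials, are simple to compute once an independent judge has labelled each vignette. A minimal sketch with invented data (the conditions and differentials below are hypothetical, not from the study):

    # Micro-averaged precision/recall over differential diagnoses.
    # Each case pairs the judge's true condition with the system's
    # ranked differential; all data here is invented for illustration.
    cases = [
        {"truth": "migraine",     "differential": ["migraine", "tension headache"]},
        {"truth": "appendicitis", "differential": ["gastroenteritis", "appendicitis"]},
        {"truth": "pneumonia",    "differential": ["bronchitis", "asthma"]},
    ]

    hits = sum(c["truth"] in c["differential"] for c in cases)
    suggested = sum(len(c["differential"]) for c in cases)

    recall = hits / len(cases)       # true condition appears in the differential
    precision = hits / suggested     # fraction of suggested diagnoses that are correct
    print(f"recall={recall:.2f}  precision={precision:.2f}")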

From: Yura Perov

[v1] Wed, 27 Jun 2018 21:18:37 UTC (54 KB)

Dr. Mona Flores, M.D., Global Head of Medical AI, NVIDIA — Bridging Technology And Medicine

Bridging Technology And Medicine For The Modern Healthcare Ecosystem — Dr. Mona G. Flores, MD, Global Head of Medical AI, NVIDIA.


Dr. Mona Flores is the Global Head of Medical AI at NVIDIA (https://blogs.nvidia.com/blog/author/monaflores/), the American multinational technology company, where she oversees the company’s AI initiatives in medicine and healthcare, bridging the chasm between technology and medicine.

Dr. Flores first joined NVIDIA in 2018 with a focus on developing its healthcare ecosystem. Before joining NVIDIA, she served as chief medical officer of the digital health company Human-Resolution Technologies, after a 25-plus-year career in medicine and cardiothoracic surgery.

Dr. Flores received her medical degree from Oregon Health and Science University, followed by a general surgery residency at the University of California at San Diego, a Postdoctoral Fellowship at Stanford, and a cardiothoracic surgery residency and fellowship at Columbia University in New York.

Dr. Flores also holds a master’s degree in biology from San Jose State University and an MBA from the University at Albany School of Business. She initially worked in investment banking for a few years before pursuing her passion for medicine and technology.

Future Chip Innovation Will Be Driven By AI-Powered Co-Optimization Of Hardware And Software

To say we’re at an inflection point of the technological era may be an obvious declaration to some. The opportunities at hand, and how various technologies and markets will advance, are nuanced, but a common theme is emerging: innovation is moving at a pace humankind has seen at only a few rare points in history. The invention of the printing press and the ascension of the internet come to mind as similar inflection points, but current innovation trends are being driven aggressively by machine learning and artificial intelligence (AI). In fact, AI is empowering rapid technology advances in virtually all areas, from the edge and personal devices to the data center, and even to chip design itself.

There is also a self-perpetuating effect at play, because demand for intelligent machines and automation is ramping up everywhere, whether you consider driver-assist technologies in the automotive industry, recommenders and speech recognition in phones, or smart home technologies and the IoT. Part of what is spurring this voracious demand for tech is that leading-edge OEMs, from big names like Tesla and Apple to scrappy start-ups, are now beginning to realize major gains in silicon and system-level development beyond the confines of Moore’s Law alone.

AI can reliably spot molecules on exoplanets, and might one day even discover new laws of physics

Do you know what the Earth’s atmosphere is made of? You’d probably remember it’s oxygen, and maybe nitrogen. And with a little help from Google you can easily reach a more precise answer: 78% nitrogen, 21% oxygen and 1% argon. However, when it comes to the composition of exo-atmospheres, the atmospheres of planets outside our solar system, the answer is not known. This is a shame, as atmospheres can indicate the nature of planets, and whether they can host life.

As exoplanets are so far away, it has proven extremely difficult to probe their atmospheres. Research suggests that artificial intelligence (AI) may be our best bet to explore them—but only if we can show that these algorithms think in reliable, scientific ways, rather than cheating the system. Now our new paper, published in The Astrophysical Journal, has provided reassuring insight into their mysterious logic.
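The paper’s full analysis is more involved, but one standard way to test whether a model is keying on physically meaningful parts of a spectrum rather than “cheating the system” is an occlusion test: blank out one wavelength window at a time and watch how the prediction moves. A toy sketch (the linear “model” and random spectrum are stand-ins, not the authors’ network):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in trained model: a fixed linear scorer over 100 wavelength bins,
    # returning something like a molecule-detection score.
    weights = rng.normal(size=100)
    def model(spectrum):
        return float(weights @ spectrum)

    spectrum = rng.normal(size=100)      # stand-in observed spectrum
    baseline = model(spectrum)

    # Occlusion test: zero out each 10-bin window and record the score shift.
    # A trustworthy model should respond most where the molecule absorbs.
    shifts = []
    for start in range(0, 100, 10):
        occluded = spectrum.copy()
        occluded[start:start + 10] = 0.0
        shifts.append(abs(model(occluded) - baseline))

    top = int(np.argmax(shifts)) * 10
    print(f"most influential window: bins {top}-{top + 9}")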

Astronomers typically investigate exoplanets with the transit method, which involves measuring dips in light from a star as a planet passes in front of it. If the planet has an atmosphere, it absorbs a tiny extra bit of light, too. By observing this event at different wavelengths (colors of light), astronomers can see the fingerprints of molecules in the absorbed starlight, forming recognizable patterns in what we call a spectrum. A typical signal produced by the atmosphere of a Jupiter-sized planet reduces the stellar light by only ~0.01% if the star is Sun-like, and Earth-sized planets produce signals 10 to 100 times smaller. It’s a bit like spotting the eye color of a cat from an aircraft.
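The numbers in that comparison follow from simple geometry: the planet’s disk blocks light in proportion to the planet-to-star area ratio, and the atmosphere only adds a thin annulus a few scale heights thick. A back-of-the-envelope sketch (the scale height and annulus thickness are illustrative assumptions):

    # Transit geometry, back of the envelope. The atmospheric signal is the
    # extra light blocked by a thin annulus around the planet; the scale
    # height H and the factor N_H are rough, illustrative assumptions.
    R_SUN = 6.96e8   # stellar radius in metres (Sun-like star)
    R_JUP = 7.15e7   # planetary radius in metres (Jupiter-sized planet)
    H = 2.0e5        # assumed atmospheric scale height, ~200 km
    N_H = 2          # assumed effective annulus thickness, in scale heights

    transit_depth = (R_JUP / R_SUN) ** 2             # dip from the planet disk
    atmo_signal = 2 * R_JUP * N_H * H / R_SUN ** 2   # annulus area over stellar disk

    print(f"planet transit depth: {transit_depth:.2%}")   # about 1%
    print(f"atmospheric signal:   {atmo_signal:.4%}")     # about 0.01%, as quoted above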
