
An artificial intelligence (AI)-based technology rapidly diagnoses rare disorders in critically ill children with high accuracy, according to a report by scientists from University of Utah Health and Fabric Genomics, collaborators on a study led by Rady Children's Hospital in San Diego. The benchmark finding, published in Genome Medicine, foreshadows the next phase of medicine, where technology helps clinicians quickly determine the root cause of disease so they can give patients the right treatment sooner.

“This study is an exciting milestone demonstrating how rapid insights from AI-powered decision support technologies have the potential to significantly improve patient care,” says Mark Yandell, Ph.D., co-corresponding author on the paper. Yandell is a professor of human genetics and Edna Benning Presidential Endowed Chair at U of U Health, and a founding scientific advisor to Fabric.

Worldwide, about seven million infants are born with serious genetic disorders each year. For these children, life usually begins in intensive care. A handful of NICUs in the U.S., including at U of U Health, are now searching for genetic causes of disease by reading, or sequencing, the three billion DNA letters that make up the human genome. While it takes hours to sequence the whole genome, it can take days or weeks of computational and manual analysis to diagnose the illness.

Circa 2019 😀


Because they can process massive amounts of data, computers can perform analytical tasks that are beyond human capability. Google, for instance, is using its computing power to develop AI algorithms that combine two-dimensional CT slices of the lungs into a three-dimensional model of the organ and examine the entire structure to determine whether cancer is present. Radiologists, in contrast, have to look at these images individually and attempt to reconstruct them in their heads. Another Google algorithm can do something radiologists cannot do at all: determine patients' risk of cardiovascular disease by looking at a scan of their retinas, picking up on subtle changes related to blood pressure, cholesterol, smoking history and aging. "There's potential signal there beyond what was known before," says Google product manager Daniel Tse.
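As a rough illustration of the stacking step described above, here is a minimal Python sketch (NumPy only) that assembles individual 2-D CT slices into a single 3-D volume so the whole structure can be analyzed at once. The `load_ct_slices` helper, the array sizes, and the brightness threshold are hypothetical placeholders for illustration; Google's actual system is a 3-D deep-learning pipeline, not a threshold.

```python
import numpy as np

def load_ct_slices(num_slices=120, height=512, width=512, seed=0):
    """Stand-in for reading a stack of 2-D CT slices from disk.

    A real pipeline would read DICOM files (e.g. with pydicom) and sort the
    slices by their position along the scanner's z-axis; here we simulate
    grayscale slices with random intensities.
    """
    rng = np.random.default_rng(seed)
    return [rng.random((height, width), dtype=np.float32) for _ in range(num_slices)]

def slices_to_volume(slices):
    """Stack 2-D slices into one 3-D volume of shape (depth, height, width)."""
    return np.stack(slices, axis=0)

def candidate_regions(volume, threshold=0.999):
    """Toy whole-volume analysis: flag unusually bright voxels as candidates.

    A real lung-cancer detector would run a 3-D convolutional network over the
    volume; thresholding only shows that the model sees the entire structure
    at once rather than one slice at a time.
    """
    return np.argwhere(volume > threshold)  # (z, y, x) positions of candidate voxels

if __name__ == "__main__":
    volume = slices_to_volume(load_ct_slices())
    print("volume shape (depth, height, width):", volume.shape)
    print("candidate voxels flagged:", len(candidate_regions(volume)))
```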

The Black Box Problem

AI programs could end up revealing entirely new links between biological features and patient outcomes. A 2019 paper in JAMA Network Open described a deep-learning algorithm trained on more than 85,000 chest x-rays from people enrolled in two large clinical trials that had tracked them for more than 12 years. The algorithm scored each patient’s risk of dying during this period. The researchers found that 53 percent of the people the AI put into a high-risk category died within 12 years, as opposed to 4 percent in the low-risk category. The algorithm did not have information on who died or on the cause of death. The lead investigator, radiologist Michael Lu of Massachusetts General Hospital, says that the algorithm could be a helpful tool for assessing patient health if combined with a physician’s assessment and other data such as genetics.
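To make concrete what it means for an algorithm to score each patient's risk and for researchers to then compare observed mortality across risk categories, here is a small Python sketch on synthetic data. The scores, the 0.8 cutoff, and the toy score-to-outcome relationship are invented for illustration and are not the JAMA Network Open study's model or data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: one risk score per patient (as a deep-learning model
# might output from a chest x-ray) and a 0/1 flag for death during follow-up.
# Both arrays are fabricated purely for illustration.
n_patients = 10_000
risk_scores = rng.random(n_patients)
died = rng.random(n_patients) < (0.02 + 0.5 * risk_scores**3)  # toy relationship

def stratify(scores, outcomes, high_risk_cutoff=0.8):
    """Split patients into high/low risk by score and report observed mortality."""
    high = scores >= high_risk_cutoff
    low = ~high
    return {
        "n_high": int(high.sum()),
        "n_low": int(low.sum()),
        "high_risk_mortality": float(outcomes[high].mean()),
        "low_risk_mortality": float(outcomes[low].mean()),
    }

if __name__ == "__main__":
    summary = stratify(risk_scores, died)
    for key, value in summary.items():
        print(f"{key}: {value:.3f}" if "mortality" in key else f"{key}: {value}")
```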

AI is changing the way we live and the global balance of military power. Ex-Pentagon software chief Nicolas Chaillan said this month that the U.S. has already lost out to China in military applications. Even 98-year-old Henry Kissinger weighs in on AI as co-author of a new book due next month, "The Age of AI: And Our Human Future."

Kai-Fu Lee has been sizing up the implications for decades. The former Google executive turned venture capitalist looked at U.S.-China competition in his 2018 book, "AI Superpowers." His new book, "AI 2041," co-authored with science fiction writer Chen Qiufan, suggests how AI will bring sweeping changes to daily life in the next 20 years. I talked earlier this month to Lee, who currently oversees $2.7 billion of assets at Beijing-headquartered Sinovation Ventures. Sinovation has backed seven AI start-ups that have become "unicorns" worth more than $1 billion: AInnovation, 4Paradigm, Megvii, Momenta, WeRide, Horizon Robotics and Bitmain. We discussed Lee's new book, the investments he's made based on his predictions in it, and where the U.S.-China AI rivalry now stands. Excerpts follow.


The strategy outlines how AI can be applied to defence and security in a protected and ethical way. As such, it sets standards of responsible use of AI technologies, in accordance with international law and NATO’s values. It also addresses the threats posed by the use of AI by adversaries and how to establish trusted cooperation with the innovation community on AI.

Artificial Intelligence is one of the seven technological areas which NATO Allies have prioritized for their relevance to defence and security. These include quantum-enabled technologies, data and computing, autonomy, biotechnology and human enhancements, hypersonic technologies, and space. Of all these dual-use technologies, Artificial Intelligence is known to be the most pervasive, especially when combined with others like big data, autonomy, or biotechnology. To address this complex challenge, NATO Defence Ministers also approved NATO’s first policy on data exploitation.

Individual strategies will be developed for all priority areas, following the same ethical approach as that adopted for Artificial Intelligence.

The truth is these systems aren't masters of language. They're nothing more than mindless "stochastic parrots." They don't understand a thing about what they say, and that makes them dangerous. They tend to "amplify biases and other issues in the training data" and regurgitate what they've read before, but that doesn't stop people from ascribing intentionality to their outputs. GPT-3 should be recognized for what it is: a dumb, even if potent, language generator, and not a machine so close to us in humanness as to call it "self-aware."

On the other hand, we should ponder whether OpenAI's intentions are honest and whether the company has too much control over GPT-3. Should any company have absolute authority over an AI that could be used for so much good, or so much evil? What happens if it decides to shift away from its initial promises and put GPT-3 at the service of its shareholders?



Not even our imagination will manage to keep up with technology’s pace.

The Marine Corps is expanding its mission set to include operating robots in shallow waters, a first for the service that comes as it prepares to operate on islands in the Indo-Pacific.

The Explosive Ordnance Disposal Remotely Operated Vehicle is a box-shaped robot that can navigate in shallow waters, where it will be able to identify and neutralize threats, according to a Marine Corps Systems Command news release issued Thursday.

The robot, also referred to as an ROV, short for remotely operated vehicle, has high-definition video capability and can provide real-time feedback to explosive ordnance disposal divers, according to the release. It also uses sound navigation and ranging (sonar) sensors.

Facebook is pouring a lot of time and money into augmented reality, including building its own AR glasses with Ray-Ban. Right now, these gadgets can only record and share imagery, but what does the company think such devices will be used for in the future?

A new research project led by Facebook's AI team suggests the scope of the company's ambitions. It imagines AI systems that constantly analyze people's lives using first-person video, recording what they see, do, and hear in order to help them with everyday tasks. Facebook's researchers have outlined a series of skills the company wants these systems to develop, including "episodic memory" (answering questions like "where did I leave my keys?") and "audio-visual diarization" (remembering who said what, and when).
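To give a flavor of what an "episodic memory" query could look like, here is a toy Python sketch. It assumes, purely hypothetically, that a wearable device logs timestamped object sightings with a location label, so that "where did I leave my keys?" reduces to returning the most recent sighting; Facebook's actual research system is not described by this code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sighting:
    """One detection event from a (hypothetical) first-person video log."""
    timestamp: float   # seconds since the recording session started
    object_name: str   # label produced by an object detector
    location: str      # e.g. a room or surface label from a localization model

class EpisodicMemory:
    """Toy store of object sightings supporting "where did I leave X?" queries."""

    def __init__(self) -> None:
        self._sightings: List[Sighting] = []

    def record(self, sighting: Sighting) -> None:
        self._sightings.append(sighting)

    def last_seen(self, object_name: str) -> Optional[Sighting]:
        """Return the most recent sighting of the object, if any."""
        matches = [s for s in self._sightings if s.object_name == object_name]
        return max(matches, key=lambda s: s.timestamp) if matches else None

if __name__ == "__main__":
    memory = EpisodicMemory()
    memory.record(Sighting(10.0, "keys", "hallway table"))
    memory.record(Sighting(95.5, "keys", "kitchen counter"))
    memory.record(Sighting(120.0, "mug", "desk"))

    hit = memory.last_seen("keys")
    print(f"Keys last seen at t={hit.timestamp}s: {hit.location}")
```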

Ron Hetrick, a labor economist at EMSI and one of the report's authors, said the industry as a whole is not yet able to bring in robotics at a meaningful level, but restaurant business models will continue to evolve as labor challenges persist. He expects those models to change so that customers need less service.

“You will probably lose out on the amount of restaurants that you can go sit in,” Hetrick said.

Miso Robotics' Bell said that software engineers are always in high demand, but the company is facing "normal challenges" in terms of worker availability. The current supply chain crunch is more of an immediate concern.

NASA has released a new video that imagines future human explorers and space tourists.

Robotic missions have toured much of our Solar System – but so far, the only place beyond Earth where humans have stood is the Moon. That may change in the coming decades, with space agencies vying to achieve the historic milestone of putting the first astronaut on Mars. Towards the end of this century, if the cost of launching into space falls to a few cents per kilogram, space tourism may become as cheap as a transatlantic flight today. New forms of space propulsion in the 22nd century and beyond may open up the stars to human settlement.

In this short film, NASA has visualised some of the distant places that lie waiting to be explored. We get a glimpse of people exploring the Red Planet, standing in a cloud city on Venus, drifting towards the water plumes of Enceladus, and even kayaking on Titan. We are then shown scientifically accurate depictions of exoplanets that humans may visit in the more distant future.