
AI-powered video technology is becoming ubiquitous, tracking our faces and bodies through stores, offices, and public spaces. In some countries the technology constitutes a powerful new layer of policing and government surveillance.

Fortunately, as some researchers from the Belgian university KU Leuven have just shown, you can often hide from an AI video system with the aid of a simple color printout.

The researchers showed that the image they designed can hide a whole person from an AI-powered computer-vision system. They demonstrated the attack on YOLOv2, a popular open-source object-recognition system.
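At a high level, the attack treats the printed patch as an optimization variable: its pixel values are adjusted by gradient descent until the detector's confidence that a person is present collapses. A toy sketch of that loop, using an invented logistic "detector" as a stand-in for YOLOv2 (all numbers here are illustrative; the real attack backpropagates through the network's person-objectness output):

```python
import math
import random

random.seed(0)

# Stand-in "detector": a fixed logistic score over the patch's pixel values.
# This toy keeps only the shape of the optimization loop, not YOLOv2 itself.
N = 16                                      # toy patch: N pixels in [0, 1]
WEIGHTS = [random.uniform(0.5, 1.5) for _ in range(N)]
BIAS = -2.0

def objectness(patch):
    z = BIAS + sum(w * p for w, p in zip(WEIGHTS, patch))
    return 1.0 / (1.0 + math.exp(-z))       # sigmoid score in (0, 1)

def optimize_patch(patch, lr=0.5, steps=200):
    """Gradient descent on the pixel values to minimize the detection score."""
    for _ in range(steps):
        s = objectness(patch)
        # d(score)/d(pixel_i) = s * (1 - s) * w_i for a logistic score
        patch = [min(1.0, max(0.0, p - lr * s * (1.0 - s) * w))
                 for p, w in zip(patch, WEIGHTS)]   # clip to printable range
    return patch

patch = [0.5] * N
before = objectness(patch)
patch = optimize_patch(patch)
after = objectness(patch)
print(f"detection score before: {before:.3f}, after: {after:.3f}")
```

The clipping step mirrors a real constraint on such attacks: the optimized values must stay within what a color printer can actually reproduce.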

Read more

Prosthetics have advanced drastically in recent years. The technology’s potential has even inspired many, like Elon Musk, to ask whether we may be living as “cyborgs” in the not-too-far future. For Johnny Matheny of Port Richey, Florida, that future is now. Matheny, who lost his arm to cancer in 2005, has recently become the first person to live with an advanced mind-controlled robotic arm. He received the arm in December and will be spending the next year testing it out.

The arm was developed by the Johns Hopkins Applied Physics Lab as part of its Revolutionizing Prosthetics program. The aim of the program, which is funded by the Defense Advanced Research Projects Agency (DARPA), is to create prosthetics controlled by neural activity in the brain, restoring motor function so completely that it feels entirely natural. The program specifically targets prosthetics for upper-arm amputees. While this particular arm has been demoed before, Matheny will be the first person to actually live with the prosthesis. The program does hope to have more patients take the tech for a long-term test run, though.

While the prosthetic device is impressive, it’s not a limitless, all-powerful robot arm. Matheny won’t be able to get the arm wet and is not allowed to drive while wearing it. Keeping those few rules in mind, Matheny will otherwise be free to push the tech to the edge of its capabilities, truly exploring what it can do.

Read more

Bioengineers at Boston Children’s Hospital report the first demonstration of a robot able to navigate autonomously inside the body. In an animal model of cardiac valve repair, the team programmed a robotic catheter to find its way along the walls of a beating, blood-filled heart to a leaky valve—without a surgeon’s guidance. They report their work today in Science Robotics.

Surgeons have used robots operated by joysticks for more than a decade, and teams have shown that tiny robots can be steered through the body by external forces such as magnetism. However, senior investigator Pierre Dupont, Ph.D., chief of Pediatric Cardiac Bioengineering at Boston Children’s, says that to his knowledge, this is the first report of the equivalent of a self-driving car navigating to a desired destination inside the body.

Dupont envisions autonomous robots one day assisting surgeons in complex operations, reducing fatigue and freeing surgeons to focus on the most difficult maneuvers, thereby improving outcomes.
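The navigation strategy described is essentially wall-following: the catheter hugs the heart wall until it arrives at the valve. A toy grid-world sketch of the same idea, assuming a hand-made maze and the classic right-hand wall-following rule (purely illustrative; the real catheter senses the wall directly rather than consulting a map):

```python
# The agent keeps a wall on its right and advances until it reaches the
# target cell, loosely analogous to the catheter tracking the heart wall
# toward the leaky valve. 'S' = start, 'T' = target, '#' = wall.
MAZE = [
    "#######",
    "#S....#",
    "#.###.#",
    "#.....#",
    "#..#..#",
    "#....T#",
    "#######",
]

def wall_follow(maze):
    rows = [list(r) for r in maze]
    start = next((i, j) for i, r in enumerate(maze)
                 for j, c in enumerate(r) if c == "S")
    target = next((i, j) for i, r in enumerate(maze)
                  for j, c in enumerate(r) if c == "T")
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N
    pos, d = start, 0                          # start facing east
    path = [pos]
    for _ in range(200):                       # safety bound on steps
        if pos == target:
            return path
        r, c = pos
        right = (d + 1) % 4
        rr, rc = r + dirs[right][0], c + dirs[right][1]
        fr, fc = r + dirs[d][0], c + dirs[d][1]
        if rows[rr][rc] != "#":     # open on the right: hug the wall
            d, pos = right, (rr, rc)
        elif rows[fr][fc] != "#":   # otherwise keep going straight
            pos = (fr, fc)
        else:                       # blocked ahead too: turn left in place
            d = (d - 1) % 4
            continue
        path.append(pos)
    return path

path = wall_follow(MAZE)
print("reached:", path[-1])
```

Wall-following is attractive in this setting for the same reason it works in the toy: it needs only local contact information, not a global view of the chamber.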

Read more

Researchers at Rady Children’s Institute for Genomic Medicine (RCIGM) have utilized a machine-learning process and clinical natural language processing (CNLP) to diagnose rare genetic diseases in record time. This new method is speeding answers to physicians caring for infants in intensive care and opening the door to increased use of genome sequencing as a first-line diagnostic test for babies with cryptic conditions.

“Some people call this artificial intelligence; we call it augmented intelligence,” said Stephen Kingsmore, MD, DSc, President and CEO of RCIGM. “Patient care will always begin and end with the doctor. By harnessing the power of technology, we can quickly and accurately determine the root cause of genetic diseases. We rapidly provide this critical information to physicians so they can focus on personalizing care for babies who are struggling to survive.”

A new study documenting the process was published today in the journal Science Translational Medicine. The workflow and research were led by the RCIGM team in collaboration with leading technology and data-science developers — Alexion, Clinithink, Diploid, Fabric Genomics and Illumina.
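The phenotype-extraction step can be pictured as mapping free-text clinical notes onto standardized phenotype terms, then ranking candidate diagnoses by how well their known phenotype profiles overlap with what was observed. A minimal sketch with a made-up lexicon and disease table (the term IDs and disease profiles are illustrative placeholders, not real HPO entries, and real CNLP goes far beyond substring matching):

```python
# Illustrative phrase -> phenotype-term lexicon (placeholder IDs, not real HPO).
LEXICON = {
    "seizure": "HP:SEIZURE",
    "hypotonia": "HP:HYPOTONIA",
    "jaundice": "HP:JAUNDICE",
    "feeding difficulties": "HP:FEEDING",
}

# Hypothetical disease phenotype profiles, invented for this example.
DISEASES = {
    "disease A": {"HP:SEIZURE", "HP:HYPOTONIA"},
    "disease B": {"HP:JAUNDICE", "HP:FEEDING"},
}

def extract_phenotypes(note):
    """Crude CNLP stand-in: substring-match lexicon phrases in the note."""
    note = note.lower()
    return {term for phrase, term in LEXICON.items() if phrase in note}

def rank_diagnoses(phenotypes):
    """Order candidate diseases by Jaccard overlap with observed phenotypes."""
    def jaccard(profile):
        union = phenotypes | profile
        return len(phenotypes & profile) / len(union) if union else 0.0
    return sorted(DISEASES, key=lambda d: jaccard(DISEASES[d]), reverse=True)

note = "Term infant presenting with recurrent seizures and marked hypotonia."
pheno = extract_phenotypes(note)
ranked = rank_diagnoses(pheno)
print(pheno, "->", ranked[0])
```

In the actual pipeline, a ranking like this is what narrows millions of sequenced variants down to a short list a physician can act on quickly.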

Read more

For years, post-traumatic stress disorder (PTSD) has been one of the most challenging disorders to diagnose. Traditional methods, like one-on-one clinical interviews, can be inaccurate due to the clinician’s subjectivity or because patients hold back their symptoms.

Now, researchers at New York University say they’ve taken the guesswork out of diagnosing PTSD in veterans by using artificial intelligence to objectively detect the disorder from the sound of someone’s voice. Their research, conducted alongside SRI International, the research institute responsible for bringing Siri to the iPhone, was published Monday in the journal Depression and Anxiety.

According to The New York Times, SRI and NYU spent five years developing a voice analysis program that not only understands human speech but can also detect PTSD signifiers and emotions. As the NYT reports, this is the same process that teaches automated customer-service programs how to deal with angry callers: by listening for minor variables and auditory markers that would be imperceptible to the human ear. The researchers say the algorithm can diagnose PTSD with 89% accuracy.
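In spirit, such a system maps a vector of acoustic markers to a probability of a diagnosis. A minimal, purely illustrative sketch, assuming two made-up features and synthetic data (the actual SRI/NYU feature set and model are far richer, and the 89% figure does not come from anything like this toy):

```python
import math
import random

random.seed(1)

def sigmoid(z):
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic stand-ins for two acoustic markers: pitch variability (Hz) and
# speaking rate (syllables/sec). The group separation here is invented
# purely so the toy classifier has something to learn.
def sample(label, n):
    out = []
    for _ in range(n):
        if label == 1:  # "positive" voices: flatter pitch, slower speech
            feats = (random.gauss(15.0, 4.0), random.gauss(3.0, 0.5))
        else:           # controls
            feats = (random.gauss(30.0, 4.0), random.gauss(4.5, 0.5))
        out.append((feats, label))
    return out

data = sample(1, 100) + sample(0, 100)
random.shuffle(data)
train, held_out = data[:150], data[150:]

# Logistic regression fit by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.01
for _ in range(300):
    for (x1, x2), y in train:
        err = sigmoid(w[0] * x1 + w[1] * x2 + b) - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

correct = sum(
    (sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5) == (y == 1)
    for (x1, x2), y in held_out
)
accuracy = correct / len(held_out)
print(f"held-out accuracy: {accuracy:.2f}")
```

The held-out split matters: a reported accuracy like 89% only means something if it is measured on speakers the model never trained on.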

Read more

In an effort to scale up the manufacture of biomaterials, researchers at UC Berkeley have combined bioprinting, a robotic arm, and flash freezing in a method that may one day allow living tissue, and even whole organs, to be printed on demand. By printing cells into 2D sheets and then flash-freezing them once assembled, the new technique improves cell survival during both building and storage.

Read more

The laser sensors currently used to detect 3D objects in the paths of autonomous cars are bulky, ugly, expensive, energy-inefficient – and highly accurate.

These Light Detection and Ranging (LiDAR) sensors are affixed to cars’ roofs, where they increase wind drag, and they can add around $10,000 to a car’s cost. But despite their drawbacks, most experts have considered LiDAR sensors the only plausible way for self-driving cars to safely perceive pedestrians, cars and other hazards on the road.

Now, Cornell researchers have discovered that a simpler method, using two inexpensive cameras on either side of the windshield, can detect objects with nearly LiDAR’s accuracy and at a fraction of the cost. The researchers found that analyzing the captured images from a bird’s-eye view rather than the more traditional frontal view more than tripled their accuracy, making the camera-based approach a viable and low-cost alternative to LiDAR.
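The pipeline can be pictured in two steps: recover depth from the disparity between the left and right camera images, then re-project the resulting 3D points into a top-down (bird's-eye-view) grid before detecting objects. A minimal sketch under an idealized pinhole stereo model, with assumed focal-length and baseline values and hand-made disparities in place of a learned depth estimator:

```python
# Stereo depth from the idealized pinhole model: depth = f * B / disparity.
FOCAL = 700.0    # focal length in pixels (assumed value)
BASELINE = 0.54  # distance between the two cameras in meters (assumed)

def depth_from_disparity(disparity_px):
    return FOCAL * BASELINE / disparity_px

# Bird's-eye-view occupancy grid: 20 m wide x 40 m deep, 1 m cells.
GRID_W, GRID_D = 20, 40

def to_bev(points_cam):
    """Project camera-frame 3D points (x right, y down, z forward) onto a
    top-down grid, discarding height. Detectors then run on this view."""
    grid = [[0] * GRID_W for _ in range(GRID_D)]
    for x, _y, z in points_cam:
        col = int(x + GRID_W / 2)   # center the camera laterally
        row = int(z)
        if 0 <= row < GRID_D and 0 <= col < GRID_W:
            grid[row][col] = 1
    return grid

# Example: a point seen with 35 px of disparity sits about 10.8 m ahead.
d = depth_from_disparity(35.0)
grid = to_bev([(2.0, -0.5, d)])
print(f"depth: {d:.1f} m, occupied: {grid[int(d)][int(2.0 + GRID_W / 2)]}")
```

The bird's-eye representation is the key design choice the result highlights: in a frontal image, distant objects shrink and overlap, while in the top-down grid an object's footprint stays the same size regardless of distance, which makes it much easier for a detector to localize.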

Read more