
Scientists have developed a new robot that can ‘mimic’ the two-handed movements of care workers as they dress an individual.

Until now, assistive dressing robots, designed to help an older person or a person with a disability get dressed, have been created in the laboratory as one-armed machines, but research has shown that a single arm can be uncomfortable for the person in care, or impractical.

To tackle this problem, Dr. Jihong Zhu, a robotics researcher at the University of York’s Institute for Safe Autonomy, proposed a two-armed assistive dressing scheme that had not been attempted in previous research. The approach was inspired by caregivers, whose practice shows that specific two-handed actions are required to reduce discomfort and distress for the individual in their care.

In a collaboration with Houston Methodist Hospital, researchers from the UH Engineering Robotic Swarm Control Laboratory, led by Aaron Becker, assistant professor of electrical and computer engineering, are developing a novel treatment for pulmonary embolism (PE) using millimeter-scale, corkscrew-shaped robots controlled by a magnetic field. PE is the third most common cardiovascular disease, resulting in up to 300,000 deaths annually.

“Using non-invasive miniature magnetic agents could improve patient comfort, reduce the risk of infection and ultimately decrease the cost of medical treatments,” according to Julien Leclerc, a Cullen College research associate specializing in applied electromagnetics. “My goal is to quickly bring this technology into the clinical realm and allow patients to benefit from this treatment method as soon as possible.”

H/T Stephen Wolfram.

Particularly given its recent surprise successes, there’s a somewhat widespread belief that eventually AI will be able to “do everything”, or at least everything we currently do.


Stephen Wolfram explores the potential and the limitations of AI in science, examining cases where AI is a useful tool and others where it is less well suited.

For decades, scientists and pathologists have tried, without much success, to come up with a way to determine which individual lung cancer patients are at greatest risk of having their illness spread, or metastasize, to other parts of the body.

Now a team of scientists from Caltech and the Washington University School of Medicine in St. Louis has fed that problem to artificial intelligence (AI) algorithms, asking computers to predict which cancer cases are likely to metastasize. In a novel study of non-small cell lung cancer (NSCLC) patients, AI outperformed expert pathologists in making such predictions.

These predictions about the progression of lung cancer have important implications in terms of an individual patient’s life. Physicians treating early-stage NSCLC patients face the extremely difficult decision of whether to intervene with expensive, toxic treatments, such as chemotherapy or radiation, after a patient undergoes lung surgery. In some ways, this is the more cautious path because more than half of stage I–III NSCLC patients eventually experience metastasis to the brain. But that means many others do not. For those patients, such difficult treatments are wholly unnecessary.

With a mobile app powered by artificial intelligence (AI), Caitlin Hicks, MD, MS, reviews selfies of patients’ feet in real time to track their wounds as part of a clinical trial. The app saves time for Hicks, a vascular surgeon at Johns Hopkins Medicine, but also reduces clinic trips for her patients with diabetes in inner-city Baltimore, many of whom are elderly and less mobile or have other socioeconomic barriers to care. Hicks knows that for these patients, wound vigilance is the linchpin to preventing infection, hospitalization, or, worse, amputation or even death.

Despite their crushing toll, diabetic foot infections remain stubbornly hard to treat, but multidisciplinary care teams, new drugs and devices on the horizon, and practical solutions to socioeconomic factors could move the needle.

Summary: A recent study showcases a significant leap in the study of brain oscillations, particularly ripples, which are crucial for memory organization and are affected in disorders like epilepsy and Alzheimer’s. Researchers have developed a toolbox of AI models trained on rodent EEG data to automate and enhance the detection of these oscillations, proving their efficacy on data from non-human primates.

This breakthrough, stemming from a collaborative hackathon, offers over a hundred optimized machine learning models, including support vector machines and convolutional neural networks, freely available to the scientific community. This development opens new avenues in neurotechnology applications, especially in diagnosing and understanding neurological disorders.
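The toolbox itself is not reproduced here, but the kind of detection those models automate can be illustrated with the classic non-learned baseline they aim to improve on: thresholding the amplitude envelope of the recorded signal. Everything in this sketch (function names, the toy trace, the window size and threshold) is invented for illustration and is not from the study.

```python
# Illustrative baseline ripple detector: flag spans where the RMS envelope
# of a 1-D signal exceeds a fixed threshold. Pure Python, no dependencies.

def moving_rms(signal, window):
    """Root-mean-square envelope over a centered sliding window."""
    half = window // 2
    env = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half): i + half + 1]
        env.append((sum(x * x for x in seg) / len(seg)) ** 0.5)
    return env

def detect_ripples(signal, threshold, window=5):
    """Return (start, end) index pairs where the envelope exceeds threshold."""
    env = moving_rms(signal, window)
    events, start = [], None
    for i, v in enumerate(env):
        if v >= threshold and start is None:
            start = i                      # event onset
        elif v < threshold and start is not None:
            events.append((start, i))      # event offset
            start = None
    if start is not None:
        events.append((start, len(env)))   # event runs to end of trace
    return events

# Toy trace: quiet baseline with one burst of high-amplitude oscillation.
trace = [0.1] * 20 + [2.0, -2.0] * 10 + [0.1] * 20
print(detect_ripples(trace, threshold=1.0))
```

A fixed threshold like this is exactly what trained classifiers (SVMs, CNNs) replace: they can learn ripple shape rather than relying on a hand-picked amplitude cutoff.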

Something to look forward to: AMD’s FSR image upscaling technology has avoided using AI until now, which has been a double-edged sword in its competition against Nvidia’s DLSS and Intel’s XeSS. A recent interview with AMD’s CTO indicates that the company plans to pivot sharply toward AI in 2024, with gaming upscaling as one area of focus.

AMD has confirmed that it’s developing a method to play games with AI-based image upscaling. Although further details are scarce, this could potentially bring the company’s solution closer to Nvidia’s. In an interview on the No Priors podcast, CTO Mark Papermaster explained how AMD has deployed AI acceleration throughout its product stack and plans to introduce new applications to utilize it this year. “We are enabling gaming devices to upscale using AI and 2024 is a really huge deployment year,” he said.

Nvidia DLSS, Intel XeSS, and AMD FSR allow gamers to increase the resolution at which they play while minimizing the performance impact. However, while DLSS and XeSS utilize hardware-assisted AI, FSR relies only on spatial and temporal information.
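To make that distinction concrete, a purely spatial upscaler computes each output pixel from the current frame alone, with no trained network involved. The sketch below shows the simplest such method, nearest-neighbor upscaling; it illustrates the general idea only and is not AMD's FSR algorithm, which uses far more sophisticated spatial and temporal filtering.

```python
# Minimal sketch of purely spatial upscaling: nearest-neighbor.
# Each low-res pixel is simply repeated `factor` times in both dimensions.

def upscale_nearest(image, factor):
    """Upscale a 2-D grid of pixel values by an integer factor."""
    out = []
    for row in image:
        # Repeat each pixel horizontally...
        stretched = [p for p in row for _ in range(factor)]
        # ...then repeat the stretched row vertically (copy to avoid aliasing).
        for _ in range(factor):
            out.append(list(stretched))
    return out

low_res = [[0, 1],
           [2, 3]]
print(upscale_nearest(low_res, 2))
```

AI-based upscalers such as DLSS instead run a trained network over the low-resolution frame (plus motion data), which is why they benefit from dedicated inference hardware.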