The industry turned out in force for this year’s list, highlighting devices in medical AI, surgical robotics, wearables, and femtech, among others.

US-based company Tombot has unveiled Jennie, a realistic robotic Labrador puppy created in collaboration with the legendary Jim Henson’s Creature Shop. This battery-powered Lab reacts to human touch, wags its tail, and even barks on command.
This robotic companion is designed to bring joy and comfort to those who need it most: people living with dementia, stress, anxiety, post-traumatic stress disorder (PTSD), and depression.
Jennie’s features include real puppy sounds, interactive sensors, voice commands, a rechargeable battery, and ongoing software updates, and she can be controlled through a smartphone app.
MIT’s AI simulator creates realistic data, helping robots master tasks virtually.
Researchers developed an AI-powered simulator that creates realistic training data, enabling robots to master real-world tasks virtually.
Originally published on Towards AI.
When it comes to artificial intelligence (AI), opinions run the gamut. Some see AI as a miraculous tool that could revolutionize every aspect of our lives, while others fear it as a force that could upend society and replace human ingenuity. Among these diverse perspectives lies a growing fascination with the cognitive abilities of AI: Can machines truly “understand” us? Recent research suggests that advanced language models like ChatGPT-4 may be more socially perceptive than we imagined.
A recent study published in Proceedings of the National Academy of Sciences (PNAS) reveals that advanced language models can now match a six-year-old child’s performance in theory of mind (ToM) tasks, challenging our assumptions about machine intelligence.
As our collective nervousness over AI grows each day, “The Wild Robot” emerges from the woods with a completely different take on a man-made being with the ability to learn.
“I love the messaging of the story, the idea that kindness is a survival tactic,” says star Lupita Nyong’o. “It’s just so pure and sweet and needed.”
In the DreamWorks animated feature, a domestic helper robot, ROZZUM unit 7134 (Nyong’o), is lost on a wooded island and activated without human guidance. As the ingeniously designed “Roz” searches for a mission in a vernal bower that looks designed by an Impressionist painter, she learns to communicate with the animal residents and finds purpose in raising an orphaned gosling, Brightbill.
Summary: AI models trained on MRI data can now distinguish brain tumors from healthy tissue with high accuracy, nearing human performance. Using convolutional neural networks and transfer learning from tasks like camouflage detection, researchers improved the models’ ability to recognize tumors.
This study emphasizes explainability, enabling AI to highlight the areas it identifies as cancerous, fostering trust among radiologists and patients. While slightly less accurate than human detection, this method demonstrates promise for AI as a transparent tool in clinical radiology.
Summary: The Human Cell Atlas (HCA) consortium has published over 40 studies revealing groundbreaking insights into human biology through large-scale mapping of cells. These studies cover diverse areas such as brain development, gut inflammation, and COVID-19 lung responses, while also showcasing the power of AI in understanding cellular mechanisms.
By profiling over 100 million cells from 10,000 individuals, HCA is building a “Google Maps” for cell biology to transform diagnostics, drug discovery, and regenerative medicine. The initiative emphasizes diversity, including underrepresented populations, to ensure a globally inclusive understanding of health and disease.
In 2024, the Kavli Institute for Brain and Mind reached its 20th anniversary. To celebrate this milestone, we hosted a special symposium, “The Generative Mind: Biological and Artificial Intelligence,” on Monday, October 28, 2024 at the Salk Institute. Please enjoy the presentation.
AlphaQubit: an AI-based system that can more accurately identify errors inside quantum computers.
AlphaQubit is a neural-network-based decoder drawing on Transformers, a deep learning architecture developed at Google that underpins many of today’s large language models. Using the consistency checks as an input, its task is to correctly predict whether the logical qubit — when measured at the end of the experiment — has flipped from how it was prepared.
We began by training our model to decode the data from a set of 49 qubits inside a Sycamore quantum processor, the central computational unit of the quantum computer. To teach AlphaQubit the general decoding problem, we used a quantum simulator to generate hundreds of millions of examples across a variety of settings and error levels. Then we finetuned AlphaQubit for a specific decoding task by giving it thousands of experimental samples from a particular Sycamore processor.
When tested on new Sycamore data, AlphaQubit set a new standard for accuracy when compared with the previous leading decoders. In the largest Sycamore experiments, AlphaQubit makes 6% fewer errors than tensor network methods, which are highly accurate but impractically slow. AlphaQubit also makes 30% fewer errors than correlated matching, an accurate decoder that is fast enough to scale.
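The decoding problem AlphaQubit tackles can be illustrated with a far simpler scheme. The sketch below is not AlphaQubit’s transformer architecture; it is a minimal, illustrative lookup-table decoder for a three-qubit repetition code, where parity checks between neighbouring qubits play the role of the “consistency checks” and the decoder predicts which single-qubit error occurred so the logical bit can be recovered.

```python
# Toy syndrome decoding for a 3-qubit repetition code (illustration only;
# AlphaQubit replaces the lookup table with a learned transformer decoder).

def encode(logical_bit):
    """Encode one logical bit as three physical bits."""
    return [logical_bit] * 3

def syndrome(bits):
    """Parity ("consistency") checks between neighbouring qubits."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each syndrome maps to the most likely single-qubit flip
# (index of the bit to correct; None means no error detected).
DECODER = {
    (0, 0): None,
    (1, 0): 0,
    (1, 1): 1,
    (0, 1): 2,
}

def decode(bits):
    """Correct the most likely single-bit error, then read out the logical bit."""
    flip = DECODER[syndrome(bits)]
    corrected = list(bits)
    if flip is not None:
        corrected[flip] ^= 1
    # Majority vote gives the logical readout.
    return int(sum(corrected) >= 2)

# A single physical-qubit error is corrected and the logical bit survives.
noisy = encode(1)
noisy[2] ^= 1  # flip one physical qubit
assert decode(noisy) == 1
```

In a real device the syndrome is measured repeatedly and errors are correlated in complicated ways, which is why learned decoders like AlphaQubit can outperform hand-built rules such as this table.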