Decentralized yet coordinated networks of specialized artificial intelligence agents, known as multi-agent systems for healthcare (MASH), are likely to become the next paradigm in medical artificial intelligence; these systems excel at performing assistive or autonomous tasks within specific clinical and operational domains.
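To make the coordination pattern concrete, here is a minimal sketch in Python: a router dispatching tasks to registered domain specialists. Every name below (Coordinator, RadiologyAgent, and so on) is a hypothetical illustration of the MASH idea, not part of any published system.

    from dataclasses import dataclass

    @dataclass
    class Task:
        domain: str    # e.g. "radiology" or "scheduling"
        payload: str   # task description

    class RadiologyAgent:
        def handle(self, task):
            return f"[radiology] reviewed: {task.payload}"

    class SchedulingAgent:
        def handle(self, task):
            return f"[scheduling] booked: {task.payload}"

    class Coordinator:
        """Routes each task to the registered domain specialist."""
        def __init__(self):
            self.registry = {}

        def register(self, domain, agent):
            self.registry[domain] = agent

        def route(self, task):
            agent = self.registry.get(task.domain)
            if agent is None:
                return f"no specialist for '{task.domain}'; escalate to a human"
            return agent.handle(task)

    mash = Coordinator()
    mash.register("radiology", RadiologyAgent())
    mash.register("scheduling", SchedulingAgent())
    print(mash.route(Task("radiology", "chest X-ray, suspected pneumonia")))

The design point is the "decentralized yet coordinated" split: each agent stays narrow and specialized, while a thin routing layer keeps the network coherent.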
The new DGX machines are portable yet powerful enough to drive complex AI models and research workloads, with processing capabilities previously available only in data centers.
Join aerospace engineer Mike DiVerde as he explores the fascinating world of robotic spacecraft through NASA’s remarkable Chandra X-ray Observatory. Discover…
A trio of AI researchers at Google DeepMind, working with a colleague from the University of Toronto, report that the AI algorithm Dreamer can learn to improve itself by mastering Minecraft in a short amount of time. In their study, published in the journal Nature, Danijar Hafner, Jurgis Pasukonis, Timothy Lillicrap and Jimmy Ba set the system to play Minecraft without any pretraining, and it reached an expert level in just nine days.
Over the past several years, computer scientists have learned a great deal about how deep learning can be used to train AI applications to carry out seemingly intelligent activities such as answering questions. Researchers have also found that AI applications can be trained to play games and perform better than humans. That research has extended into video games, which may seem redundant: what could you gain from one computer playing against another?
In this new study, the researchers found that such game play can produce real advances, such as helping an AI application learn to improve its own abilities over a short period of time, which could give robots the tools they need to perform well in the real world.
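For readers curious how an agent can master a game "without being trained" on expert data, here is a toy sketch of the world-model idea behind Dreamer-style agents: learn a model of the environment from a little real experience, then improve the value estimates cheaply on many imagined rollouts. This is an illustration of the concept only, not DeepMind's Dreamer implementation; the tiny chain environment and all names are invented.

    import random

    class WorldModel:
        """Tabular stand-in for Dreamer's learned dynamics model."""
        def __init__(self):
            self.transitions = {}  # (state, action) -> (next_state, reward)

        def observe(self, s, a, s_next, r):
            self.transitions[(s, a)] = (s_next, r)

        def imagine(self, s, a):
            # Unseen pairs: assume staying put with zero reward.
            return self.transitions.get((s, a), (s, 0.0))

    def env_step(s, a):
        """Toy 5-state chain; reward for reaching the far end."""
        s_next = max(0, min(4, s + a))
        return s_next, (1.0 if s_next == 4 else 0.0)

    model, q = WorldModel(), {}
    for episode in range(50):
        s = 0
        for _ in range(10):                  # a little real experience
            a = random.choice([-1, 1])
            s_next, r = env_step(s, a)
            model.observe(s, a, s_next, r)
            s = s_next
        for _ in range(100):                 # many cheap imagined updates
            s_i, a_i = random.randint(0, 4), random.choice([-1, 1])
            s_n, r_i = model.imagine(s_i, a_i)
            best_next = max(q.get((s_n, b), 0.0) for b in (-1, 1))
            q[(s_i, a_i)] = r_i + 0.9 * best_next

    print("prefers moving right:", q.get((3, 1), 0) > q.get((3, -1), 0))

The efficiency claim in the article maps onto the loop structure: real environment steps are scarce and expensive, while updates inside the learned model are plentiful and cheap.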
A Global 5G Community, fostering an ecosystem around the development of 5G applications.
Nobel laureate and Cambridge University alumnus Sir Demis Hassabis heralds a new era of AI drug discovery at ‘digital speed’
Scientists Just Merged Human Brain Cells With AI – Here’s What Happened!
What happens when human brain cells merge with artificial intelligence? Scientists have just achieved something straight out of science fiction—combining living neurons with AI to create a hybrid intelligence system. The results are mind-blowing, and they could redefine the future of computing. But how does it work, and what does this mean for humanity?
In a groundbreaking experiment, researchers successfully integrated human brain cells with AI, creating a system that learns faster and more efficiently than traditional silicon-based computers. These “biocomputers” use lab-grown brain organoids to process information, mimicking human thought patterns while leveraging AI’s speed and scalability. The implications? Smarter, more adaptive machines that think like us.
Why is this such a big deal? Unlike conventional AI, which relies on brute-force data crunching, this hybrid system operates more like a biological brain—learning with less energy, recognizing patterns intuitively, and even showing early signs of creativity. Potential applications include ultra-fast medical diagnostics, self-improving robots, and brain-controlled prosthetics that feel truly natural.
But with great power come big questions. Could this lead to conscious machines? Will AI eventually surpass human intelligence? And what are the ethical risks of blending biology with technology? This video breaks down the science, the possibilities, and the controversies—watch to the end for the full story.
How did scientists merge brain cells with AI? What are biocomputers? Can AI become human-like? What is hybrid intelligence? Will AI replace human brains? This video will answer all these questions. Make sure you watch all the way through so you don't miss anything.
How does a robotic arm or a prosthetic hand learn a complex task like grasping and rotating a ball? The challenge for any hand, human, prosthetic or robotic, has always been learning to control the fingers to exert the right forces on an object.
The sensitive skin and nerve endings that cover our hands have been credited with helping us learn and adapt our manipulation, so roboticists have insisted on incorporating sensors into robotic hands. But given that you can still learn to handle objects with gloves on, there must be something else at play.
This mystery is what inspired researchers in the ValeroLab at the Viterbi School of Engineering to explore whether tactile sensation is really necessary for learning to control the fingers.
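One way to frame that question computationally: can a controller learn good fingertip forces from the task outcome alone, with no per-finger touch signal? The hill-climbing sketch below does exactly that, adjusting three force commands using only a scalar success score. It is a hypothetical toy stand-in, not the ValeroLab model or code.

    import random

    TARGET = [2.0, 1.5, 0.5]   # "ideal" grasp force per finger, hidden from the learner

    def task_score(forces):
        """Outcome-level feedback only: how close the grasp is to succeeding."""
        return -sum((f - t) ** 2 for f, t in zip(forces, TARGET))

    forces = [1.0, 1.0, 1.0]   # initial fingertip force commands
    best = task_score(forces)
    for trial in range(2000):
        candidate = [f + random.gauss(0, 0.05) for f in forces]
        score = task_score(candidate)
        if score > best:        # keep any change that improves the outcome
            forces, best = candidate, score

    print("learned forces:", [round(f, 2) for f in forces])

Nothing in the loop tells the learner what any individual finger feels; it only sees whether the manipulation as a whole got better, which is the gloves-on intuition in miniature.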
AI has created a sea change in society; now, it is setting its sights on the sea itself. Researchers at Osaka Metropolitan University have developed a machine learning-powered fluid simulation model that significantly reduces computation time without compromising accuracy.
Their fast and precise technique opens up potential applications in offshore power generation, ship design and real-time ocean monitoring. The study was published in Applied Ocean Research.
Accurately predicting fluid behavior is crucial for industries relying on wave and tidal energy, as well as for the design of maritime structures and vessels.
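The general pattern behind this kind of speedup is a surrogate model: train a cheap learned predictor on data generated by an expensive solver, then query the predictor at run time. The sketch below illustrates the idea with a trivial polynomial fit in place of a neural network; it is an assumption-laden toy, not the Osaka Metropolitan University model.

    import numpy as np

    def expensive_solver(x):
        """Stand-in for a slow fluid simulation: wave height vs. wind speed."""
        return 0.3 * x ** 1.5

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 10, 200)
    y_train = expensive_solver(x_train)

    # Fit a small surrogate offline...
    coeffs = np.polyfit(x_train, y_train, deg=3)

    # ...then query it instead of the solver at run time.
    x_new = np.array([2.0, 5.0, 8.0])
    print("surrogate:", np.polyval(coeffs, x_new).round(2))
    print("solver   :", expensive_solver(x_new).round(2))

The trade that matters for applications like real-time ocean monitoring is paying the simulation cost once, during training, so that each later prediction is nearly free.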