In February 2024, Reddit struck a $60 million deal with Google to let the search giant use data on the platform to train its artificial intelligence models. Notably absent from the discussions were Reddit users, whose data were being sold.

The deal reflected the reality of the modern internet: Big tech companies own virtually all our online data and get to decide what to do with that data. Unsurprisingly, many platforms monetize their data, and the fastest-growing way to accomplish that today is to sell it to AI companies, who are themselves massive tech companies using the data to train ever more powerful models.

The decentralized platform Vana, which started as a class project at MIT, is on a mission to give power back to the users. The company has created a fully user-owned network that allows individuals to upload their data and govern how they are used. AI developers can pitch users on ideas for new models, and if the users agree to contribute their data for training, they get proportional ownership in the models.
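To make the idea of proportional ownership concrete, here is a minimal Python sketch of one way shares in a trained model could be allocated in proportion to the data each user contributes. The function name and the choice to weight purely by record count are illustrative assumptions, not a description of Vana's actual mechanism.

```python
def allocate_ownership(contributions: dict[str, int]) -> dict[str, float]:
    """Hypothetical illustration: split ownership of a trained model
    among contributors in proportion to the records each supplied.

    `contributions` maps a user ID to the number of records contributed.
    Returns a map of user ID to fractional ownership (sums to 1.0).
    """
    total = sum(contributions.values())
    if total == 0:
        return {user: 0.0 for user in contributions}
    return {user: count / total for user, count in contributions.items()}


# Example: three users pool their data for one model.
shares = allocate_ownership({"alice": 500, "bob": 300, "carol": 200})
print(shares)  # {'alice': 0.5, 'bob': 0.3, 'carol': 0.2}
```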

Decentralized yet coordinated networks of specialized artificial intelligence agents, known as multi-agent systems for healthcare (MASH), are likely to become the next paradigm in medical artificial intelligence. These systems excel at performing tasks in an assistive or autonomous manner within specific clinical and operational domains.

A trio of AI researchers at Google DeepMind, working with a colleague from the University of Toronto, report that the AI algorithm Dreamer can learn to improve its own abilities, mastering Minecraft in a short amount of time. In their study, published in the journal Nature, Danijar Hafner, Jurgis Pasukonis, Timothy Lillicrap and Jimmy Ba set the AI up to play Minecraft without any prior training, and it reached an expert level in just nine days.

Over the past several years, computer scientists have learned a lot about how to train AI applications to conduct seemingly intelligent activities such as answering questions. Researchers have also found that AI apps can be trained to play games and perform better than humans. That research has extended into having AI systems play against other AI systems, which may seem redundant, because what could you get from a computer playing another computer?

In this new study, the researchers found that this approach can produce advances, such as helping an AI app learn to improve its abilities over a short period of time, which could give robots the tools they need to perform well in the real world.
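Dreamer itself is a model-based reinforcement-learning agent: it learns a model of its environment from experience and then improves its behavior by practicing inside that learned model. As a rough, hedged illustration of that general idea only, the toy Python sketch below learns a tabular model of a tiny corridor world and then plans "in imagination"; the environment, the tabular model, and every name here are invented for illustration and bear no resemblance to Dreamer's actual neural-network world model.

```python
import random

random.seed(0)

# Toy corridor world: states 0..5, actions -1 (step left) / +1 (step right),
# reward 1.0 for reaching the goal state 5, at which point the episode ends.
N_STATES, GOAL, ACTIONS = 6, 5, (-1, +1)

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# 1) Gather real experience with random exploration and fit a tabular
#    "world model": (state, action) -> (next_state, reward).
model = {}
for _ in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)
        s2, r, done = step(s, a)
        model[(s, a)] = (s2, r)
        s = s2

# 2) Improve a value estimate purely "in imagination": repeatedly sweep
#    the learned model instead of touching the real environment again.
V = [0.0] * N_STATES
for _ in range(50):
    for (s, a), (s2, r) in model.items():
        V[s] = max(V[s], r + 0.9 * V[s2])

# 3) Act greedily with respect to the values planned inside the model.
def greedy_action(s):
    known = [a for a in ACTIONS if (s, a) in model]
    return max(known, key=lambda a: model[(s, a)][1] + 0.9 * V[model[(s, a)][0]])

print([greedy_action(s) for s in range(GOAL)])  # expected: [1, 1, 1, 1, 1]
```

The point of the sketch is the division of labor: real interaction is used only to fit the model, while the improvement of the behavior happens against the learned model, which is what lets such agents get better quickly with relatively little real experience.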

Scientists Just Merged Human Brain Cells With AI – Here’s What Happened!
What happens when human brain cells merge with artificial intelligence? Scientists have just achieved something straight out of science fiction—combining living neurons with AI to create a hybrid intelligence system. The results are mind-blowing, and they could redefine the future of computing. But how does it work, and what does this mean for humanity?

In a groundbreaking experiment, researchers successfully integrated human brain cells with AI, creating a system that learns faster and more efficiently than traditional silicon-based computers. These “biocomputers” use lab-grown brain organoids to process information, mimicking human thought patterns while leveraging AI’s speed and scalability. The implications? Smarter, more adaptive machines that think like us.

Why is this such a big deal? Unlike conventional AI, which relies on brute-force data crunching, this hybrid system operates more like a biological brain—learning with less energy, recognizing patterns intuitively, and even showing early signs of creativity. Potential applications include ultra-fast medical diagnostics, self-improving robots, and brain-controlled prosthetics that feel truly natural.

But with great power comes big questions. Could this lead to conscious machines? Will AI eventually surpass human intelligence? And what are the ethical risks of blending biology with technology? This video breaks down the science, the possibilities, and the controversies—watch to the end for the full story.

How did scientists merge brain cells with AI? What are biocomputers? Can AI become human-like? What is hybrid intelligence? Will AI replace human brains? This video will answer all these questions. Make sure you watch all the way through so you don't miss anything.


How does a robotic arm or a prosthetic hand learn a complex task like grasping and rotating a ball? The challenge for the human, prosthetic or robotic hand has always been learning to correctly control the fingers so they exert the right forces on an object.

The skin and nerve endings that cover our hands have been credited with helping us learn and adapt our manipulation of objects, so roboticists have insisted on incorporating tactile sensors into robotic hands. But given that you can still learn to handle objects with gloves on, there must be something else at play.

This mystery is what inspired researchers in the ValeroLab at the Viterbi School of Engineering to explore whether tactile sensation is really always necessary for learning to control the fingers.
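To illustrate, in the simplest possible terms, what learning finger control without tactile sensation can look like, here is a hedged Python sketch of a controller that tunes three fingertip forces by trial and error, guided only by an overall task score rather than any touch signal. The simulated scoring function, the hidden target grasp, and the hill-climbing search are invented stand-ins for illustration, not the ValeroLab's actual models or methods.

```python
import random

random.seed(1)

# Invented stand-in for a physics simulator: score how well three fingertip
# forces (in newtons) hold a ball. The "ideal" grasp here is an even 2 N per
# finger; the controller never sees this target, only the resulting score.
IDEAL = (2.0, 2.0, 2.0)

def grasp_score(forces):
    # Higher is better; penalize deviation from the (hidden) stable grasp.
    return -sum((f - t) ** 2 for f, t in zip(forces, IDEAL))

# Trial-and-error ("babbling") search: perturb the motor command and keep the
# change only if the task outcome improves. No tactile signal is ever used.
forces = [0.5, 0.5, 0.5]             # initial weak grasp
best = grasp_score(forces)
for trial in range(500):
    candidate = [max(0.0, f + random.gauss(0, 0.2)) for f in forces]
    score = grasp_score(candidate)
    if score > best:                  # outcome-only feedback
        forces, best = candidate, score

print([round(f, 2) for f in forces], round(best, 3))
# Converges toward roughly 2 N per finger using only the outcome score,
# with no touch sensing anywhere in the loop.
```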