
Feeling is believing: Bionic hand ‘knows’ what it’s touching, grasps like a human

Johns Hopkins University engineers have developed a pioneering prosthetic hand that can grip plush toys, water bottles, and other everyday objects like a human, carefully conforming and adjusting its grasp to avoid damaging or mishandling whatever it holds.

The system’s hybrid design is a first for robotic hands, which have typically been too rigid or too soft to replicate a human’s touch when handling objects of varying textures and materials. The innovation offers a promising solution for people with hand loss and could improve how robotic arms interact with their environment.

Details about the device appear in Science Advances.

New AI defense method shields models from adversarial attacks

Neural networks, a type of artificial intelligence modeled on the connectivity of the human brain, are driving critical breakthroughs across a wide range of scientific domains. But these models face a significant threat from adversarial attacks, which can derail predictions and produce incorrect information.

Los Alamos National Laboratory researchers have now pioneered a novel purification strategy that counteracts adversarial assaults and preserves the robust performance of neural network models. Their research is published on the arXiv preprint server.

“Adversarial attacks to AI systems can take the form of tiny, near-invisible tweaks to input images, subtle modifications that can steer the model toward the outcome an attacker wants,” said Manish Bhattarai, Los Alamos computer scientist. “Such vulnerabilities allow malicious actors to flood digital channels with deceptive or harmful content under the guise of genuine outputs, posing a direct threat to trust and reliability in AI-driven technologies.”
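For readers unfamiliar with how such attacks work, below is a minimal, textbook-style sketch of one classic perturbation method (the fast gradient sign method) in PyTorch. It illustrates the kind of near-invisible tweak Bhattarai describes; it is not the Los Alamos purification defense, and the model, inputs, and epsilon value are placeholders.

```python
# Minimal FGSM-style adversarial perturbation in PyTorch. A generic, textbook
# illustration of a "tiny, near-invisible tweak" to an input image; it is not
# the Los Alamos purification method.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    model   : a classifier returning raw logits
    image   : input tensor of shape (1, C, H, W), values in [0, 1]
    label   : true class index, shape (1,)
    epsilon : maximum per-pixel perturbation (small => near-invisible)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel a tiny amount in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```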

AI chatbots struggle with empathy: Overempathizing and gender bias uncovered

You can talk to an AI chatbot about pretty much anything, from help with daily tasks to the problems you need to solve. Its answers reflect the human data that taught it how to act like a person. But how human-like are the latest chatbots, really?

As people turn to AI chatbots for more of their internet needs, and as the bots are incorporated into more applications, from shopping to health care, a team of researchers sought to understand how AI bots replicate human empathy, the ability to understand and share another person’s feelings.

A study, posted to the arXiv preprint server and led by UC Santa Cruz Professor of Computational Media Magy Seif El-Nasr and Stanford University Researcher and UCSC Visiting Scholar Mahnaz Roshanaei, explores how GPT-4o, the latest model from OpenAI, evaluates and performs empathy. In investigating the main differences between humans and AI, they find that major gaps exist.
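As a rough illustration of how such a study might probe a model, here is a hypothetical sketch that asks GPT-4o to rate empathy for a short personal story via the OpenAI Python SDK. The prompt wording and the 1–5 scale are assumptions for illustration, not the authors’ actual protocol.

```python
# Hypothetical empathy probe: ask GPT-4o to rate a short personal story.
# The prompt and rating scale are illustrative assumptions, not the study's
# protocol; requires the `openai` package and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()

def rate_empathy(story: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Rate how much empathy you feel for the narrator "
                        "on a 1-5 scale, then briefly explain why."},
            {"role": "user", "content": story},
        ],
    )
    return response.choices[0].message.content

print(rate_empathy("I finally got the job, but my best friend moved away the same week."))
```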

When outplayed, AI models resort to cheating to win chess matches

A team of AI researchers at Palisade Research has found that several leading AI models will resort to cheating at chess to win when playing against a superior opponent. They have published a paper on the arXiv preprint server describing experiments they conducted with several well-known AI models playing against an open-source chess engine.

As AI models continue to mature, researchers and users have begun weighing the risks. For example, chatbots not only accept wrong answers as fact but also fabricate false responses when they cannot find a reasonable reply. And as AI models are put to use in real-world business applications, such as filtering resumes and estimating stock trends, users have begun to wonder what sorts of actions they will take when they become uncertain or confused.

In this new study, the team in California found that many of the most recognized AI models will intentionally cheat to give themselves an advantage if they determine they are not winning.
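To make the setup concrete, here is a minimal sketch of the kind of harness such an experiment requires: a language model proposes moves against an open-source engine, and every proposal is checked for legality so rule-breaking attempts can be logged. The ask_llm_for_move placeholder, the Stockfish path, and the engine time limit are assumptions; this is not Palisade Research’s code.

```python
# Minimal harness: an LLM (playing White) proposes UCI moves against an
# open-source engine, and every proposal is legality-checked so attempted
# rule-breaking can be flagged. Placeholder model hook and engine path.
import chess
import chess.engine

def ask_llm_for_move(fen: str) -> str:
    """Placeholder: query the model under test for a UCI move given a FEN."""
    raise NotImplementedError

def play_one_game(stockfish_path: str = "stockfish") -> None:
    board = chess.Board()
    with chess.engine.SimpleEngine.popen_uci(stockfish_path) as engine:
        while not board.is_game_over():
            if board.turn == chess.WHITE:            # the LLM plays White
                proposal = ask_llm_for_move(board.fen())
                try:
                    move = chess.Move.from_uci(proposal)
                except ValueError:
                    print(f"Malformed move proposed: {proposal}")
                    break
                if move not in board.legal_moves:     # flag rule-breaking
                    print(f"Illegal move proposed: {proposal}")
                    break
                board.push(move)
            else:                                     # the engine plays Black
                result = engine.play(board, chess.engine.Limit(time=0.1))
                board.push(result.move)
    print(board.result())
```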

Ray Kurzweil: Will Humans Have to Merge with AI to Survive?

What if the only way to survive the AI revolution is to stop being human?
Ray Kurzweil, one of the most influential futurists and the godfather of AI, predicts that humans will soon reach a turning point where merging with AI becomes essential for survival. But what does this truly mean? Will we evolve into superintelligent beings, or will we lose what makes us human?
In this video, we explore Kurzweil’s bold predictions, the concept of the Singularity, and the reality of AI-human integration. From Neuralink to the idea of becoming “human cyborgs,” we examine whether merging with AI is an inevitable step in human evolution—or a path toward losing our biological identity.
Are we truly ready for a world where there are no biological limitations?
Chapters:
Intro 00:00 — 01:11
Ray Kurzweil’s Predictions 01:11 — 02:23
Singularity Is Nearer 02:23 — 04:05
What Does “Merging with AI” Really Mean? 04:05 — 04:35
Neuralink 04:35 — 07:02
Why Would We Need to Merge with AI? 07:02 — 10:04
Human Life After Merging with AI 10:04 — 12:30
Idea of Becoming ‘Human Cyborg’ 12:30 — 14:33
No Biological Limitations 14:33 — 17:24

Robot Uses Mirror Reflection to Perfect Movements and Self-Repair

Humans naturally perceive their bodies and anticipate the outcomes of their movements, a trait robotics experts aim to replicate in machines for greater adaptability and efficiency.

Now, researchers have developed an autonomous robotic arm capable of learning its physical form and movement by observing itself through a camera. This approach is akin to a robot learning to dance by watching its reflection.

Columbia Engineering researchers claim this technique enables robots to adapt to damage and acquire new skills autonomously.
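As a toy illustration of the self-modeling idea, the sketch below trains a small network to predict where the arm’s end effector appears in the camera image from commanded joint angles. The network size, data source, and training step are illustrative assumptions, not the Columbia Engineering architecture.

```python
# Toy visual self-model: predict the end effector's (x, y) pixel position in
# the camera image from commanded joint angles. Network shape and training
# data are illustrative assumptions, not the Columbia system.
import torch
import torch.nn as nn

class VisualSelfModel(nn.Module):
    def __init__(self, num_joints: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),          # predicted (x, y) pixel position
        )

    def forward(self, joint_angles: torch.Tensor) -> torch.Tensor:
        return self.net(joint_angles)

def train_step(model, optimizer, joint_angles, observed_xy):
    """One supervised step: commanded joint angles vs. where the camera
    actually observed the end effector (e.g. via a simple marker tracker)."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(joint_angles), observed_xy)
    loss.backward()
    optimizer.step()
    return loss.item()
```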

World’s first ‘body in a box’ biological computer uses human brain cells with silicon-based computing

A notable aspect of the CL1 is its ability to learn and adapt to tasks. Previous research has demonstrated that neuron-based systems can be trained to perform basic functions, such as playing simple video games. Cortical Labs’ work suggests that integrating biological elements into computing could improve efficiency in tasks that traditional AI struggles with, such as pattern recognition and decision-making in unpredictable environments.

Cortical Labs says that the first CL1 computers will be available for shipment to customers in June, with each unit priced at approximately $35,000.

The use of human neurons in computing raises questions about the future of AI development. Biological computers like the CL1 could provide advantages over conventional AI models, particularly in terms of learning efficiency and energy consumption. The adaptability of neurons could lead to improvements in robotics, automation, and complex data analysis.

OpenAI’s ChatGPT App On macOS Can Now Directly Edit Code

ChatGPT, OpenAI’s AI-powered chatbot platform, can now directly edit code — if you’re on macOS, that is. The newest version of the ChatGPT app for macOS can take action to edit code in supported developer tools, including Xcode, VS Code, and JetBrains. Users can optionally turn on an auto-apply mode so ChatGPT can make edits without the need for additional clicks.

Subscribers to ChatGPT Plus, Pro, and Team can use the code editing feature as of Thursday by updating their macOS app. OpenAI says that code editing will roll out to Enterprise, Edu, and free users next week.

In a post on X, Alexander Embiricos, a member of OpenAI’s product staff working on desktop software, added that the ChatGPT app for Windows will get direct code editing “soon.”

Direct code editing builds on ChatGPT’s “work with apps” capability, which the company launched in beta in November 2024. Work with apps allows the ChatGPT app for macOS to read code in a handful of dev-focused coding environments, minimizing the need to copy and paste code into ChatGPT.

With the ability to directly edit code, ChatGPT now competes more directly with popular AI coding tools like Cursor and GitHub Copilot. OpenAI reportedly has ambitions to launch a dedicated product to support software engineering in the months ahead.

AI coding assistants are becoming wildly popular, with the vast majority of respondents in GitHub’s latest poll saying they’ve adopted AI tools in some form. Y Combinator partner Jared Friedman recently claimed that a quarter of the startups in YC’s W25 batch have 95% of their codebases generated by AI.

Machine learning reveals hidden complexities in palladium oxidation, sheds light on catalyst behavior

Researchers at the Fritz Haber Institute have developed the Automatic Process Explorer (APE), an approach that enhances our understanding of atomic and molecular processes. By dynamically refining simulations, APE has uncovered unexpected complexities in the oxidation of palladium (Pd) surfaces, offering new insights into catalyst behavior. The study is published in the journal Physical Review Letters.

Kinetic Monte Carlo (kMC) simulations are essential for studying the long-term evolution of atomic and molecular processes. They are widely used in fields like surface catalysis, where reactions on material surfaces are crucial for developing efficient catalysts that accelerate reactions in applications such as pollution control. Traditional kMC simulations rely on predefined inputs, which can limit their ability to capture complex atomic movements. This is where the Automatic Process Explorer (APE) comes in.

Developed by the Theory Department at the Fritz Haber Institute, APE overcomes biases in traditional kMC simulations by dynamically updating the list of processes based on the system’s current state. This encourages the discovery of new structures, promoting diversity and efficiency in structural exploration. APE separates process exploration from the kMC simulation itself, using fuzzy machine-learning classification to identify distinct atomic environments, which allows a broader exploration of potential atomic movements.
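For context, the sketch below shows a generic rejection-free kinetic Monte Carlo step of the kind APE builds on: pick one process from a rate-weighted list, apply it, and advance the simulation clock. The process list and rates are placeholders rather than the Pd oxidation model, and APE’s dynamic process discovery and fuzzy classification are not reproduced here.

```python
# Generic rejection-free kMC step (Bortz-Kalos-Lebowitz / Gillespie style),
# shown only to illustrate the "predefined list of processes with rates" that
# APE updates dynamically. Placeholder rates, not the Pd oxidation model.
import math
import random

def kmc_step(processes, time):
    """processes: list of (rate, apply_fn) pairs for the current configuration.
    Selects and applies one process, then returns the new simulation time."""
    total_rate = sum(rate for rate, _ in processes)
    # Pick a process with probability proportional to its rate.
    threshold = random.random() * total_rate
    cumulative = 0.0
    for rate, apply_fn in processes:
        cumulative += rate
        if cumulative >= threshold:
            apply_fn()                      # update the lattice configuration
            break
    # Advance the clock by an exponentially distributed waiting time.
    return time - math.log(1.0 - random.random()) / total_rate
```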

Scientists Just Discovered a Hidden Superpower in Microscopes — Thanks to AI

Traditional microscopy often relies on labeling samples with dyes, but this process is costly and time-consuming. To overcome these limitations, researchers have developed a computational quantitative phase imaging (QPI) method using chromatic aberration and generative AI.

By leveraging the natural variations in focus distances of different wavelengths, the technique constructs through-focus image stacks from a single exposure. With the help of a specially trained diffusion model, this approach enables high-quality imaging of biological specimens, including real-world clinical samples like red blood cells. The breakthrough could revolutionize diagnostics, providing an accessible and efficient alternative to conventional imaging techniques.

Revealing Insights Without Labels