But the technology could break down trust and social bonds.
Category: robotics/AI
DARPA's Robotic Autonomy in Complex Environments with Resiliency (RACER) program has successfully tested autonomous movement on a new, much larger fleet vehicle – a significant step in scaling up the adaptability and capability of the underlying RACER algorithms.
The RACER Heavy Platform (RHP) vehicles are 12-ton, 20-foot-long, skid-steer tracked vehicles – similar in size to forthcoming robotic and optionally manned combat/fighting vehicles. The RHPs complement the 2-ton, 11-foot-long, Ackermann-steered, wheeled RACER Fleet Vehicles (RFVs) already in use.
“Having two radically different types of vehicles helps us advance towards RACER’s goal of platform agnostic autonomy in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions,” said Stuart Young, RACER program manager.
How do you make a robot smarter?
Posted in robotics/AI
Teaching robots to ask for help is key to making them safer and more efficient.
Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don’t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty — and triggers the robot to ask for clarification.
Because tasks are typically more complex than a simple “pick up a bowl” command, the engineers use large language models (LLMs) — the technology behind tools such as ChatGPT — to gauge uncertainty in complex environments. LLMs are bringing robots powerful capabilities to follow human language, but LLM outputs are still frequently unreliable, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton and the senior author of a study outlining the new method.
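As an illustration only — this is a hypothetical sketch, not the Princeton/Google method, and the option names, scores, and threshold below are made up — the core pattern is: score each candidate action with the LLM, keep every option that remains sufficiently probable, and ask for clarification whenever more than one option survives.

```python
import math

def softmax(scores):
    """Convert raw log-likelihood scores into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def prediction_set(options, scores, threshold=0.15):
    """Keep every option whose probability clears the threshold.

    In conformal-prediction-style approaches the threshold is calibrated
    on held-out data; here it is just a hypothetical fixed value.
    """
    probs = softmax(scores)
    return [opt for opt, p in zip(options, probs) if p >= threshold]

def act_or_ask(options, scores):
    """Act if exactly one option is plausible, otherwise ask for help."""
    plausible = prediction_set(options, scores)
    if len(plausible) == 1:
        return f"EXECUTE: {plausible[0]}"
    return f"ASK: Which did you mean? {plausible}"

# "Pick up a bowl" with one bowl on the table: one option dominates.
print(act_or_ask(["pick up the blue bowl"], [2.0]))

# Five bowls on the table: several options score similarly, so the
# prediction set contains more than one action and the robot asks.
options = [f"pick up bowl {i}" for i in range(1, 6)]
scores = [0.9, 1.0, 0.8, 0.95, 0.85]  # hypothetical LLM scores
print(act_or_ask(options, scores))
```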
Erwan Plantec, Joachim W. Pedersen, Milton L. Montero, Eleni Nisioti, Sebastian Risi, ITU Copenhagen, 2024. https://arxiv.org/abs/2406.
The facility enables both DLR researchers and external drone operators to quickly test and develop Unmanned Aircraft Systems in real-life operations, the German Aerospace Center (DLR) reports.
Following the rapid advancement of artificial intelligence (AI) tools, engineers worldwide have been working on new architectures and hardware components that replicate the organization and functions of the human brain.
Most brain-inspired technologies created to date draw inspiration from the firing of brain cells (i.e., neurons), rather than mirroring the overall structure of neural elements and how they contribute to information processing.
Researchers at Tsinghua University recently introduced a new neuromorphic computational architecture designed to replicate the organization of synapses (i.e., connections between neurons) and the tree-like structure of dendrites (i.e., projections extending from the body of neurons).
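As a rough illustration — not the Tsinghua architecture, just a toy sketch of the dendritic idea — the difference from a conventional "point" neuron is that inputs are first grouped into branch-like compartments, each with its own local nonlinearity, before being combined at the soma:

```python
import numpy as np

rng = np.random.default_rng(0)

def dendritic_neuron(x, branch_weights, soma_weights):
    """A toy two-stage neuron: dendritic branches, then the soma.

    x              : (n_inputs,) input vector
    branch_weights : (n_branches, n_inputs) synapses grouped per branch
    soma_weights   : (n_branches,) how strongly each branch drives the soma
    """
    # Each branch computes its own local weighted sum and nonlinearity,
    # mimicking the tree-like structure of dendrites.
    branch_out = np.tanh(branch_weights @ x)
    # The soma combines the branch outputs and applies the output nonlinearity.
    return np.tanh(soma_weights @ branch_out)

def point_neuron(x, weights):
    """A conventional 'point' neuron: one global weighted sum, one nonlinearity."""
    return np.tanh(weights @ x)

x = rng.normal(size=16)
branch_w = rng.normal(size=(4, 16)) * 0.5  # 4 dendritic branches
soma_w = rng.normal(size=4) * 0.5
flat_w = rng.normal(size=16) * 0.5

print("dendritic output:", dendritic_neuron(x, branch_w, soma_w))
print("point-neuron output:", point_neuron(x, flat_w))
```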
A group of physicists wants to use artificial intelligence to prove that reality doesn’t exist. They want to do this by running an artificial general intelligence as an observer on a quantum computer. I wish this was a joke. But I’m afraid it’s not.
Paper here: https://quantum-journal.org/papers/q–…
In this thought-provoking lecture, Prof. Jay Friedenberg from Manhattan College delves into the intricate interplay between cognitive science, artificial intelligence, and ethics. With nearly 30 years of teaching experience, Prof. Friedenberg discusses how visual perception research informs AI design, the implications of brain-machine interfaces, the role of creativity in both humans and AI, and the necessity for ethical considerations as technology evolves. He emphasizes the importance of human agency in shaping our technological future and explores the concept of universal values that could guide the development of AGI for the betterment of society.
00:00 Introduction to Jay Friedenberg
01:02 Connecting Cognitive Science and AI
02:36 Human Augmentation and Technology
03:50 Brain-Machine Interfaces
05:43 Balancing Optimism and Caution in AI
07:52 Free Will vs Determinism
12:34 Creativity in Humans and Machines
16:45 Ethics and Value Alignment in AI
20:09 Conclusion and Future Work
SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI). An AGI is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.
The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.
A combined team of roboticists from Stanford University and the Toyota Research Institute has found that adding audio data to visual data when training robots helps to improve their learning skills. The team has posted their research on the arXiv preprint server.
The researchers noted that virtually all training done with AI-based robots involves exposing them to a large amount of visual information, while ignoring associated audio. They wondered if adding microphones to robots and allowing them to collect data regarding how something is supposed to sound as it is being done might help them learn a task better.
For example, if a robot is supposed to learn how to open a box of cereal and fill a bowl with it, it may be helpful for it to hear the sound of the box being opened and of the dry cereal cascading into the bowl. To find out, the team designed and carried out four robot-learning experiments.
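The general setup can be sketched as follows; the network sizes, encoder choices, and fusion-by-concatenation are assumptions for illustration, not the Stanford/Toyota Research Institute implementation:

```python
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    """Toy behavior-cloning policy that fuses image and audio features."""

    def __init__(self, action_dim=7):
        super().__init__()
        # Visual encoder: a small CNN over 64x64 RGB frames.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
        )
        # Audio encoder: a small MLP over a flattened spectrogram patch.
        self.audio = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings and predict an action.
        self.head = nn.Sequential(
            nn.Linear(128 + 64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, image, spectrogram):
        fused = torch.cat([self.vision(image), self.audio(spectrogram)], dim=-1)
        return self.head(fused)

# Hypothetical batch: 8 camera frames and 8 matching audio spectrogram patches.
policy = AudioVisualPolicy()
images = torch.randn(8, 3, 64, 64)
spectrograms = torch.randn(8, 1, 32, 32)
actions = policy(images, spectrograms)
print(actions.shape)  # torch.Size([8, 7])
```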
One of the primary goals of the Large Hadron Collider (LHC) experiments is to look for signs of new particles, which could explain many of the unsolved mysteries in physics. Often, searches for new physics are designed to look for one specific type of new particle at a time, using theoretical predictions as a guide. But what about searching for unpredicted – and unexpected – new particles?
Sifting through the billions of collisions that occur in the LHC experiments without knowing exactly what to look for would be a mammoth task for physicists. So, instead of combing through the data and looking for anomalies, the ATLAS and CMS collaborations are letting artificial intelligence (AI) streamline the process.
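One common pattern for this kind of model-agnostic search, sketched here with made-up event features and a toy network rather than the actual ATLAS or CMS pipelines, is to train an autoencoder on ordinary collisions and flag the events it reconstructs poorly:

```python
import torch
import torch.nn as nn

# Toy "event" features (e.g., jet momenta, missing energy), standardized.
torch.manual_seed(0)
normal_events = torch.randn(5000, 12)            # background-like events
weird_events = torch.randn(100, 12) * 3.0 + 2.0  # injected anomalies

# A small autoencoder: compress each event to 3 numbers and reconstruct it.
model = nn.Sequential(
    nn.Linear(12, 3), nn.ReLU(),
    nn.Linear(3, 12),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Train only on ordinary events, so the model learns what "normal" looks like.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(normal_events), normal_events)
    loss.backward()
    optimizer.step()

@torch.no_grad()
def score(events):
    """Per-event reconstruction error; higher means more anomalous."""
    return ((model(events) - events) ** 2).mean(dim=1)

# Events with reconstruction error above the 99th percentile of the
# background scores are flagged as candidates for something unexpected.
threshold = score(normal_events).quantile(0.99)
flagged = (score(weird_events) > threshold).float().mean()
print(f"fraction of injected anomalies flagged: {flagged:.2f}")
```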