AI researchers at Meta’s Fundamental AI Research (FAIR) team are making four new AI models publicly available to researchers and developers creating new applications. The team has posted a paper on the arXiv preprint server outlining one of the new models, JASCO, and how it might be used.
As interest in AI applications grows, major players in the field are creating AI models that can be used by other entities to add AI capabilities to their own applications. In this new effort, the team at Meta has made available four new models: JASCO, AudioSeal and two versions of Chameleon.
JASCO has been designed to accept several types of audio input alongside text and use them to shape the music it generates. The model, the team says, allows users to adjust characteristics such as drum patterns, guitar chords, or melodies to craft a tune. It can also accept a text prompt and will use it to flavor the result.
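As a purely hypothetical sketch of that workflow (the class and field names below are illustrative assumptions, not Meta's released JASCO interface), a generation request might bundle a text prompt with named audio conditions such as a drum loop or a chord progression:

```python
# Hypothetical illustration only: these names do not reflect Meta's JASCO API.
# The point is the shape of the workflow -- a text prompt plus named audio
# conditions (drums, chords, melody) feeding one conditional music generator.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class GenerationRequest:
    text_prompt: str                                   # style/mood description
    audio_conditions: Dict[str, str] = field(default_factory=dict)  # condition name -> audio file path
    duration_seconds: float = 10.0


def describe(request: GenerationRequest) -> str:
    """Summarize what a conditional music generator would be asked to produce."""
    conds = ", ".join(f"{name} from {path}" for name, path in request.audio_conditions.items()) or "none"
    return (f"Generate {request.duration_seconds:.0f}s of audio for prompt "
            f"'{request.text_prompt}' with audio conditions: {conds}")


req = GenerationRequest(
    text_prompt="upbeat 80s synth-pop with bright pads",
    audio_conditions={"drums": "drum_loop.wav", "chords": "chord_progression.wav"},
)
print(describe(req))
```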
The RACER Heavy Platform (RHP) vehicles are 12-ton, 20-foot-long, skid-steer tracked vehicles, similar in size to forthcoming robotic and optionally manned combat vehicles. The RHPs complement the 2-ton, 11-foot-long, Ackermann-steered, wheeled RACER Fleet Vehicles (RFVs) already in use.
“Having two radically different types of vehicles helps us advance towards RACER’s goal of platform agnostic autonomy in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions,” said Stuart Young, program manager of DARPA’s RACER (Robotic Autonomy in Complex Environments with Resiliency) program.
Teaching robots to ask for help is key to making them safer and more efficient.
Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don’t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty — and triggers the robot to ask for clarification.
Because tasks are typically more complex than a simple “pick up a bowl” command, the engineers use large language models (LLMs) — the technology behind tools such as ChatGPT — to gauge uncertainty in complex environments. LLMs give robots powerful capabilities for following human-language instructions, but their outputs are still frequently unreliable, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton and the senior author of a study outlining the new method.
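The general mechanism can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the researchers' implementation: the point is simply that when more than one reading of an instruction remains plausible, the robot asks instead of acting.

```python
# Minimal sketch of ambiguity-triggered clarification (not the authors' exact method).
# Assumption: an LLM assigns each candidate action a probability-like confidence;
# if more than one candidate clears the threshold, the robot asks for help.

from typing import Dict, List


def candidates_above_threshold(scores: Dict[str, float], threshold: float) -> List[str]:
    """Return all candidate actions whose confidence exceeds the threshold."""
    return [action for action, p in scores.items() if p >= threshold]


def decide(scores: Dict[str, float], threshold: float = 0.3) -> str:
    """Act if exactly one plausible option remains; otherwise ask for clarification."""
    plausible = candidates_above_threshold(scores, threshold)
    if len(plausible) == 1:
        return f"EXECUTE: {plausible[0]}"
    options = ", ".join(plausible) if plausible else "no confident option"
    return f"ASK: Which did you mean? ({options})"


# One bowl on the table: the instruction is unambiguous, so the robot acts.
print(decide({"pick up the blue bowl": 0.92}))

# Five bowls: several interpretations stay plausible, so the robot asks.
print(decide({
    "pick up the blue bowl": 0.35,
    "pick up the red bowl": 0.33,
    "pick up the nearest bowl": 0.31,
}))
```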
The facility enables both DLR researchers and external drone operators to quickly test and develop Unmanned Aircraft Systems in real-life operations, the German Aerospace Center (DLR) reports.
Following the rapid advancement of artificial intelligence (AI) tools, engineers worldwide have been working on new architectures and hardware components that replicate the organization and functions of the human brain.
Most brain-inspired technologies created to date draw inspiration from the firing of brain cells (i.e., neurons), rather than mirroring the overall structure of neural elements and how they contribute to information processing.
Researchers at Tsinghua University recently introduced a new neuromorphic computational architecture designed to replicate the organization of synapses (i.e., connections between neurons) and the tree-like structure of dendrites (i.e., projections extending from the body of neurons).
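To make the distinction concrete, here is a toy Python sketch of dendritic-style integration. It is an illustrative model only, not the Tsinghua architecture: each branch integrates its own group of synaptic inputs through a local nonlinearity, and the soma combines the branch outputs before deciding whether to fire.

```python
# Illustrative toy model of dendritic integration (not the Tsinghua design):
# inputs are grouped onto dendritic branches, each branch applies its own
# nonlinearity, and the soma sums the branch outputs before firing.

import numpy as np


def dendritic_neuron(inputs: np.ndarray, weights: np.ndarray, soma_threshold: float = 1.0) -> int:
    """inputs and weights have shape (n_branches, n_synapses_per_branch)."""
    branch_drive = np.sum(inputs * weights, axis=1)   # per-branch synaptic integration
    branch_out = np.tanh(branch_drive)                # local dendritic nonlinearity
    soma_potential = branch_out.sum()                 # soma combines the branches
    return int(soma_potential >= soma_threshold)      # spike if the threshold is crossed


rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(4, 8)).astype(float)    # 4 branches, 8 binary synaptic inputs each
w = rng.uniform(0.0, 0.5, size=(4, 8))               # random synaptic strengths
print(dendritic_neuron(x, w))
```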
A group of physicists wants to use artificial intelligence to prove that reality doesn’t exist. They want to do this by running an artificial general intelligence as an observer on a quantum computer. I wish this was a joke. But I’m afraid it’s not.
In this thought-provoking lecture, Prof. Jay Friedenberg from Manhattan College delves into the intricate interplay between cognitive science, artificial intelligence, and ethics. With nearly 30 years of teaching experience, Prof. Friedenberg discusses how visual perception research informs AI design, the implications of brain-machine interfaces, the role of creativity in both humans and AI, and the necessity for ethical considerations as technology evolves. He emphasizes the importance of human agency in shaping our technological future and explores the concept of universal values that could guide the development of AGI for the betterment of society.
00:00 Introduction to Jay Friedenberg
01:02 Connecting Cognitive Science and AI
02:36 Human Augmentation and Technology
03:50 Brain-Machine Interfaces
05:43 Balancing Optimism and Caution in AI
07:52 Free Will vs Determinism
12:34 Creativity in Humans and Machines
16:45 Ethics and Value Alignment in AI
20:09 Conclusion and Future Work
SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI): one that is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.
The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.