
Meta releases four new publicly available AI models for developer use

A team of AI researchers at Meta's Fundamental AI Research (FAIR) lab is making four new AI models publicly available to researchers and developers creating new applications. The team has posted a paper on the arXiv preprint server outlining one of the new models, JASCO, and how it might be used.

As interest in AI applications grows, major players in the field are creating AI models that other entities can use to add AI capabilities to their own applications. In this new effort, the team at Meta has made available four new models: JASCO, AudioSeal, and two versions of Chameleon.

JASCO has been designed to accept different types of audio input and produce an improved sound. The model, the team says, allows users to adjust characteristics such as the sound of drums, guitar chords, or even melodies to craft a desired tune. The model can also accept text input and will use it to flavor a tune.

RACER Speeds Into a Second Phase With Robotic Fleet Expansion and Another Experiment Success

The Robotic Autonomy in Complex Environments with Resiliency (RACER) program successfully tested autonomous movement on a new, much larger fleet vehicle, a significant step in scaling up the adaptability and capability of the underlying RACER algorithms.

The RACER Heavy Platform (RHP) vehicles are 12-ton, 20-foot-long, skid-steer tracked vehicles – similar in size to forthcoming robotic and optionally manned combat/fighting vehicles. The RHPs complement the 2-ton, 11-foot-long, Ackermann-steered, wheeled RACER Fleet Vehicles (RFVs) already in use.

“Having two radically different types of vehicles helps us advance towards RACER’s goal of platform agnostic autonomy in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions,” said Stuart Young, RACER program manager.

How do you make a robot smarter?

Teaching robots to ask for help is key to making them safer and more efficient.

Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don’t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty — and triggers the robot to ask for clarification.

Because tasks are typically more complex than a simple "pick up a bowl" command, the engineers use large language models (LLMs) — the technology behind tools such as ChatGPT — to gauge uncertainty in complex environments. LLMs give robots powerful capabilities for following human language, but their outputs are still frequently unreliable, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton and the senior author of a study outlining the new method.
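The core idea — quantify the model's uncertainty over candidate interpretations and ask for help when it is too high — can be sketched in a few lines. This is an illustrative toy, not the Princeton/Google method itself (their study uses a more rigorous conformal-prediction approach over LLM scores); the function names, entropy measure, and threshold here are all assumptions chosen for clarity.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def pick_or_ask(candidates, probs, threshold=0.5):
    """Act on the most likely candidate, or request clarification when
    the distribution over candidates is too uncertain (high entropy)."""
    if entropy(probs) > threshold:
        return "ASK: which of {} did you mean?".format(", ".join(candidates))
    # Confident enough: pick the highest-probability candidate.
    return max(zip(candidates, probs), key=lambda cp: cp[1])[0]

# One bowl on the table: the instruction is unambiguous, so the robot acts.
print(pick_or_ask(["bowl_1"], [1.0]))
# Five equally plausible bowls: entropy is high, so the robot asks first.
print(pick_or_ask([f"bowl_{i}" for i in range(1, 6)], [0.2] * 5))
```

With a single candidate the entropy is zero and the robot simply acts; with five equally likely bowls the entropy exceeds the threshold and the robot asks for clarification, mirroring the one-bowl versus five-bowls example above.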

A new brain-inspired artificial dendritic neural circuit

Following the rapid advancement of artificial intelligence (AI) tools, engineers worldwide have been working on new architectures and hardware components that replicate the organization and functions of the human brain.

Most brain-inspired technologies created to date draw inspiration from the firing of brain cells (i.e., neurons), rather than mirroring the overall structure of neural elements and how they contribute to information processing.

Researchers at Tsinghua University recently introduced a new neuromorphic computational architecture designed to replicate the organization of synapses (i.e., connections between neurons) and the tree-like structure of dendrites (i.e., projections extending from the body of neurons).
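The general dendritic idea — each branch applies its own nonlinearity to its inputs before the cell body (soma) combines the branch outputs — can be sketched in software. This is a minimal conceptual model, not the Tsinghua team's circuit; the two-stage structure is standard in computational neuroscience, and all weights, names, and thresholds below are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dendritic_neuron(inputs, branch_weights, soma_threshold=1.0):
    """Two-stage neuron: nonlinear dendritic branches, then a somatic sum.

    branch_weights: one weight list per dendritic branch; for simplicity
    each branch here sees the full input vector.
    """
    # Stage 1: each dendritic branch computes its own weighted sum
    # and applies its own local nonlinearity.
    branch_outputs = [
        sigmoid(sum(w * x for w, x in zip(weights, inputs)))
        for weights in branch_weights
    ]
    # Stage 2: the soma sums the branch outputs and fires above threshold.
    return 1 if sum(branch_outputs) > soma_threshold else 0

weights = [[2.0, -1.0], [0.5, 1.5]]
print(dendritic_neuron([1.0, 1.0], weights))  # strong combined drive: fires
print(dendritic_neuron([0.0, 0.0], weights))  # weak drive: stays silent
```

The point of the two-stage structure is that nonlinearity happens twice: a point-neuron model (a single weighted sum and threshold) cannot express some input patterns that branch-local nonlinearities can.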

AI could prove that reality doesn’t exist, physicists say


A group of physicists wants to use artificial intelligence to prove that reality doesn’t exist. They want to do this by running an artificial general intelligence as an observer on a quantum computer. I wish this was a joke. But I’m afraid it’s not.

Paper here: https://quantum-journal.org/papers/q–


Exploring AI, Cognitive Science, and Ethics | Deep Interview with Jay Friedenberg

In this thought-provoking lecture, Prof. Jay Friedenberg from Manhattan College delves into the intricate interplay between cognitive science, artificial intelligence, and ethics. With nearly 30 years of teaching experience, Prof. Friedenberg discusses how visual perception research informs AI design, the implications of brain-machine interfaces, the role of creativity in both humans and AI, and the necessity for ethical considerations as technology evolves. He emphasizes the importance of human agency in shaping our technological future and explores the concept of universal values that could guide the development of AGI for the betterment of society.

00:00 Introduction to Jay Friedenberg
01:02 Connecting Cognitive Science and AI
02:36 Human Augmentation and Technology
03:50 Brain-Machine Interfaces
05:43 Balancing Optimism and Caution in AI
07:52 Free Will vs Determinism
12:34 Creativity in Humans and Machines
16:45 Ethics and Value Alignment in AI
20:09 Conclusion and Future Work

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI): one that is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.
