When Google announced this week that its climate emissions had risen by 48 percent since 2019, it pointed the finger at artificial intelligence.

US tech firms are building vast networks of data centers across the globe and say AI is fueling the growth, throwing the spotlight on the amount of energy the technology is sucking up and its impact on the environment.

By Gitta Kutyniok

The recent unprecedented success of foundation models like GPT-4 has heightened the general public's awareness of artificial intelligence (AI) and sparked lively discussion about its associated possibilities and threats. In March 2023, a group of technology leaders published an open letter that called for a pause in AI development to allow time for the creation and implementation of shared safety protocols. Policymakers around the world have also responded to rapid advancements in AI technology with various regulatory efforts, including the European Union (EU) AI Act and the Hiroshima AI Process.

One of the current problems of AI technology, and a consequential danger, is its unreliability and the consequent lack of trustworthiness. In recent years, AI-based technologies have repeatedly run into severe issues with safety, security, privacy, and responsibility, particularly regarding fairness and interpretability. Privacy violations, unfair decisions, unexplainable results, and accidents involving self-driving cars are all examples of concerning outcomes.

This is a recording from Day 1 of the AGI23 Conference, held June 16, 2023, in Stockholm. The video shows the tutorial "Test and Evaluation First Principles for General Learning Systems," led by Tyler Cody.

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI). An AGI is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.

Website: https://singularitynet.io

A team of AI researchers at Meta's Fundamental AI Research (FAIR) division is making four new AI models publicly available to researchers and developers creating new applications. The team has posted a paper on the arXiv preprint server outlining one of the new models, JASCO, and how it might be used.

As interest in AI applications grows, major players in the field are creating AI models that can be used by other entities to add AI capabilities to their own applications. In this new effort, the team at Meta has made available four new models: JASCO, AudioSeal and two versions of Chameleon.

JASCO has been designed to accept different types of audio input and create an improved sound. The model, the team says, allows users to adjust characteristics such as the sound of drums, guitar chords, or even melodies to craft a desired tune. The model can also accept text input and will use it to flavor a tune.
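For readers who want to experiment, below is a minimal sketch of what conditioned generation with JASCO might look like, assuming it is exposed through Meta's open-source AudioCraft library in the style of its MusicGen API. The class name `JASCO`, the checkpoint id, the `generate_music` method, and the `chords` conditioning argument are illustrative assumptions, not details confirmed by the article.

```python
# Hypothetical sketch only: assumes JASCO follows AudioCraft's MusicGen-style
# API. The class name, checkpoint id, method name, and `chords` keyword are
# illustrative guesses, not confirmed by the article.
from audiocraft.models import JASCO            # assumed entry point
from audiocraft.data.audio import audio_write  # standard AudioCraft helper

model = JASCO.get_pretrained('facebook/jasco-chords-drums-400M')  # assumed id
model.set_generation_params(duration=10)       # generate ten seconds of audio

# A text prompt "flavors" the tune, while symbolic chords steer the harmony;
# each pair is (chord symbol, start time in seconds).
chord_progression = [('C', 0.0), ('F', 2.5), ('G', 5.0), ('C', 7.5)]

wav = model.generate_music(                    # assumed method name
    descriptions=['upbeat synth-pop with punchy drums'],
    chords=chord_progression,                  # assumed conditioning kwarg
)
audio_write('jasco_demo', wav[0].cpu(), model.sample_rate, strategy='loudness')
```

The design point the article highlights is that JASCO combines such conditioning signals (drums, chords, melodies) with the text prompt, rather than relying on text alone.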

DARPA's Robotic Autonomy in Complex Environments with Resiliency (RACER) program successfully tested autonomous movement on a new, much larger fleet vehicle – a significant step in scaling up the adaptability and capability of the underlying RACER algorithms.

The RACER Heavy Platform (RHP) vehicles are 12-ton, 20-foot-long, skid-steer tracked vehicles – similar in size to forthcoming robotic and optionally manned combat/fighting vehicles. The RHPs complement the 2-ton, 11-foot-long, Ackermann-steered, wheeled RACER Fleet Vehicles (RFVs) already in use.

“Having two radically different types of vehicles helps us advance towards RACER’s goal of platform agnostic autonomy in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions,” said Stuart Young, RACER program manager.