
Former OpenAI Director Warns There Are Bad Things AI Can Do Besides Kill You

There are a lot of other ways that AI could really take things in a bad direction.


One of the OpenAI directors who worked to oust CEO Sam Altman is issuing some stark warnings about the future of unchecked artificial intelligence.

In an interview during Axios’ AI+ summit, former OpenAI board member Helen Toner suggested that the risks AI poses to humanity aren’t just worst-case scenarios from science fiction.

“I just think sometimes people hear the phrase ‘existential risk’ and they just think Skynet, and robots shooting humans,” Toner said, referencing the evil AI technology from the “Terminator” films that’s often used as a metaphor for worst-case-scenario AI predictions.

New technique could help build quantum computers of the future

Quantum computers have the potential to solve complex problems in human health, drug discovery, and artificial intelligence millions of times faster than some of the world’s fastest supercomputers. A network of quantum computers could advance these discoveries even faster. But before that can happen, the computer industry will need a reliable way to string together billions of qubits—or quantum bits—with atomic precision.

AI Trained to Draw Inspiration from Images, Not Copy Them

Powerful new artificial intelligence models sometimes, quite famously, get things wrong—whether hallucinating false information or memorizing others’ work and offering it up as their own. To address the latter, researchers led by a team at The University of Texas at Austin have developed a framework to train AI models on images corrupted beyond recognition.

DALL-E, Midjourney and Stable Diffusion are among the text-to-image diffusion generative AI models that can turn arbitrary user text into highly realistic images. All three are now facing lawsuits from artists who allege generated samples replicate their work. Trained on billions of image-text pairs that are not publicly available, the models are capable of generating high-quality imagery from textual prompts but may draw on copyrighted images that they then replicate.

The newly proposed framework, called Ambient Diffusion, gets around this problem by training diffusion models only on image data that has been corrupted beyond recognition. Early results suggest the framework can still generate high-quality samples without ever seeing anything recognizable as an original source image.
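For intuition, here is a minimal PyTorch sketch of the training idea, assuming pixel-masking as the corruption: the model only ever sees masked images, receives a further-masked and noised copy as input, and is supervised only on pixels that were observed in the training data. TinyDenoiser, the mask probabilities, and the single-step loss are illustrative placeholders rather than the authors’ implementation, and the full diffusion noise schedule is reduced to a single noise level.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a real diffusion U-Net (illustrative only)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels * 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, mask):
        # The mask is passed in alongside the image so the network
        # knows which pixels were actually observed.
        return self.net(torch.cat([x, mask], dim=1))

def random_mask(x, p):
    """Zero each pixel independently with probability p; return data and mask."""
    keep = (torch.rand_like(x) > p).float()
    return x * keep, keep

# The trainer only ever receives already-corrupted images
# (simulated here with random tensors standing in for photos).
images = torch.rand(8, 3, 32, 32)
corrupted, mask0 = random_mask(images, p=0.4)   # all the trainer gets to see

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step: corrupt the data *further*, add noise, and ask the
# model to restore only the pixels observed in the training set.
further, mask1 = random_mask(corrupted, p=0.2)
sigma = 0.5                                     # one noise level; real models sweep a schedule
noisy = further + sigma * torch.randn_like(further)
pred = model(noisy, mask0 * mask1)
loss = (((pred - corrupted) ** 2) * mask0).mean()  # supervise observed pixels only
opt.zero_grad()
loss.backward()
opt.step()
print(f"training loss: {loss.item():.4f}")
```

Because the loss never touches pixels the model was not shown, the network cannot simply memorize complete training images; to fill in the extra masked pixels it has to learn the broader image distribution, which is the intuition behind the framework avoiding verbatim replication.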

Four-legged, dog-like robot ‘sniffs’ hazardous gases in inaccessible environments

Nightmare material or truly man’s best friend? A team of researchers equipped a dog-like quadruped robot with a mechanized arm that takes air samples in potentially treacherous environments, such as an abandoned building or a fire scene. The robot dog carries the samples back to a person, who screens them for potentially hazardous compounds, according to the team, which published its study in Analytical Chemistry. While the system needs further refinement, demonstrations show its potential value in dangerous conditions.

Apple Bursts Onto the AI Stage with Apple Intelligence, ChatGPT, and Multimodal Siri

Apple also announced a ton of other home-grown AI innovations during Monday’s show. These include AI-generated emojis called Genmoji, a feature that prioritizes important messages, another that summarizes group chats, and AI-generated summaries of webpages in Safari.

The financial terms of Apple’s deal with OpenAI were not disclosed on Monday and remain unclear. While Google has paid Apple billions of dollars for default search privileges on Apple products, OpenAI may require an investment to provide computing at this magnitude: processing AI requests for billions of people is anything but cheap. If Apple pays OpenAI for this, Microsoft, as OpenAI’s largest investor and cloud provider, may ultimately benefit from the deal.

Neuralink and the Future of Work — Deep Interview With Yip Thy Diep Ta

This interview with Yip Thy Diep Ta, CEO of J3d.ai, delves into AI-driven collective intelligence for decision-making, ethical considerations in AI development (fairness, bias mitigation, diversity), the implications of human-machine integration technologies (e.g., Neuralink), and the evolving role of humans in an AI-driven workforce.

Systain3r Summit: https://www.systain3r.com/

#AI #AGI #Futureofwork

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI): one that is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.

Website: https://singularitynet.io