
Advanced AI keeps Sundar Pichai up at night and makes Sam Altman a bit scared. Here’s why some tech execs are wary of its potential dangers

A wave of consumer enthusiasm following the launch of OpenAI’s viral ChatGPT has prompted some major tech companies to pour resources into AI development and launch new AI-powered products.

But not everyone is feeling optimistic about the highly intelligent technology.

Last month, several high-profile tech figures, including Elon Musk and Steve Wozniak, threw their weight behind an open letter calling for a pause on developing advanced AI. The letter cited various concerns about the consequences of developing tech more powerful than OpenAI’s GPT-4, including risks to democracy.

Panic at Google: Samsung considers dumping search for Bing and ChatGPT

The New York Times has a big piece detailing Google’s “shock” and “panic” when Samsung recently floated the idea of switching its smartphones from Google Search to Bing. After being the butt of jokes for years, Bing has been seen as a rising threat to Google thanks to Microsoft’s deal with OpenAI and the integration of the red-hot ChatGPT generative AI. Now, according to the report, one of Android’s biggest manufacturers is threatening to switch its new phones away from Google Search.

Of course, preinstalled search deals are more about cash than quality. Google pays billions every year to be the default search engine on popular products with deals framed as either “revenue sharing” or “traffic acquisition fees.” Google reportedly pays as much as $3.5 billion per year to be the default search on Samsung phones, while it pays Apple $20 billion per year to be the default search on iOS and macOS. The report notes that the Samsung/Google search contract “is under negotiation, and Samsung could stick with Google.”

A neuromorphic visual sensor can recognize moving objects and predict their path

A new bio-inspired sensor can recognize moving objects in a single frame from a video and successfully predict where they will move to. This smart sensor, described in a Nature Communications paper, will be a valuable tool in a range of fields, including dynamic vision sensing, automatic inspection, industrial process control, robotic guidance, and autonomous driving technology.

Current motion detection systems need many components and complex algorithms doing frame-by-frame analyses, which makes them inefficient and energy-intensive. Inspired by the human visual system, researchers at Aalto University have developed a new neuromorphic vision technology that integrates sensing, memory, and processing in a single device that can detect motion and predict trajectories.

At the core of their technology is an array of photomemristors, light-sensitive elements that produce an electric current in response to light. The current doesn’t immediately stop when the light is switched off. Instead, it decays gradually, which means that photomemristors can effectively “remember” whether they’ve been exposed to light recently. As a result, a sensor made from an array of photomemristors doesn’t just record instantaneous information about a scene, like a camera does, but also includes a dynamic memory of the preceding instants.
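
To make the idea concrete, here is a minimal sketch, in Python, of how a decaying photocurrent turns a single array readout into a short motion memory. This is not the Aalto group’s device model; the exponential decay, the time constant, and the 1x5 pixel strip are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the Aalto device physics): each pixel's photocurrent
# decays exponentially after illumination ends, so one readout of the array
# encodes where a bright object was in the recent past, not just where it is.
DECAY_TAU = 0.5      # assumed decay time constant, seconds (illustrative)
DT = 0.1             # time step between readouts, seconds (illustrative)

def update_currents(currents, light_mask):
    """Advance the array one time step.

    currents:   2D array of photocurrents from the previous step
    light_mask: 2D boolean array, True where light currently falls
    """
    decayed = currents * np.exp(-DT / DECAY_TAU)   # gradual decay, the "memory"
    return np.where(light_mask, 1.0, decayed)      # illuminated pixels saturate

# A bright spot moving left to right across a 1x5 strip of pixels:
currents = np.zeros((1, 5))
for step in range(3):
    mask = np.zeros((1, 5), dtype=bool)
    mask[0, step] = True
    currents = update_currents(currents, mask)

# The trailing, partially decayed currents point back along the object's path,
# so its direction (and a naive extrapolation of its next position) can be
# read from this single snapshot.
print(np.round(currents, 2))   # e.g. [[0.67 0.82 1.   0.   0.  ]]
```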

DeepMind Sparrow Is A New AGI That Is Safer And More Precise

Year 2022


Advances in the field are producing AI models that interact more effectively, precisely, and safely. Large language models (LLMs) have excelled at a variety of tasks in recent years, including question answering, summarization, and conversation. Dialogue particularly interests researchers because it allows for flexible and dynamic communication.

However, LLM-powered chat agents frequently produce incorrect or fabricated content, use discriminatory language, or encourage unsafe behavior. Learning from human feedback offers a path to safer conversation agents: using reinforcement learning on input from study participants, researchers can explore new training strategies that promise a safer system.
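
As a rough illustration of the learning-from-feedback idea, the toy sketch below fits a reward model to pairwise human preferences. It is not DeepMind’s Sparrow training code; the feature vectors, data, and linear model are invented for the example.

```python
import numpy as np

# Toy illustration of preference-based reward learning, the idea behind
# training dialogue agents from human feedback; not DeepMind's Sparrow code.
# Each candidate reply is represented by a small hand-made feature vector.
rng = np.random.default_rng(0)

# Hypothetical data: pairs (preferred reply, rejected reply) as feature vectors.
preferred = rng.normal(loc=+1.0, size=(200, 2))
rejected = rng.normal(loc=-1.0, size=(200, 2))

w = np.zeros(2)          # linear reward model: reward(x) = w . x
lr = 0.1

for _ in range(500):
    # Bradley-Terry / logistic model of P(preferred beats rejected)
    margin = preferred @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = ((p - 1.0)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad       # gradient step on the negative log-likelihood

# The learned reward ranks candidate replies; a reinforcement-learning step
# would then push the dialogue model toward higher-reward replies, with
# separate safety rules penalising harmful ones.
print("learned reward weights:", np.round(w, 2))
```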

In its most recent paper, DeepMind has unveiled Sparrow, a dialogue agent designed to reduce the risk of harmful and incorrect replies. Sparrow’s mission is to show how conversation agents can be trained to be more helpful, accurate, and safe.

AI Could Make More Work for Us, Instead of Simplifying Our Lives

Scientists said automation allowed them to evaluate a greater number of hypotheses and increased the number of ways they could make subtle changes to the experimental set-up. This had the effect of boosting the volume of data that needed checking, standardizing, and sharing.

Also, robots needed to be “trained” in performing experiments previously carried out manually. Humans, too, needed to develop new skills for preparing, repairing, and supervising robots. This was done to ensure there were no errors in the scientific process.

Scientific work is often judged on output such as peer-reviewed publications and grants. However, the time taken to clean, troubleshoot, and supervise automated systems competes with the tasks traditionally rewarded in science. These less valued tasks may also be largely invisible, particularly to managers, who spend less time in the lab and may be unaware of the mundane work involved.

New voice cloning AI lets “you” speak multiple languages


In January, Microsoft unveiled an AI that can clone a speaker’s voice after hearing them talk for just three seconds. While this system, VALL-E, was far from the first voice cloning AI, its accuracy and need for such a small audio sample set a new bar for the tech.

Microsoft has now raised that bar again with an update called “VALL-E X,” which can clone a voice from a short sample (4 to 10 seconds) and then use it to synthesize speech in a different language, all while preserving the original speaker’s voice, emotion, and tone.

Ethics and Rights for AI Artwork

By Cheryl Gallagher, Cultural and Creative Content Specialist

In the news recently, the US Copyright Office partially rescinded copyright protections for a work containing exclusively AI-generated art. It was a landmark decision that is likely just the beginning of a long legal and ethical debate around the role, ethics, and rights of artificial intelligence in today’s global society — and tomorrow’s interplanetary one.

AI artworks are currently being denied copyright protection because copyright only protects human-generated work, and in the Copyright Office’s current opinion, the “artist” does not exert enough creative control over the program’s output (i.e., merely writing a prompt to generate an image does not make the result copyrightable, because the program, not the human, generated it). At least some AI-generated images are considered to have enough human “involvement” to be copyrightable, but that requires more direct work with the imagery than prompting alone.
