

Generative AI has taken the world by storm. So much so that in the last several months, the technology has twice been a major feature on CBS’s “60 Minutes.” The rise of startlingly conversant chatbots such as ChatGPT has even prompted warnings of runaway technology from some luminary artificial intelligence (AI) experts. While the current state of generative AI is clearly impressive — perhaps dazzling would be a better adjective — it might be even further advanced than is generally understood.

This week, The New York Times reported that some researchers in the tech industry believe these systems have moved toward something that cannot be explained as a “stochastic parrot” — a system that simply mimics its underlying dataset. Instead, they are seeing “an AI system that is coming up with humanlike answers and ideas that weren’t programmed into it.” This observation comes from Microsoft researchers and is based on responses to their prompts from OpenAI’s ChatGPT.

TOKYO, May 20 (Reuters) — Leaders of the Group of Seven (G7) nations on Saturday called for the development and adoption of international technical standards for trustworthy artificial intelligence (AI) as lawmakers of the rich countries focus on the new technology.

While the G7 leaders, meeting in Hiroshima, Japan, recognised that the approaches to achieving “the common vision and goal of trustworthy AI may vary”, they said in a statement that “the governance of the digital economy should continue to be updated in line with our shared democratic values”.

The agreement came after the European Union, which is represented at the G7, inched closer this month to passing legislation to regulate AI technology, potentially the world’s first comprehensive AI law.

SABINE HOSSENFELDER: My name is Sabine Hossenfelder. I’m a physicist and Research Fellow at the Frankfurt Institute for Advanced Studies, and I have a book that’s called “Existential Physics: A Scientist’s Guide to Life’s Biggest Questions.”

NARRATOR: Why did you pursue a career in physics?

HOSSENFELDER: I originally studied mathematics, not physics, because I was broadly interested in the question how much can we describe about nature with mathematics? But mathematics is a really big field and I couldn’t make up my mind exactly what to study. And so I decided to focus on that part of mathematics that’s actually good to describe nature and that naturally led me to physics. I was generally trying to make sense of the world and I thought that human interactions, social systems are a pretty hopeless case. There’s no way I’ll ever make sense of them. But simple things like particles or maybe planets and moons, I might be able to work that out. In the foundations of physics, we work with a lot of mathematics and I know from my own experience that it’s really, really hard to learn. And so I think for a lot of people out there, the journal articles that we write in the foundations of physics are just incomprehensible.

In a conversation during an episode of the Adam Buxton podcast, the veteran performer recently voiced his opinion regarding the likelihood of actors being kept alive in movies through the power of advanced technologies.

Known for his iconic roles in numerous blockbuster films such as Forrest Gump and Cast Away, Hanks said in the podcast that such technologies could be leveraged to recreate his image, voice, and mannerisms from “now until kingdom come.”

He argues policymakers need to do more to protect civilians.

Speaking to CTVNews, he explained the difference between digital twins and deepfakes.



“A digital twin is essentially a replica of something from the real world… Deepfakes are the mirror image of digital twins, meaning that someone had created a digital replica without the permission of that person, and usually for malicious purposes, usually to trick somebody,” California-based AI expert Neil Sahota, who has served as an AI adviser to the United Nations, told the news outlet.

On Saturday, leaders of the Group of Seven (G7) nations publicly called for the development and adoption of technical standards to keep artificial intelligence (AI) “trustworthy.” They added that they feared governance of the technology has not kept pace with its growth.

This is according to a report by the Financial Post published on Saturday.

The leaders from the U.S., Japan, Germany, Britain, France, Italy, Canada and the EU said in a statement that while approaches to “the common vision and goal of trustworthy AI may vary,” the rules for digital technologies like AI should be “in line with our shared democratic values.”

Despite their potential, AI detectors often fall short of accurately identifying and mitigating cheating.

In the age of advanced artificial intelligence, the fight against cheating, plagiarism and misinformation has taken a curious turn.

As developers and companies race to create AI detectors capable of identifying content written by other AIs, a new study from Stanford scholars reveals a disheartening truth: these detectors are far from reliable — a finding students would probably be glad to hear.