
Fear the fire or harness the flame: The future of generative AI


Generative AI has taken the world by storm. So much so that in the last several months, the technology has twice been a major feature on CBS’s “60 Minutes.” The rise of startlingly conversant chatbots such as ChatGPT has even prompted warnings of runaway technology from some luminary artificial intelligence (AI) experts. While the current state of generative AI is clearly impressive — perhaps dazzling would be a better adjective — it might be even further advanced than is generally understood.

This week, The New York Times reported that some researchers in the tech industry believe these systems have moved toward something that cannot be explained as a “stochastic parrot” — a system that simply mimics its underlying dataset. Instead, they are seeing “an AI system that is coming up with humanlike answers and ideas that weren’t programmed into it.” The observation comes from Microsoft researchers and is based on responses to their prompts from OpenAI’s ChatGPT.

G7 calls for adoption of international technical standards for AI

TOKYO, May 20 (Reuters) — Leaders of the Group of Seven (G7) nations on Saturday called for the development and adoption of international technical standards for trustworthy artificial intelligence (AI) as lawmakers of the rich countries focus on the new technology.

While the G7 leaders, meeting in Hiroshima, Japan, recognised that the approaches to achieving “the common vision and goal of trustworthy AI may vary”, they said in a statement that “the governance of the digital economy should continue to be updated in line with our shared democratic values”.

The agreement came after the European Union, which is represented at the G7, inched closer this month to passing legislation to regulate AI technology, potentially the world’s first comprehensive AI law.

AI technology may help immortalize his performances, says Tom Hanks

In a conversation during an episode of the Adam Buxton podcast, the veteran performer recently voiced his opinion regarding the likelihood of actors being kept alive in movies through the power of advanced technologies.

Known for his iconic roles in numerous blockbuster films like Forrest Gump and Cast Away, Hanks said on the podcast that such technologies could be leveraged to recreate his image, voice and mannerisms “from now until kingdom come.”

UN adviser warns about the destructive use of deepfakes

He argues policymakers need to do more to protect civilians.

Speaking to CTVNews, California-based AI expert Neil Sahota, who has served as an AI adviser to the United Nations, explained: “A digital twin is essentially a replica of something from the real world… Deepfakes are the mirror image of digital twins, meaning that someone had created a digital replica without the permission of that person, and usually for malicious purposes, usually to trick somebody.”

G7 leaders call for standards to keep AI ‘trustworthy’

On Saturday, leaders of the Group of Seven (G7) nations publicly called for the development and adoption of technical standards to keep artificial intelligence (AI) “trustworthy,” adding that they feared governance of the technology has not kept pace with its growth.

This is according to a report by the Financial Post published on Saturday.

The leaders from the U.S., Japan, Germany, Britain, France, Italy, Canada and the EU said in a statement that while approaches to “the common vision and goal of trustworthy AI may vary,” the rules for digital technologies like AI should be “in line with our shared democratic values.”

AI detectors falling short in the battle against cheating

Despite their potential, AI detectors often fall short of accurately identifying and mitigating cheating.

In the age of advanced artificial intelligence, the fight against cheating, plagiarism and misinformation has taken a curious turn.

As developers and companies race to create AI detectors capable of identifying content written by other AIs, a new study from Stanford scholars reveals a disheartening truth: these detectors are far from reliable. Students would probably love to hear this.
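To see why such detectors are brittle, consider a deliberately naive sketch of a statistical detector. The function and threshold below are hypothetical, a crude stand-in for the perplexity- and burstiness-style signals real detectors rely on; its arbitrary cutoff illustrates how a single statistic can misclassify both human and machine writing.

```python
# Hypothetical sketch of a naive statistical "AI detector".
# It flags text with low vocabulary variety; the 0.6 threshold is
# arbitrary, which is one reason heuristics like this are unreliable.

def looks_ai_generated(text, threshold=0.6):
    words = text.lower().split()
    if not words:
        return False
    # Type-token ratio: unique words divided by total words.
    type_token_ratio = len(set(words)) / len(words)
    return type_token_ratio < threshold  # repetitive text gets flagged

repetitive = "the model said the model said the model said"
varied = "Detectors struggle because writing style varies widely between people"
print(looks_ai_generated(repetitive))  # flagged, regardless of true author
print(looks_ai_generated(varied))      # passes, regardless of true author
```

A repetitive human-written passage trips the flag while varied machine-written prose sails through, which mirrors the failure mode the Stanford study describes.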

Machine-learning program reveals genes responsible for sex-specific differences in Alzheimer’s disease progression

Alzheimer’s disease (AD) is a complex neurodegenerative illness with genetic and environmental origins. Females experience faster cognitive decline and cerebral atrophy than males, while males have greater mortality rates. Using a new machine-learning method they developed called “Evolutionary Action Machine Learning (EAML),” researchers at Baylor College of Medicine and the Jan and Dan Duncan Neurological Research Institute (Duncan NRI) at Texas Children’s Hospital have discovered sex-specific genes and molecular pathways that contribute to the development and progression of this condition. The study was published in Nature Communications.

“We have developed a unique machine-learning software that uses an advanced computational predictive metric called the evolutionary action (EA) score as a feature to identify genes that influence AD risk separately in males and females,” said Dr. Olivier Lichtarge, professor of biochemistry at Baylor College of Medicine. “This approach lets us exploit a massive amount of evolutionary data efficiently, so we can now probe smaller cohorts with greater accuracy and identify genes involved in AD.”

EAML is an ensemble computational approach that combines nine machine-learning algorithms to analyze the functional impact of non-synonymous coding variants (DNA mutations that affect the structure and function of the resulting protein) and estimates their deleterious effect using the evolutionary action (EA) score.
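The ensemble idea can be sketched in a few lines. Everything below is illustrative, not from the study: three toy scoring rules stand in for EAML's nine algorithms, and the scores and thresholds are invented; the point is only the majority-vote pattern over per-gene variant scores.

```python
from statistics import mean

# Hypothetical EAML-style sketch: several "algorithms" each vote on whether
# a gene's variants look deleterious, and the ensemble takes the majority.
# Thresholds and data are invented for illustration.

def classify_gene(ea_scores, classifiers):
    """Return True if a majority of classifiers flag the gene."""
    votes = [clf(ea_scores) for clf in classifiers]
    return sum(votes) > len(votes) / 2

# Three toy rules standing in for the nine algorithms EAML uses.
classifiers = [
    lambda s: mean(s) > 50,                  # high average EA score
    lambda s: max(s) > 80,                   # one highly deleterious variant
    lambda s: sum(x > 60 for x in s) >= 2,   # several moderately damaging ones
]

print(classify_gene([70, 85, 40], classifiers))  # True: all three rules agree
print(classify_gene([10, 20, 15], classifiers))  # False: no rule fires
```

Real ensembles typically combine heterogeneous learners (trees, linear models, and so on) rather than fixed thresholds, but the aggregation step, many weak opinions merged into one call, is the same.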

Generative AI Shakes Global Diplomacy At G7 Summit In Japan

Our technological age is witnessing a breakthrough that has existential implications and risks. The innovative behemoth, ChatGPT, created by OpenAI, is ushering us inexorably into an AI economy where machines can spin human-like text, spark deep conversations and unleash unparalleled potential. However, this bold new frontier has its challenges. Security, privacy, data ownership and ethical considerations are complex issues that we must address, as they are no longer just hypothetical but a reality knocking at our door.

The G7, composed of the world’s seven most advanced economies, has recognized the urgency of addressing the impact of AI.


To understand how countries may approach AI, we need to examine a few critical aspects.

Clear regulations and guidelines for generative AI: To ensure the responsible and safe use of generative AI, it’s crucial to have a comprehensive regulatory framework that covers privacy, security and ethics. This framework will provide clear guidance for both developers and users of AI technology.

Public engagement: It’s important to involve different viewpoints in policy discussions about AI, as these decisions affect society as a whole. To achieve this, public consultations or conversations with the general public about generative AI can be helpful.