
In a recent episode of the Adam Buxton podcast, the veteran performer voiced his opinion on the likelihood of actors being kept alive in movies through advanced technology.

Known for iconic roles in blockbusters like Forrest Gump and Cast Away, Tom Hanks said such technology could be used to recreate his image, voice, and mannerisms “from now until kingdom come.”

He argues policymakers need to do more to protect civilians.

Speaking to CTVNews, one AI expert explained the difference between a digital twin and a malicious deepfake.



“A digital twin is essentially a replica of something from the real world… Deepfakes are the mirror image of digital twins, meaning that someone had created a digital replica without the permission of that person, and usually for malicious purposes, usually to trick somebody,” California-based AI expert Neil Sahota, who has served as an AI adviser to the United Nations, told the news outlet.

On Saturday, leaders of the Group of Seven (G7) nations publicly called for the development and adoption of technical standards to keep artificial intelligence (AI) “trustworthy,” adding that they fear governance of the technology has not kept pace with its growth.

This is according to a report by the Financial Post published on Saturday.

The leaders from the U.S., Japan, Germany, Britain, France, Italy, Canada and the EU said in a statement that while approaches to “the common vision and goal of trustworthy AI may vary,” the rules for digital technologies like AI should be “in line with our shared democratic values.”

Despite their potential, AI detectors often fall short of accurately identifying and mitigating cheating.

In the age of advanced artificial intelligence, the fight against cheating, plagiarism and misinformation has taken a curious turn.

As developers and companies race to create AI detectors capable of identifying content written by other AIs, a new study from Stanford scholars reveals a disheartening truth: these detectors are far from reliable. Students would probably love to hear this.
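For intuition, here is a minimal sketch of the perplexity-thresholding idea behind many detection tools. The GPT-2 model and the cutoff value are illustrative assumptions, not the Stanford study's method:

```python
# Sketch: flag text as "AI-written" when a language model finds it very
# predictable (low perplexity). This is the rough idea behind several
# detectors; the threshold here is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
    # Low perplexity = highly predictable prose. Polished human writing,
    # including that of non-native English speakers, can also score low,
    # which is one reason such detectors misfire.
    return perplexity(text) < threshold
```

A detector built on a signal this crude is easy to fool in both directions, which is consistent with the detectors' unreliability described above.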

Alzheimer’s disease (AD) is a complex neurodegenerative illness with genetic and environmental origins. Females experience faster cognitive decline and cerebral atrophy than males, while males have greater mortality rates. Using a new machine-learning method they developed called “Evolutionary Action Machine Learning (EAML),” researchers at Baylor College of Medicine and the Jan and Dan Duncan Neurological Research Institute (Duncan NRI) at Texas Children’s Hospital have discovered sex-specific genes and molecular pathways that contribute to the development and progression of this condition. The study was published in Nature Communications.

“We have developed a unique machine-learning software that uses an advanced computational predictive metric called the evolutionary action (EA) score as a feature to identify genes that influence AD risk separately in males and females,” Dr. Olivier Lichtarge, professor of biochemistry at Baylor College of Medicine, said. “This approach lets us exploit a massive amount of evolutionary data efficiently, so we can now probe smaller cohorts with greater accuracy and identify genes involved in AD.”

EAML is an ensemble computational approach that combines nine machine learning algorithms to analyze the functional impact of non-synonymous coding variants, defined as DNA mutations that affect the structure and function of the resulting protein, and estimates their deleterious effect on protein function using the evolutionary action (EA) score.
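Conceptually, the pipeline can be mocked up in a few lines. The sketch below uses hypothetical per-gene EA scores and only three algorithms instead of EAML's nine; it illustrates the ensemble idea, not the published EAML software:

```python
# Sketch: vote several classifiers over per-gene evolutionary action (EA)
# features to rank candidate risk genes. The data here is a random stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 50))    # 200 subjects x 50 genes: hypothetical EA scores
y = rng.integers(0, 2, 200)  # hypothetical AD case/control labels

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the models
)

# Genes whose EA features consistently improve classification are candidate
# risk genes; running this separately on male and female cohorts would
# mirror the study's sex-specific analysis.
print(cross_val_score(ensemble, X, y, cv=5).mean())
```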

Our technological age is witnessing a breakthrough that has existential implications and risks. The innovative behemoth, ChatGPT, created by OpenAI, is ushering us inexorably into an AI economy where machines can spin human-like text, spark deep conversations and unleash unparalleled potential. However, this bold new frontier has its challenges. Security, privacy, data ownership and ethical considerations are complex issues that we must address, as they are no longer just hypothetical but a reality knocking at our door.

The G7, composed of the world’s seven most advanced economies, has recognized the urgency of addressing the impact of AI.


To understand how countries may approach AI, we need to examine a few critical aspects.

Clear regulations and guidelines for generative AI: To ensure the responsible and safe use of generative AI, it’s crucial to have a comprehensive regulatory framework that covers privacy, security and ethics. This framework will provide clear guidance for both developers and users of AI technology.

Public engagement: It’s important to involve different viewpoints in policy discussions about AI, as these decisions affect society as a whole. To achieve this, public consultations or conversations with the general public about generative AI can be helpful.

That is, if you’re paying attention.

So Apple has restricted the use of OpenAI’s ChatGPT and Microsoft’s Copilot, according to The Wall Street Journal.

It’s not just Apple, but also Samsung and Verizon in the tech world and a who’s who of banks (Bank of America, Citi, Deutsche Bank, Goldman, Wells Fargo, and JPMorgan). This is because of the possibility of confidential data escaping; in any event, ChatGPT’s privacy policy explicitly says your prompts can be used to train its models unless you opt out. The fear of leaks isn’t unfounded: in March, a bug in ChatGPT revealed data from other users.
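The mechanics of the worry are simple: whatever an employee pastes into a prompt leaves the company's control. Below is a minimal sketch of the kind of client-side guard such bans stand in for; the patterns and the send_to_llm stub are hypothetical:

```python
# Sketch: redact likely-confidential strings before a prompt is sent to a
# third-party LLM. The patterns below are invented examples.
import re

CONFIDENTIAL_PATTERNS = [
    re.compile(r"(?i)project\s+\w+-\d+"),        # assumed internal codename format
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like identifiers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), # credentials pasted into prompts
]

def redact(prompt: str) -> str:
    for pattern in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def send_to_llm(prompt: str) -> None:
    # Stand-in for a real API call (e.g. a chat-completion request).
    print("sending:", redact(prompt))

send_to_llm("Summarize project Titan-42, api_key=sk-abc123")
# sending: Summarize [REDACTED], [REDACTED]
```

Filters like this are inevitably leaky, which helps explain why the companies above opted for outright restrictions instead.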


Apple has banned the use of OpenAI’s ChatGPT, as have Samsung, Verizon, and a who’s who of banks. Should the rest of us be concerned about how our data is getting used?

Aubrey: a 50% chance of LEV in 12–15 years, plus a variety of topics, from Ray Kurzweil to A.I. to the Singularity, and so on.


In this podcast, Aubrey de Grey discusses his work as president and CSO of the LEV Foundation and co-founder of the SENS Research Foundation in the field of longevity. He explains that the Foundation’s focus is to combine rejuvenation and damage-repair interventions for greater efficacy in postponing aging and saving lives. De Grey believes that within 12 to 15 years, there is a 50% chance of achieving longevity escape velocity, that is, postponing aging and rejuvenating the body faster than time passes.

De Grey acknowledges the limitations of traditional approaches like exercise and diet in postponing aging and expects future breakthroughs to come from high-tech approaches like skin and cell therapies. He discusses the potential of AI and machine learning in drug discovery and the possibility of using them to accelerate scientific experimentation by optimizing decisions about which experiments to run next. De Grey cautions that the quality of conclusions from AI depends on the quality and quantity of input data, and that the path toward defeating aging will require a symbiotic partnership between humans and AI. Finally, he discusses his excitement about hardware and devices like the Apple Watch and Levels for tracking blood sugar and their potential to prolong life.

As big tech companies are in a fierce race with each other to build generative AI tools, they are being cautious about giving their secrets away. In a move to prevent any of its data from ending up with competitors, Apple has restricted internal use of tools like OpenAI’s ChatGPT and Microsoft-owned GitHub’s Copilot, a new report says.

According to The Wall Street Journal, Apple is worried about its confidential data ending up with the developers of AI models that are trained on user data. Notably, OpenAI launched the official ChatGPT app on iOS on Thursday. Separately, Bloomberg reporter Mark Gurman tweeted that the chatbot has been on Apple’s list of restricted software for months.

I believe ChatGPT has been banned/on the list of restricted software at Apple for months. Obviously the release of ChatGPT on iOS today again makes this relevant.