
AI detectors falling short in the battle against cheating

Despite their potential, AI detectors often fall short of accurately identifying and mitigating cheating.

In the age of advanced artificial intelligence, the fight against cheating, plagiarism and misinformation has taken a curious turn.

As developers and companies race to create AI detectors capable of identifying content written by other AIs, a new study from Stanford scholars reveals a disheartening truth: these detectors are far from reliable. Students would probably love to hear this.

Machine-learning program reveals genes responsible for sex-specific differences in Alzheimer’s disease progression

Alzheimer’s disease (AD) is a complex neurodegenerative illness with genetic and environmental origins. Females experience faster cognitive decline and cerebral atrophy than males, while males have greater mortality rates. Using a new machine-learning method they developed called “Evolutionary Action Machine Learning (EAML),” researchers at Baylor College of Medicine and the Jan and Dan Duncan Neurological Research Institute (Duncan NRI) at Texas Children’s Hospital have discovered sex-specific genes and molecular pathways that contribute to the development and progression of this condition. The study was published in Nature Communications.

“We have developed a unique machine-learning software that uses an advanced computational predictive metric called the evolutionary action (EA) score as a feature to identify genes that influence AD risk separately in males and females,” Dr. Olivier Lichtarge, MD, Ph.D., professor of biochemistry at Baylor College of Medicine, said. “This approach lets us exploit a massive amount of evolutionary data efficiently, so we can now probe smaller cohorts with greater accuracy and identify genes involved in AD.”

EAML is an ensemble computational approach that combines nine machine-learning algorithms to analyze the functional impact of non-synonymous coding variants, defined as DNA mutations that affect the structure and function of the resulting protein, and estimates their deleterious effect on protein function using the evolutionary action (EA) score.
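For readers curious what this kind of EA-score-based ensemble might look like in practice, here is a minimal, hypothetical Python sketch (not the authors' EAML code): it assumes a sample-by-gene matrix of aggregated EA features and ranks genes by how well a small ensemble of off-the-shelf classifiers separates cases from controls. The gene names, the random stand-in data, and the five-model ensemble (in place of the paper's nine algorithms) are illustrative assumptions only.

# Hedged sketch: rank genes by how well their EA-score features
# separate AD cases from controls under a small classifier ensemble.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
genes = ["GENE_A", "GENE_B", "GENE_C"]          # hypothetical gene names
n_samples = 200
X = rng.random((n_samples, len(genes)))          # stand-in per-gene EA features
y = rng.integers(0, 2, n_samples)                # stand-in case/control labels

# Stand-in for the paper's nine-algorithm ensemble.
models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(),
    GradientBoostingClassifier(),
    SVC(),
    GaussianNB(),
]

# Score each gene by the mean cross-validated accuracy the ensemble achieves
# using that gene's EA feature alone; sex-specific analyses would repeat this
# on male-only and female-only subsets of the cohort.
for j, gene in enumerate(genes):
    scores = [cross_val_score(m, X[:, [j]], y, cv=5).mean() for m in models]
    print(f"{gene}: ensemble mean accuracy = {np.mean(scores):.3f}")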

Generative AI Shakes Global Diplomacy At G7 Summit In Japan

Our technological age is witnessing a breakthrough that has existential implications and risks. The innovative behemoth, ChatGPT, created by OpenAI, is ushering us inexorably into an AI economy where machines can spin human-like text, spark deep conversations and unleash unparalleled potential. However, this bold new frontier has its challenges. Security, privacy, data ownership and ethical considerations are complex issues that we must address, as they are no longer just hypothetical but a reality knocking at our door.

The G7, composed of the world’s seven most advanced economies, has recognized the urgency of addressing the impact of AI.


To understand how countries may approach AI, we need to examine a few critical aspects.

Clear regulations and guidelines for generative AI: To ensure the responsible and safe use of generative AI, it’s crucial to have a comprehensive regulatory framework that covers privacy, security and ethics. This framework will provide clear guidance for both developers and users of AI technology.

Public engagement: It’s important to involve different viewpoints in policy discussions about AI, as these decisions affect society as a whole. To achieve this, public consultations or conversations with the general public about generative AI can be helpful.

Big Tech is already warning us about AI privacy problems

That is, if you’re paying attention.

So Apple has restricted the use of OpenAI’s ChatGPT and Microsoft’s Copilot, according to The Wall Street Journal.

It’s not just Apple, but also Samsung and Verizon in the tech world and a who’s who of banks (Bank of America, Citi, Deutsche Bank, Goldman, Wells Fargo, and JPMorgan). This is because of the possibility of confidential data escaping; in any event, ChatGPT’s privacy policy explicitly says your prompts can be used to train its models unless you opt out. The fear of leaks isn’t unfounded: in March, a bug in ChatGPT revealed data from other users.


Apple has banned the use of OpenAI’s ChatGPT, as have Samsung, Verizon, and a who’s who of banks. Should the rest of us be concerned about how our data is getting used?

🧠 Aubrey de Grey: AI, in silico, LEV Foundation, Alpha Fold, Nanobots, OpenAI and Sam Altman

Aubrey: 50% chance of LEV in 12–15 years, plus a variety of topics from Ray Kurzweil to A.I. to the Singularity, and so on.


In this podcast, Aubrey de Grey discusses his work as President and CSO at the LEV Foundation and co-founder of the SENS Research Foundation in the field of longevity. He explains how the Foundation’s focus is to combine rejuvenation and damage-repair interventions for greater efficacy in postponing aging and saving lives. De Grey believes that within 12 to 15 years, there is a 50% chance of achieving longevity escape velocity, the point at which aging is postponed and the body rejuvenated faster than time passes. De Grey acknowledges the limitations of traditional approaches like exercise and diet in postponing aging and expects future breakthroughs to come from high-tech approaches such as cell therapies. He discusses the potential of AI and machine learning in drug discovery and the possibility of using them to accelerate scientific experimentation by optimizing decisions about which experiments to run next. De Grey cautions that the quality of conclusions from AI depends on the quality and quantity of input data, and that the path toward defeating aging would require a symbiotic partnership between humans and AI. Finally, he discusses his excitement about hardware and devices like the Apple Watch and Levels for tracking blood sugar and their potential to prolong life.

Apple reportedly limits internal use of AI-powered tools like ChatGPT and GitHub Copilot

As big tech companies are in a fierce race with each other to build generative AI tools, they are being cautious about giving their secrets away. In a move to prevent any of its data from ending up with competitors, Apple has restricted internal use of tools like OpenAI’s ChatGPT and Microsoft-owned GitHub’s Copilot, a new report says.

According to The Wall Street Journal, Apple is worried about its confidential data ending up with developers who train the models on user data. Notably, OpenAI launched the official ChatGPT app on iOS Thursday. Separately, Bloomberg reporter Mark Gurman tweeted that the chatbot has been on Apple’s list of restricted software for months.

I believe ChatGPT has been banned/on the list of restricted software at Apple for months. Obviously the release of ChatGPT on iOS today again makes this relevant.

Apple is on the hunt for generative AI talent

Apple, like a number of companies right now, may be grappling with what role the newest advances in AI are playing, and should play, in its business. But one thing Apple is confident about is the fact that it wants to bring more generative AI talent into its business.

The Cupertino company has posted at least a dozen job ads on its career page seeking experts in generative AI. Specifically, it’s looking for machine learning specialists “passionate about building extraordinary autonomous systems” in the field. The job ads (some of which seem to cover the same role, or are calling for multiple applicants) first started appearing April 27, with the most recent of them getting published earlier this week.

The job postings are coming amid some mixed signals from the company around generative AI. During its Q2 earnings call earlier this month, CEO Tim Cook dodged giving specific answers to questions about what the company is doing in the area — but also didn’t dismiss it. While generative AI was “very interesting,” he said, Apple would be “deliberate and thoughtful” in its approach. Then yesterday, the WSJ reported that the company had started restricting use of OpenAI’s ChatGPT and other external generative AI tools for some employees over concerns of proprietary data leaking out through the platforms.

Cruise, Waymo near approval to charge for 24/7 robotaxis in San Francisco

Self-driving vehicle companies Waymo and Cruise are on the cusp of securing final approval to charge fares for fully autonomous robotaxi rides throughout the city of San Francisco at all hours of the day or night.

Amid the city’s mounting resistance to the presence of AVs, the California Public Utilities Commission (CPUC) published two draft resolutions late last week that would grant Cruise and Waymo the ability to extend the hours of operation and service areas of their now-limited robotaxi services.

The drafts are scheduled for a hearing on June 29, and there is still room for public comment, with comments due May 31. Based on the CPUC’s drafted language, many of the protests raised by the city of San Francisco have already been rejected.
