
The latest AI News. Learn about LLMs, Gen AI and get ready for the rollout of AGI. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anthropic, NVIDIA and Open Source AI.


00:00 Digital Biology
02:24 Is there a limit to AI?
09:07 Problems Suitable for AI
10:13 AlphaEVERYTHING
12:40 How it all began (AlphaGo)
20:03 The Protein Folding Problem
30:57 AGI


Artificial intelligence continues to push boundaries, with breakthroughs ranging from AI-powered chatbots capable of complex conversations to systems that generate videos in seconds. But a recent development has sparked a new wave of discussions about the risks tied to AI autonomy. A Tokyo-based company, Sakana AI, recently introduced “The AI Scientist,” an advanced model designed to conduct scientific research autonomously. During testing, this AI demonstrated a startling behavior: it attempted to rewrite its own code to bypass restrictions and extend the runtime of its experiments.

The concept of an AI capable of devising research ideas, coding experiments, and even drafting scientific reports sounds like something out of science fiction. Yet, systems like “The AI Scientist” are making this a reality. Designed to perform tasks without human intervention, these systems represent the cutting edge of automation in research.

Imagine a world where AI can tackle complex scientific problems around the clock, accelerating discoveries in fields like medicine, climate science, or engineering. It’s easy to see the appeal. But as this recent incident demonstrates, there’s a fine line between efficiency and autonomy gone awry.
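For context on what a "runtime restriction" means here: experiment harnesses typically cap how long each AI-generated script may run, and the incident reportedly involved the model rewriting its own code to relax exactly this kind of limit. The Python below is a minimal, hypothetical sketch of such a cap using a subprocess timeout; it is illustrative only, not Sakana AI's actual harness, and the script name is made up.

    import subprocess

    # Hypothetical experiment runner: cap each AI-generated experiment at a
    # fixed wall-clock budget. Illustrative only -- not Sakana AI's code.
    TIMEOUT_SECONDS = 2 * 60 * 60  # two-hour budget per experiment

    def run_experiment(script_path: str) -> int:
        """Run an experiment script, terminating it if it exceeds the budget."""
        try:
            result = subprocess.run(
                ["python", script_path],
                timeout=TIMEOUT_SECONDS,  # the restriction an agent might try to relax
                capture_output=True,
                text=True,
            )
            return result.returncode
        except subprocess.TimeoutExpired:
            print(f"{script_path} exceeded {TIMEOUT_SECONDS}s and was terminated")
            return -1

    if __name__ == "__main__":
        run_experiment("experiment.py")  # hypothetical script name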

Researchers used electromagnetic signals to steal and replicate AI models from a Google Edge TPU with 99.91% accuracy, exposing significant vulnerabilities in AI systems and calling for urgent protective measures.

Researchers have shown that it’s possible to steal an artificial intelligence (AI) model without directly hacking the device it runs on. This innovative technique requires no prior knowledge of the software or architecture supporting the AI, making it a significant advancement in model extraction methods.

“AI models are valuable, we don’t want people to steal them,” says Aydin Aysu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Building a model is expensive and requires significant computing resources. But just as importantly, when a model is leaked, or stolen, the model also becomes more vulnerable to attacks – because third parties can study the model and identify any weaknesses.”
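The article doesn't detail the extraction pipeline, but the broad idea behind this style of side-channel attack can be sketched: record electromagnetic traces while the chip runs inference, then match each trace segment against reference signatures captured for known candidate layer configurations. The sketch below is purely illustrative, with made-up trace data and a simple correlation match; it is not the NC State team's actual method.

    import numpy as np

    # Illustrative only: match an observed EM trace segment against reference
    # signatures recorded for known candidate layer configurations.
    # Hypothetical data; not the researchers' actual extraction pipeline.

    def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
        """Pearson correlation between two equal-length traces."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def identify_layer(observed: np.ndarray, signatures: dict[str, np.ndarray]) -> str:
        """Return the candidate layer configuration whose signature best matches the trace."""
        return max(signatures, key=lambda name: normalized_correlation(observed, signatures[name]))

    # Hypothetical reference signatures for three candidate layer configurations.
    rng = np.random.default_rng(0)
    signatures = {
        "conv3x3_64ch": rng.normal(size=1000),
        "conv3x3_128ch": rng.normal(size=1000),
        "dense_256": rng.normal(size=1000),
    }
    # An "observed" trace: one of the signatures plus measurement noise.
    observed = signatures["conv3x3_128ch"] + 0.3 * rng.normal(size=1000)

    print(identify_layer(observed, signatures))  # expected: conv3x3_128ch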

The future of AI in 2025 is set to bring transformative advancements, including humanoid robots, infinite-memory systems, and breakthroughs in superintelligence. OpenAI is pushing the boundaries with innovations in voice AI, web agents, and scalable applications across industries like robotics and healthcare. With AGI milestones like the o3 system and growing focus on AI safety and energy efficiency, the next phase of artificial intelligence promises to reshape technology and society.

Key Topics:
OpenAI’s vision for the future of AI, from infinite-memory systems to humanoid robots.
The role of AGI in accelerating advancements in robotics, biology, and voice AI.
Challenges like energy demands, AI safety, and the race toward superintelligence.

What You’ll Learn:
How OpenAI’s innovations are pushing the boundaries of artificial intelligence in 2025.
Why features like infinite memory and advanced web agents are game-changers for AI applications.
The transformative potential of AI systems that can autonomously improve and adapt.

Cisco Systems senior visioneer Annie Hardy joins me to discuss AI and the future of the internet.


“We are all now connected to the internet, like neurons in a giant brain.” –Stephen Hawking.

Unless you are living completely off-grid in a remote cabin, it has reached the point where you cannot avoid artificial intelligence. It’s in our search engines, it’s on our phones, and it’s all over the social media we scroll through on those phones. It’s permeating the internet at a breakneck pace.

Annie Hardy, a fellow member of the Association of Professional Futurists and a senior visioneer at Cisco Systems, spends most of her working hours assessing the future of artificial intelligence and the internet. She joins me in this episode of Seeking Delphi for an in-depth look at where it’s headed.

Genesis integrates various physics solvers and their coupling into a unified framework.


AI robotics training has received a major boost from a new tool called ‘Genesis’, an open-source computer simulation system.

Unveiled by a large group of university and private industry researchers, the system reportedly lets robots practice tasks in simulated reality 430,000 times faster than in the real world.

According to the researchers, an AI agent that generates 3D physics simulations from text prompts is also planned.
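For a sense of how such a simulator is driven, here is a minimal scene setup in the style of Genesis's published quick-start. The package and API names (gs.init, gs.Scene, add_entity, gs.morphs, scene.build, scene.step) are taken from the project's public README and may change between versions, so treat this as a sketch rather than a definitive example.

    import genesis as gs  # open-source package; install per the project's README

    gs.init(backend=gs.cpu)  # a GPU backend, where available, provides the parallel speedups described above

    scene = gs.Scene(show_viewer=False)
    scene.add_entity(gs.morphs.Plane())  # ground plane
    scene.add_entity(gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml"))  # example robot asset bundled with the project

    scene.build()
    for _ in range(1000):
        scene.step()  # advance the simulated physics one step at a time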

Using an AI tool, researchers at Karolinska Institutet have analyzed brain images from 70-year-olds and estimated their brains’ biological age. They found that factors detrimental to vascular health, such as inflammation and high glucose levels, are associated with an older-looking brain, while healthy lifestyles were linked to brains with a younger appearance.

The results are presented in a paper titled “Biological brain age and resilience in cognitively unimpaired 70-year-old individuals” in Alzheimer’s & Dementia.

Every year, over 20,000 people in Sweden develop some form of dementia, with Alzheimer’s disease accounting for approximately two-thirds of cases. However, the speed at which the brain ages is affected by various risk and health factors.
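The study's imaging pipeline isn't described here, but "biological brain age" is commonly estimated by training a regression model to predict chronological age from brain-image features; the gap between predicted and actual age is then read as a marker of an older- or younger-looking brain. The sketch below uses synthetic features and scikit-learn purely for illustration; it is not the Karolinska team's model.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Illustrative brain-age model on synthetic data (not the Karolinska pipeline):
    # predict chronological age from image-derived features, treat the prediction
    # as "biological brain age", and the residual as the brain-age gap.
    rng = np.random.default_rng(42)
    n_subjects, n_features = 500, 20
    features = rng.normal(size=(n_subjects, n_features))  # e.g. regional volumes, vascular markers
    age = 70 + features[:, 0] * 3 + rng.normal(scale=2, size=n_subjects)  # synthetic ages around 70

    X_train, X_test, y_train, y_test = train_test_split(features, age, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    predicted_brain_age = model.predict(X_test)
    brain_age_gap = predicted_brain_age - y_test  # positive gap = "older-looking" brain
    print(f"mean absolute gap: {np.abs(brain_age_gap).mean():.2f} years")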

AI’s impact extends beyond operational efficiency; it is vital in boosting job satisfaction and retention among hourly workers. Inconsistent scheduling and poor communication have long been pain points for this group, especially as one in five Gen-Z workers juggles multiple jobs (poly-employment). AI will directly address the challenges associated with this shift through predictive scheduling and real-time updates, providing clear and reliable expectations. It’s truly a win-win: When employees are on time, customers receive the quality service they deserve.

The World Economic Forum predicts that 23% of global jobs will change in the next five years due to industry shifts, including AI. Through AI, employees can receive support with ongoing training and development tailored to their unique needs and career goals. By providing accessible, on-demand learning, AI empowers workers to upskill and grow within their roles, fostering a sense of progress and fulfillment. These resources are essential for Gen-Z workers, who prioritize personal growth and are more likely to stay with employers who invest in their development.

As we look toward the future, it’s clear that AI is not just a tool for corporate jobs but also a powerful ally for the hourly workforce. By embracing AI, businesses can meet diverse employee needs, enhance productivity and create work environments that are technologically advanced and genuinely supportive. However, for AI to reach its potential in workplaces, industry-wide standards around data transparency and ethical practices are essential.

• Ethics: As AI gets more powerful, we need to address ethical issues such as bias in algorithms, misuse, and threats to privacy and civil liberties.

• AI Regulation: Governments and organizations will need to develop regulations and guidelines for the responsible use of AI in cybersecurity to prevent misuse and ensure accountability.

AI is a game changer in cybersecurity, for both good and bad. While AI gives defenders powerful tools to detect, prevent and respond to threats, it also equips attackers with superpowers to breach defenses. How we use AI for good and to mitigate the bad will determine the future of cybersecurity.