Ivanti EPMM zero-day flaws enabled cyberattacks on Dutch, EU, and Finnish government systems, exposing employee contact and device data.
Are we chasing the wrong goal with Artificial General Intelligence, and missing the breakthroughs that matter now?
On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.
Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company’s core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center’s Computational Sciences Division, where he served as NASA’s senior computer scientist and received the NASA Exceptional Achievement Award in 2001. He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach, the world’s most widely used textbook in the field of artificial intelligence.
Peter sits down with Geoff to separate fact from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today’s models are already “general,” and what matters most: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. This conversation explores the impact of AI on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype, but by aligning AI to business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.
In this episode:
00:00 Intro.
01:00 How AI evolved since Artificial Intelligence: A Modern Approach.
03:00 Is AGI already here? Norvig’s take on general intelligence.
06:00 The surprising progress in large language models.
08:00 Evolution vs. revolution.
10:00 Making AI safer and more reliable.
12:00 Lessons from social media and unintended consequences.
15:00 The real AI risks: misinformation and misuse.
18:00 Inside Stanford’s Human-Centered AI Institute.
20:00 Regulation, policy, and the role of government.
22:00 Why AI may need an Underwriters Laboratory moment.
24:00 Will there be one “winner” in the AI race?
26:00 The open-source dilemma: freedom vs. safety.
28:00 Can AI improve cybersecurity more than it harms it?
30:00 “Teach Yourself Programming in 10 Years” in the AI age.
33:00 The speed paradox: learning vs. automation.
36:00 How AI might (finally) change productivity.
38:00 Global economics, China, and leapfrog technologies.
42:00 The job market: faster disruption and inequality.
45:00 The social safety net and future of full-time work.
48:00 Winners, losers, and redistributing value in the AI era.
50:00 How CEOs should really approach AI strategy.
52:00 Why hiring a “PhD in AI” isn’t the answer.
54:00 The democratization of AI for small businesses.
56:00 The future of IT and enterprise functions.
57:00 Advice for staying relevant as a technologist.
59:00 A realistic optimism for AI’s future.
#ai #agi #humancenteredai #futureofwork #aiethics #innovation.
How Elon plans to launch a terawatt of GPUs into space.
Elon Musk plans to launch a massive 1 terawatt of GPU computing power into space to advance AI and robotics and to make humanity multi-planetary, while ensuring responsible use and production.
Questions to inspire discussion:
Space-Based AI Infrastructure.
Q: When will space-based data centers become economically superior to Earth-based ones? A: Space data centers will be the most economically compelling option in 30–36 months due to 5x more effective solar power (no batteries needed) and regulatory advantages in scaling compared to Earth.
☀️ Q: How much cheaper is space solar compared to ground solar? A: Space solar is 10x cheaper than ground solar because it requires no batteries and is 5x more effective, while Earth scaling faces tariffs and land/permit issues.
Q: What solar production capacity are SpaceX and Tesla planning? A: SpaceX and Tesla plan to produce 100 GW/year of solar cells for space, manufacturing from raw materials to finished cells in-house.
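The claimed economics above can be sanity-checked with a back-of-envelope calculation. This sketch is purely illustrative: only the ratios quoted in the text (5x more effective solar, no batteries needed, roughly 10x cheaper overall) come from the source; every dollar figure and capacity factor below is an assumption chosen to show how those ratios could compose, not a figure from SpaceX or Tesla.

```python
def annual_kwh(panel_kw: float, capacity_factor: float) -> float:
    """Energy delivered per year by a panel of the given rated power."""
    return panel_kw * capacity_factor * 8760  # 8760 hours per year

# Assumed capacity factors: ground solar ~20%; orbital solar in near-constant
# sunlight ~100% -- i.e. the "5x more effective" claim from the text.
ground_kwh = annual_kwh(1.0, 0.20)
space_kwh = annual_kwh(1.0, 1.00)

# Hypothetical capital costs per rated kW: ground needs panel plus battery
# storage for overnight; space needs panel plus amortized launch cost.
ground_cost = 1000 + 1500   # panel + batteries ($/kW, assumed)
space_cost = 1000 + 250     # panel + launch ($/kW, assumed)

cost_ground = ground_cost / ground_kwh   # $ per annual kWh of capacity
cost_space = space_cost / space_kwh
advantage = cost_ground / cost_space

print(f"ground: ${cost_ground:.4f} per annual kWh")
print(f"space:  ${cost_space:.4f} per annual kWh")
print(f"space advantage: {advantage:.1f}x")
```

With these assumed inputs, the 5x capacity factor and the dropped battery cost multiply out to roughly the 10x advantage the text claims; different cost assumptions would of course shift the ratio.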
Maximizing Tumor Resection and Managing Cognitive Attentional Outcomes: Measures of Impact of Awake Surgery in Glioma Treatment, by Zigiotto et al. at “S. Chiara” University-Hospital, Azienda Provinciale per i Servizi Sanitari, Trento, Italy. Congress of Neurological Surgeons (CNS)
including attention. Understanding how awake surgery (AwS) and asleep surgery (AsS) affect attention is crucial, given its pivotal role in supporting various cognitive functions.
METHODS:
We conducted a retrospective analysis on 64 glioma patients treated with AwS or AsS. Attention was assessed with visual search tasks and Trail Making Test Part A before and 1 week and 1 month after surgery. Volumetric T1-weighted and T2/Fluid Attenuated Inversion Recovery MRI sequences before and after surgery were used to delineate the lesion and the surgical cavity. The extent of resection was calculated to determine supramaximal resection for both contrast-enhanced and non–contrast-enhanced tumor regions.
RESULTS:
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered government agencies to patch their systems against a five-year-old GitLab vulnerability that is actively being exploited in attacks.
GitLab patched this server-side request forgery (SSRF) flaw (tracked as CVE-2021-39935) in December 2021, saying it could allow unauthenticated attackers to access the CI Lint API, which is used to simulate pipelines and validate CI/CD configurations.
“When user registration is limited, external users that aren’t developers shouldn’t have access to the CI Lint API,” the company said at the time.
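To make the attack surface concrete, here is a minimal sketch of how the CI Lint API mentioned above is normally used: a client POSTs a `.gitlab-ci.yml` to the lint endpoint and gets a validation verdict back without running a pipeline. The instance URL is a placeholder, and the global `/api/v4/ci/lint` endpoint shown here is the historical one; GitLab has since steered users toward the project-scoped `/projects/:id/ci/lint` variant.

```python
import json

# A hypothetical pipeline configuration to validate.
CI_CONFIG = """\
stages: [test]
unit-tests:
  stage: test
  script:
    - echo "running tests"
"""

def build_lint_request(base_url: str, ci_yaml: str) -> tuple[str, str]:
    """Build (url, json_body) for a CI Lint validation call."""
    url = f"{base_url.rstrip('/')}/api/v4/ci/lint"
    body = json.dumps({"content": ci_yaml})
    return url, body

url, body = build_lint_request("https://gitlab.example.com", CI_CONFIG)
print(url)
# Sending the request would look like:
#   requests.post(url, data=body,
#                 headers={"Content-Type": "application/json"})
# The December 2021 fix restricted who may reach this endpoint when user
# registration is limited, closing the SSRF path described above.
```

The sketch stops short of making a network call; it only shows the request shape an attacker or legitimate user would send to the endpoint the advisory covers.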
A new threat actor tracked as Amaranth Dragon, linked to the Chinese state-sponsored APT41 operations, exploited the CVE-2025-8088 vulnerability in WinRAR in espionage attacks on government and law enforcement agencies.
The hackers combined legitimate tools with the custom Amaranth Loader to deliver encrypted payloads from command-and-control (C2) servers hidden behind Cloudflare infrastructure, improving both targeting precision and stealth.
According to researchers at cybersecurity company Check Point, Amaranth Dragon targeted organizations in Singapore, Thailand, Indonesia, Cambodia, Laos, and the Philippines.
#artificialintelligence #tech #government #quantum #innovation #federal #ai
By Chuck Brooks, president of Brooks Consulting International
In 2026, government technological innovation has reached a key turning point. After years of modernization plans, pilot projects and gradual adoption, government leaders are increasingly incorporating artificial intelligence and quantum technologies directly into mission-critical capabilities. These technologies are becoming essential infrastructure for economic competitiveness, national security and scientific advancement rather than mere scholarly curiosities.
We are seeing a deliberate shift in the federal landscape from isolated testing to the planned implementation of emerging technology across the whole of government. This evolution reflects not only technological momentum but also policy leadership, public-private collaboration and expanded industrial capability.