Toggle light / dark theme

Large language models prioritize helpfulness over accuracy in medical contexts, finds study

Large language models (LLMs) can store and recall vast quantities of medical information, but their ability to process this information in rational ways remains variable. A new study led by investigators from Mass General Brigham demonstrated a vulnerability in that LLMs are designed to be sycophantic, or excessively helpful and agreeable, which leads them to overwhelmingly fail to appropriately challenge illogical medical queries despite possessing the information necessary to do so.

Findings, published in npj Digital Medicine, demonstrate that targeted training and fine-tuning can improve LLMs’ abilities to respond to illogical prompts accurately.

“As a community, we need to work on training both patients and clinicians to be safe users of LLMs, and a key part of that is going to be bringing to the surface the types of errors that these models make,” said corresponding author Danielle Bitterman, MD, a faculty member in the Artificial Intelligence in Medicine (AIM) Program and Clinical Lead for Data Science/AI at Mass General Brigham.

Google is powering Belgium’s digital future with a two-year €5 billion investment in AI infrastructure

Google is investing an additional €5 billion in Belgium over the next two years to expand its cloud and AI infrastructure. This includes expansions of our data center campuses in Saint-Ghislain and will add another 300 full time jobs. We’ve also announced new agreements with Eneco, Luminus and Renner which will support the development of new onshore wind farms and support the grid with clean energy.

Our commitment goes beyond infrastructure. We’re also equipping Belgians with the skills needed to thrive in an AI-driven economy, at no cost and will fund non-profits to provide free, practical AI training for low-skilled workers.

This is an extraordinary time for European innovation and its digital and economic future. Google is deepening its roots in Belgium and investing in its residents to unlock significant economic opportunities for the country, helping to ensure it remains a leader in technology and AI.

The AI Safety Expert: These Are The Only 5 Jobs That Will Remain In 2030! — Dr. Roman Yampolskiy

WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse…or even World War III.

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’

He explains:
⬛How AI could release a deadly virus.
⬛Why these 5 jobs might be the only ones left.
⬛How superintelligence will dominate humans.
⬛Why ‘superintelligence’ could trigger a global collapse by 2027
⬛How AI could be worse than nuclear weapons.
⬛Why we’re almost certainly living in a simulation.

00:00 Intro.
02:28 How to Stop AI From Killing Everyone.
04:35 What’s the Probability Something Goes Wrong?
04:57 How Long Have You Been Working on AI Safety?
08:15 What Is AI?
09:54 Prediction for 2027
11:38 What Jobs Will Actually Exist?
14:27 Can AI Really Take All Jobs?
18:49 What Happens When All Jobs Are Taken?
20:32 Is There a Good Argument Against AI Replacing Humans?
22:04 Prediction for 2030
23:58 What Happens by 2045?
25:37 Will We Just Find New Careers and Ways to Live?
28:51 Is Anything More Important Than AI Safety Right Now?
30:07 Can’t We Just Unplug It?
31:32 Do We Just Go With It?
37:20 What Is Most Likely to Cause Human Extinction?
39:45 No One Knows What’s Going On Inside AI
41:30 Ads.
42:32 Thoughts on OpenAI and Sam Altman.
46:24 What Will the World Look Like in 2100?
46:56 What Can Be Done About the AI Doom Narrative?
53:55 Should People Be Protesting?
56:10 Are We Living in a Simulation?
1:01:45 How Certain Are You We’re in a Simulation?
1:07:45 Can We Live Forever?
1:12:20 Bitcoin.
1:14:03 What Should I Do Differently After This Conversation?
1:15:07 Are You Religious?
1:17:11 Do These Conversations Make People Feel Good?
1:20:10 What Do Your Strongest Critics Say?
1:21:36 Closing Statements.
1:22:08 If You Had One Button, What Would You Pick?
1:23:36 Are We Moving Toward Mass Unemployment?
1:24:37 Most Important Characteristics.

Follow Dr Roman:
X — https://bit.ly/41C7f70
Google Scholar — https://bit.ly/4gaGE72

You can purchase Dr Roman’s book, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’, here: https://amzn.to/4g4Jpa5

Creator behind AI actress responds to backlash: ‘She is not a replacement for a human being’

Artificial people may put a lot of actors out of work.


Tilly Norwood looks and sounds real, but she’s not real at all.

Created by Eline Van Der Velden, the CEO of the AI production company Particle6, the “actress” has garnered interest from studios with talent agents eyeing to sign her.

Variety reports, Van Der Velden explained at the Zurich Summit that studio interest has spiked since Tilly’s launch with agency representation expected soon. If signed, she would be one of the first AI-generated actresses to have talent representation.

Human Flourishing In The Age Of AI And Robots — The Futurists X Summit 2025

See my Comment below for a link to David Orban’s 20 minute talk.


In this keynote, delivered at The Futurists X Summit, on September 22 in Dubai, David Orban maps how AI and humanoid robotics shift us from steady exponential progress to an acceleration of acceleration—what he calls the Jolting Technologies Hypothesis. He argues we’re not in a zero-sum economy; as capability compounds and doubling times shrink, we unlock new degrees of freedom for individuals, firms, and society. The challenge is to steer that power with clear narratives, robust safety, and deliberate design of work, value, and purpose.

You’ll hear:
• Why narratives (optimism vs. doom) shape which futures become real.
• How shortening doubling times in AI capabilities pull forward timelines once thought 20–30 years out.
• Why trust in AI is task-relative: if +5% isn’t enough, aim for 10× reliability.
• The coming phase transformation as intelligence becomes infrastructure (homes, mobility, industry).
• Concrete social questions (e.g., organ donation post–road-death decline) that demand AI-assisted governance.
• Why the nature of work will change: from jobs as status to human aspiration as value.

Key ideas:
• Humanoid robots at scale: rapid iteration, non-fragile recovery, and human-complementary performance.
• Designing agency: go from idea → action with near-instant execution; experiment, learn, and iterate fast.
• From zombies to luminaries: use newfound freedom to architect lives worth living.

Resources & Links:

FinWise insider breach impacts 689K American First Finance customers

FinWise Bank is warning on behalf of corporate customers that it suffered a data breach after a former employee accessed sensitive files after the end of their employment.

“On May 31, 2024, FinWise experienced a data security incident involving a former employee who accessed FinWise data after the end of their employment,” reads a data breach notification sent by FinWise on behalf of American First Finance (AFF).

American First Finance (AFF) is a company that offers consumer financing products, including installment loans and lease-to-own programs, for a diverse range of products and services. Customers use AFF to apply for and manage the loans, with the company handling the services, account setup, repayment process, and customer support.

Mo Gawdat on AI, ethics & machine mastery: How Artificial Intelligence will rule the world

Mo Gawdat warns that AI will soon surpass human intelligence, fundamentally changing society, but also believes that with collective action, ethical development, and altruistic leadership, humans can ensure a beneficial future and potentially avoid losing control to AI

## Questions to inspire discussion.

AI’s Impact on Humanity.

🤖 Q: How soon will AI surpass human intelligence? A: According to Mo Gawdat, AI will reach AGI by 2026, with intelligence measured in thousands compared to humans, making human intelligence irrelevant within 3 years.

🌍 Q: What potential benefits could AI bring to global issues? A: 12% of world military spending redirected to AI could solve world hunger, provide universal healthcare, and end extreme poverty, creating a potential utopia.

Preparing for an AI-Driven Future.

/* */