
Feral AI gossip with the potential to cause damage and shame will become more frequent, researchers warn

“Feral” gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage and shame, humiliation, anxiety, and distress, researchers have warned.

Chatbots like ChatGPT, Claude, and Gemini don’t just make things up—they generate and spread gossip, complete with negative evaluations and juicy rumors that can cause real-world harm, according to new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.

The research is published in the journal Ethics and Information Technology.

THE BRAVE AND THE COWARDS — SRI Newsletter December 2025

As the geopolitical climate shifts, we increasingly hear warmongering pronouncements that tend to resurrect popular sentiments we naïvely believed had been buried by history. Among these is the claim that Europe is weak and cowardly, unwilling to cross the threshold between adolescence and adulthood. Maturity, according to this narrative, demands rearmament and a head-on confrontation with the challenges of the present historical moment. Yet beneath this rhetoric lies a far more troubling transformation.

We are witnessing a blatant attempt to replace the prevailing moral framework—until recently ecumenically oriented toward a passive and often regressive environmentalism—with a value system founded on belligerence. This new morality defines itself against “enemies” of presumed interests, whether national, ethnic, or ideological.

Those who expected a different kind of shift—one that would abandon regressive policies in favor of an active, forward-looking environmentalism—have been rudely awakened. The self-proclaimed revolutionaries sing an old and worn-out song: war. These new “futurists” embrace a technocratic faith that goes far beyond a legitimate trust in science and technology—long maligned during the previous ideological era—and descends into open contempt for human beings themselves, now portrayed as redundant or even burdensome in the age of the supposedly unstoppable rise of artificial intelligence.

Neutrality isn’t a safe strategy on controversial issues, research shows

Researchers Rachel Ruttan and Katherine DeCelles of the University of Toronto’s Rotman School of Management are anything but neutral on neutrality. The next time you’re tempted to play it safe on a hot-button topic, their evidence-based advice is to consider saying what you really think.

That’s because their recent research, based on more than a dozen experiments with thousands of participants, reveals that people take a dim view of others’ professed neutrality on controversial issues, rating them just as morally suspect as those expressing an opposing viewpoint, if not worse.

“Neutrality gives you no advantage over opposition,” says Prof. Ruttan, an associate professor of organizational behavior and human resource management with an interest in moral judgment and prosocial behavior. “You’re not pleasing anyone.”

Inequalities exist in even the most egalitarian societies, anthropologists find

There is no such thing as a society where everyone is equal. That is the key message of new research that challenges the romantic ideal of a perfectly egalitarian human society.

Anthropologist Duncan Stibbard-Hawkes and his colleague Chris von Rueden reviewed extensive evidence, including ethnographic accounts and detailed field observations, from contemporary groups often viewed as egalitarian, that is, societies where everyone is equal in power, wealth, and status.

These included the Tanzanian Hadza, the Malay Batek, and the Kalahari !Kung. They were prompted by general confusion over how to define “egalitarianism” and by the desire to debunk the idea of the “noble savage,” which holds that non-industrialized peoples live moral, peaceful lives close to nature.

Billionaires are building a future without humans

AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn’t have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI…

https://keepthefuturehuman.ai

Chapters:
0:00 – The AI Race
1:31 – Extinction Risk
2:35 – AGI When?
4:08 – AI Will Take Power
5:12 – Good News
6:52 – Bad News
8:11 – More Money, More Capability
10:10 – AI-Assisted Alignment?
11:32 – Obedient vs Sovereign AI
12:53 – AI Is a New Lifeform

Original Videos:
• Keep the Future Human (with Anthony Aguirre)
• AI Ethics Series | Ethical Horizons in Adv…

Editing:
https://zeino.tv/

Nick Bostrom: What Happens When AI Evolves Faster Than Humans?

The journey “Up from Eden” could involve humanity’s growth in understanding, comprehending, and appreciating with greater love, truth, and wisdom, shaping a future worth living for.

AI is accelerating faster than human biology. What happens to humanity when the future moves faster than we can evolve?

Oxford philosopher Nick Bostrom, author of Superintelligence, says we are entering the biggest turning point in human history — one that could redefine what it means to be human.

In this talk, Bostrom explains why AI might be the last invention humans ever make, and how the next decade could bring changes that once took thousands of years in health, longevity, and human evolution. He warns that digital minds may one day outnumber biological humans — and that this shift could change everything about how we live and who we become.

Superintelligence will force us to choose what humanity becomes next.

Philip Goff — Can AI Become Conscious?


AI consciousness, whether possible or probable, has burst into public debate, raising issues that range from AI ethics and rights to AI going rogue and harming humanity. We explore diverse views and argue that the question of AI consciousness depends on one’s theory of consciousness.


Philip Goff is a British author, panpsychist philosopher, and professor at Durham University whose research focuses on the philosophy of mind and consciousness, in particular on how consciousness can be made part of the scientific worldview.

Closer To Truth, hosted by Robert Lawrence Kuhn and directed by Peter Getzels, presents the world’s greatest thinkers exploring humanity’s deepest questions. Discover fundamental issues of existence. Engage new and diverse ways of thinking. Appreciate intense debates. Share your own opinions. Seek your own answers.

Lifeboat Foundation Guardian Award 2025: Professor Roman V. Yampolskiy

The Lifeboat Foundation Guardian Award is annually bestowed upon a respected scientist or public figure who has warned of a future fraught with dangers and encouraged measures to prevent them.

This year’s winner is Professor Roman V. Yampolskiy. Roman coined the term “AI safety” in a 2011 publication titled *Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach*, presented at the Philosophy and Theory of Artificial Intelligence conference in Thessaloniki, Greece, and is recognized as a founding researcher in the field.

Roman is known for his groundbreaking work on AI containment, AI safety engineering, and the theoretical limits of artificial intelligence controllability. His research has been cited by over 10,000 scientists and featured in more than 1,000 media reports across 30 languages.

Watch his interview on *The Diary of a CEO* at https://www.youtube.com/watch?v=UclrVWafRAI, which has already received over 11 million views on YouTube alone. The Singularity has begun; please pay attention to what Roman has to say about it!

Professor Roman V. Yampolskiy, who coined the term “AI safety,” is the winner of the 2025 Guardian Award.

Ethics of neurotechnology: UNESCO adopts the first global standard in this field

Today UNESCO’s Member States took the final step towards adopting the first global normative framework on the ethics of neurotechnology. The Recommendation, which will enter into force on November 12, establishes essential safeguards to ensure that neurotechnology contributes to improving the lives of those who need it the most, without jeopardizing human rights.
