
Joscha Bach & Anders Sandberg

Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!

0:00 Intro.
0:37 What is consciousness? Phenomenology — functionalism & panpsychism.
1:54 Causal boundaries — the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.
3:20 Minds are not states — they are processes. We don’t see causal filtering in tables.
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.
9:49 Methodological humility about armchair philosophy of mind.
12:41 Putnam-style Brain-in-a-vat — and why standard objections to AI minds fall flat.
16:37 Is sentience required (or desired) for not just moral competence in AI, but moral motivation as well?
22:35 Why stepping outside yourself is powerful — seeing.
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What’s still missing.
28:16 AI, hybrid minds, and the limits of human augmentation.
32:32 Can minds be extended — in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough.
39:41 Why AI is so data-hungry — and why better algorithms must exist.
43:39 Why better representations matter more than raw compute (grokking was surprising).
48:46 How babies build a world model from touch and perception.
51:05 What comes after copilots: agent teams, multimodality and new AI workflows.
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic.
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI.
1:05:43 How AI changes the way we think and create.
1:08:10 What happens when AI starts shaping human relationships.
1:11:18 Why feeling in control can matter more than being right.
1:12:58 Why intelligence without wisdom is very dangerous.
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem.
1:29:47 Can AI become more moral than us (humans)? And if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved.
1:34:31 Traversing the landscape of norms (values).
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries.
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan.
1:59:36 Will superintelligences converge into a cosmic singleton?

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!
Buy me a coffee? https://buymeacoffee.com/tech101z.

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards,
Adam Ford
Science, Technology & the Future — #SciFuture — http://scifuture.org


Space Weather Could Be Hiding Alien Signals

Dr. Vishal Gajjar: “If a signal gets broadened by its own star’s environment, it can slip below our detection thresholds, even if it’s there, potentially helping explain some of the radio silence we’ve seen in technosignature searches.”


Why haven’t we detected radio signals from an extraterrestrial intelligence, also called technosignatures? A recent study published in The Astrophysical Journal addresses this question, with a team of scientists investigating potential explanations for the continued silence. The study could help scientists and the public better understand both the shortcomings of current searches for intelligent life beyond Earth and the ways those searches can be improved.

For the study, the researchers used a series of computer models to simulate how radio signals leaving extrasolar star systems could be influenced by various factors, particularly space weather from the host star. The work comes as SETI and other researchers worldwide continue to come up empty in the search for technosignatures. The goal was to identify potential reasons for that silence while putting constraints on both how and where to search.

In the end, the researchers determined that space weather alters outgoing radio signals by dispersing them, rather than leaving them as a fixed, narrow beam. They also found that M-dwarf stars, which are smaller and cooler than our Sun and constitute approximately 75 percent of the stars in the Milky Way Galaxy, are prime targets for technosignature searches, because their space weather is far more active than that of Sun-like stars and disperses radio signals more strongly.
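The detection problem described above comes down to a simple effect: broadening spreads a fixed amount of transmitted power over a wider bandwidth, so the peak seen in any single frequency channel drops even though the total power on the sky is unchanged. A minimal sketch of that idea, assuming a toy uniform-smearing model (the function name, numbers, and threshold are illustrative, not taken from the paper):

```python
def peak_after_broadening(total_power, width_hz, channel_hz=1.0):
    """Peak spectral density (power per Hz) when a signal of fixed
    total power is smeared uniformly over `width_hz`, a toy stand-in
    for scattering in the host star's plasma environment."""
    # The signal cannot appear narrower than one receiver channel.
    effective_width = max(width_hz, channel_hz)
    return total_power / effective_width

# Same transmitted power, two scattering regimes:
narrow = peak_after_broadening(total_power=1.0, width_hz=1.0)    # 1.0 per Hz
broad = peak_after_broadening(total_power=1.0, width_hz=100.0)   # 0.01 per Hz

# A fixed per-channel detection threshold can miss the broadened
# signal even though nothing about the transmitter changed.
threshold = 0.1
print(narrow >= threshold, broad >= threshold)  # True False
```

This is why, per Dr. Gajjar’s quote above, a signal broadened by its star’s environment can slip below detection thresholds even if it is there.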

Lunar regolith simulant used to grow chickpeas

Dr. Sara Santos: “The research is about understanding the viability of growing crops on the moon. How do we transform this regolith into soil? What kinds of natural mechanisms can cause this conversion?” https://www.labroots.com/trending/space/30294/lunar-regolith…hickpeas-2


How will astronauts grow food during long-term missions to the Moon? A recent study published in Scientific Reports addresses this question, with a team of scientists investigating the prospect of growing crops in lunar regolith. The study could help scientists, mission planners, engineers, and astronauts develop new methods for growing food on the Moon, techniques that could later be applied when humans go to Mars.

For the study, the researchers grew chickpeas in simulated lunar regolith (often mistakenly called “soil”) combined with fungi, the latter used to test plant stress levels, decrease toxins, and enhance the regolith–fungi mixture. The team tested a variety of mixtures, ranging from 25 to 100 percent regolith simulant, both with and without the fungi. The goal was to assess the plausibility of growing food on the Moon under climate-controlled conditions using lunar regolith and Earth-based products. In the end, the researchers found the most promising mixture to be 75 percent regolith simulant with fungi.

AI Research Symposium: The Next Frontiers | Keynotes by Demis Hassabis, Yoshua Bengio & Yann LeCun

Welcome to the Research Symposium on Enabling AI at Nation Scale, hosted by the Ministry of Electronics and Information Technology (MeitY).

This landmark event brings together the world’s leading pioneers in Artificial Intelligence to discuss the future of discovery, engineering, and national infrastructure. Featuring keynote addresses from Turing Award winners and industry visionaries, the symposium explores how AI acts as a catalyst for scientific breakthroughs.

Abstract: Glioblastoma remains profoundly resistant to current immunotherapeutic strategies.

Here, Fanghui Lu & team report that OLIG2, a master transcription factor in glioblastoma stem cells, enables immune evasion by suppressing CXCL10, and that targeting OLIG2 overcomes immunotherapy resistance and improves survival.


1. Department of Cancer Center, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China.
2. Department of Neurosurgery, Key Laboratory of Major Brain Disease and Aging Research (Ministry of Education), The First Affiliated Hospital of Chongqing Medical University, Chongqing, China.
3. School of Basic Medical Sciences, Chongqing Medical University, Chongqing, China.

The homogenizing effect of large language models on human expression and thought

AI chatbots are homogenizing human expression and risk reducing humanity’s collective wisdom, computer scientists and psychologists say. http://spkl.io/6181AI6Jh.

Trends in Cognitive Sciences.


Cognitive diversity, reflected in variations of language, perspective, and reasoning, is essential to creativity and collective intelligence. This diversity is rich and grounded in culture, history, and individual experience. Yet, as large language models (LLMs) become deeply embedded in people’s lives, they risk standardizing language and reasoning. We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts.

Robust Mouse Rejuvenation: Breaking the Ceiling of Longevity Research

For decades, the field of biogerontology has largely focused on a single strategy: manipulating metabolism to slow down the rate at which we age. While approaches like caloric restriction have produced fascinating results in short-lived organisms like worms and flies, they have shown clear limits in mammals. Slowing the accumulation of damage does not remove the damage that is already there. It merely delays (not prevents) the onset of disease, particularly when applied late in life.
