
AI is homogenizing human expression and thought, computer scientists and psychologists say

AI chatbots are standardizing how people speak, write, and think. If this homogenization continues unchecked, it risks reducing humanity’s collective wisdom and ability to adapt, computer scientists and psychologists argue in an opinion paper published in Trends in Cognitive Sciences.

They say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets, not only to help preserve human cognitive diversity, but also to improve chatbots’ reasoning abilities.
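Homogenization of expression can be made measurable. As a minimal illustrative sketch (not from the paper), two simple proxies are lexical diversity within a text and vocabulary overlap across texts — if many people lean on the same chatbot phrasing, overlap rises:

```python
# Sketch: two toy measures of textual homogenization (illustrative only).

def type_token_ratio(text: str) -> float:
    """Fraction of distinct words; lower values = less lexical variety."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def mean_pairwise_jaccard(texts: list[str]) -> float:
    """Average vocabulary overlap between texts; higher = more homogeneous."""
    vocabs = [set(t.lower().split()) for t in texts]
    sims = []
    for i in range(len(vocabs)):
        for j in range(i + 1, len(vocabs)):
            union = vocabs[i] | vocabs[j]
            sims.append(len(vocabs[i] & vocabs[j]) / len(union) if union else 0.0)
    return sum(sims) / len(sims) if sims else 0.0

texts = ["the cat sat on the mat",
         "a dog ran in the park",
         "the cat sat on the mat"]
print(mean_pairwise_jaccard(texts))
```

Real studies would use richer measures (embeddings, stylometry), but the same logic applies: track diversity within and similarity across many writers over time.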

What Happens When AI Runs the Entire Economy?

What happens when AI controls prices, jobs, markets, and growth itself? Explore the future of an economy run by machines—and what it means for work, power, and humanity.

Credits: What Happens When AI Runs the Entire Economy? Written, Produced & Narrated by: Isaac Arthur. Select imagery/video supplied by Getty Images.

Chapters
0:00 Intro — The Invisible Hand Becomes a Neural Network
2:58 What Does “Running the Economy” Actually Mean?
5:30 What Does “Running the Economy” Actually Mean?
10:19 Labor in an AI-Run Economy
14:41 Who Programs the Economy’s Values?
17:01 Government, Power, and Economic Sovereignty
20:21 So, Can Humans Stay in the Loop?
22:56 The Best-Case and Worst-Case Futures
24:53 Abolish Everything
25:57 The Last Economic Decisions We Ever Make


Human brain and AI speech recognition decode speech in similar step-by-step stages, study finds

Over the past decades, computer scientists have developed numerous artificial intelligence (AI) systems that can process human speech in different languages. However, the extent to which these models replicate the processes by which the human brain understands spoken language has not yet been clearly determined.

Researchers at Columbia University, IBM Research and the Feinstein Institutes for Medical Research recently carried out a study aimed at comparing how automatic speech recognition (ASR) systems and the human brain decode speech. Their findings, published in Nature Machine Intelligence, suggest that activity in specific brain regions while people make sense of spoken language corresponds to specific stages in the processing of speech by AI models.

“The core mystery we wanted to solve is how the human brain performs the incredible computational feat of turning raw acoustic vibrations, the sounds of speech, into discrete linguistic meaning,” Nima Mesgarani, senior author of the paper, told Tech Xplore. “We now have AI systems that match human performance in transcribing speech, but we didn’t know if they were reaching those solutions independently or if they had converged on the same strategy as our biology.”
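Comparisons like this are often made with representational similarity analysis: build a stimulus-by-stimulus dissimilarity matrix from a model layer's activations and another from a brain region's responses, then correlate them. The sketch below uses random placeholder data (it is not the authors' code) just to show the mechanics:

```python
import numpy as np

# Sketch of representational similarity analysis (RSA) with hypothetical data.
rng = np.random.default_rng(0)
n_stimuli = 20
layer_feats = rng.normal(size=(n_stimuli, 64))   # one ASR layer's activations
brain_resps = rng.normal(size=(n_stimuli, 128))  # one brain region's responses

def rdm(X):
    """Representational dissimilarity matrix: 1 - correlation between stimuli."""
    return 1.0 - np.corrcoef(X)

def rsa_score(feats, brain):
    """Correlate the upper triangles of the two RDMs (higher = more similar)."""
    iu = np.triu_indices(len(feats), k=1)
    return np.corrcoef(rdm(feats)[iu], rdm(brain)[iu])[0, 1]

print(rsa_score(layer_feats, brain_resps))
```

Sweeping this score across model layers and brain regions is one way a "stage-by-stage" correspondence, as the study reports, can be mapped out.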

Brain-inspired device could lead to faster, more energy-efficient AI hardware

A team led by engineers at the University of California San Diego has developed a new brain-inspired hardware platform that could help computer hardware keep pace with the explosive growth of artificial intelligence. By combining memory and computation on the same chip—and allowing its components to interact collectively like neurons in the brain—the brain-inspired platform improved the speed, accuracy, and energy efficiency of pattern recognition in two simulated tasks: recognizing spoken digits and detecting epileptic seizures early from brain-wave recordings.

The approach could lead to the development of compact, energy-efficient hardware for smaller AI systems such as those used in wearable health monitors, smart sensors, and other autonomous devices.

The work, published on March 9 in Nature Nanotechnology, falls within the field of neuromorphic computing, which aims to build machines that mimic how the brain processes information. The researchers emphasize that the technology is brain-inspired, rather than brain-like; it draws ideas from how neural networks interact but does not attempt to replicate the brain itself.
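The core in-memory computing idea — storing weights where the computation happens — can be sketched numerically. In a crossbar of devices, each stored weight is a conductance, and Ohm's and Kirchhoff's laws make the output currents a matrix-vector product. The numbers below are illustrative, not from the paper:

```python
import numpy as np

# Sketch: a conductance crossbar performing a matrix-vector multiply "in place".
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # device conductances = stored weights
v = rng.uniform(0.0, 1.0, size=8)        # input voltages on the columns

# Each row's output current sums voltage * conductance across its devices,
# i.e. one dot product per row: I = G @ v, computed by physics, not by a CPU.
I = G @ v
print(I)
```

The energy advantage comes from skipping the memory-to-processor data shuttling that dominates conventional hardware; the trade-off is analog noise in the device conductances.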

Plastic Responses to Single and Combined Environmental Stresses in a Highly Chemodiverse Aromatic Plant Species

🚱Plants face various environmental stresses, to which they respond in different ways. Due to climate change, it is expected that plants will encounter increased phases of drought and changes in herbivory.

🐛This study thus aimed to evaluate the intra-individual variation in responses, that is, phenotypic plasticity, to single and combined stresses, including drought and insect herbivory. The authors used plants of the aromatic species Tanacetum vulgare, which are characterized by distinct terpenoid chemotypes and metabolic fingerprints shaped by maternal origin. Clones were exposed to no stress, drought, herbivory, or a combination of both.

⚗️The impacts of these treatments were determined in terms of aboveground biomass as well as emission rates or concentrations, richness, and functional Hill diversity (FHD) of volatile organic compounds (VOCs), stored leaf and root terpenoids, and leaf metabolic fingerprints.

📊Drought resulted in lower plant aboveground biomass, VOC richness, and VOC FHD. Herbivory had no effect on biomass but increased VOC emission rates and richness, both alone and in combination with drought. Treatment significantly affected the phenotypic plasticity of aboveground biomass and VOC emission.

👉These findings highlight the importance of studying intra-individual variation in plant responses to different stresses and their combinations to fully comprehend the finely tuned chemodiversity.
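Hill diversity, the family of measures behind the study's FHD metric, has a simple closed form for a profile of relative abundances. The sketch below computes ordinary Hill numbers for a hypothetical VOC profile (the paper's functional variant additionally weights compounds by trait dissimilarity):

```python
import math

# Sketch: Hill diversity of order q for a compound-abundance profile.
def hill_diversity(abundances, q):
    """Hill number of order q: q=0 gives richness, q=1 the exponential of
    Shannon entropy, q=2 the inverse Simpson index."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1:  # limit case of the general formula
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))

voc = [40, 30, 20, 10]  # hypothetical VOC emission rates
print(hill_diversity(voc, 0), hill_diversity(voc, 1), hill_diversity(voc, 2))
```

Raising q down-weights rare compounds, which is why reporting several orders (or a functional variant) gives a fuller picture than richness alone.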



Stacked quantum materials enable precise spin control without external magnetic fields

Spintronics—a technology that harnesses the electron’s magnetic quantum states to carry information—could pave the way for a new generation of ultra-energy-efficient electronics. Yet a major challenge has been the ability to control these delicate quantum properties with sufficient precision for practical applications. By combining different quantum materials, researchers at Chalmers University of Technology have now taken a decisive step forward, achieving unprecedented control over spin phenomena. The advance opens the door to next-generation low-power data processing and memory technologies.

Data centers, cloud services, AI and connected systems account for a rapidly growing share of global energy consumption. In the quest for more energy-efficient technological solutions, spin electronics, or spintronics, has proven to be a promising approach. Instead of relying solely on the movement of electric charge, spintronics uses magnetic states to carry information. More specifically, it takes advantage of a quantum property of electrons known as spin, which makes electrons behave like tiny magnets.

“Just like a compass needle, an electron’s spin can point in one of two directions—up or down. These two directions can be used to represent digital information, in the same way today’s electronics use 0s and 1s,” explains Saroj Dash, Professor of Quantum Device Physics at Chalmers University of Technology.
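The compass-needle analogy has a direct linear-algebra form: spin-up and spin-down are the two eigenstates of the Pauli-Z operator, and each can be assigned a classical bit. A minimal sketch (illustrative, not the researchers' model):

```python
import numpy as np

# Sketch: the two Pauli-Z eigenstates as a one-bit encoding.
sigma_z = np.array([[1, 0], [0, -1]])
up = np.array([1, 0])     # spin up   -> bit 0
down = np.array([0, 1])   # spin down -> bit 1

def readout(state):
    """Measure <sigma_z>: +1 means spin up (bit 0), -1 means spin down (bit 1)."""
    expectation = state @ sigma_z @ state
    return 0 if expectation > 0 else 1

print(readout(up), readout(down))
```

The practical challenge the Chalmers work addresses is steering these states electrically, without the bulky external magnetic fields earlier devices required.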

How AI could unlock deep-sea secrets of marine life

The reef is a home and feeding ground for dozens of species that depend on it the way a woodland creature depends on trees. It has survived ice ages – but whether it will survive increasing pressures from industrial fishing, deep-sea mining and climate change is, in part, a question about data. If we don’t know it exists, how can we protect it?

A new project called Deep Vision could fundamentally transform our understanding of the deep ocean by digging into pictures and videos that have sat largely unexamined in research archives around the world. Using AI, the project can analyse thousands of hours of seafloor footage to produce the first comprehensive maps of vulnerable marine ecosystems across the entire Atlantic basin.

Over the past two decades, robotic and autonomous underwater vehicles have collected vast quantities of footage from the deep sea. This represents an extraordinary resource – a record of ecosystems that most humans will never see.
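The pipeline such a project implies — run a detector over archived frames, then aggregate sightings by dive location — can be sketched with the detector stubbed out. All names here are hypothetical placeholders, not Deep Vision's actual code:

```python
from collections import Counter

def detect_species(frame):
    """Stand-in stub for a trained image classifier."""
    return frame.get("label")  # pretend the model returned this label

def map_ecosystems(dives):
    """Aggregate per-frame detections into per-location sighting counts.
    dives: list of {'location': (lat, lon), 'frames': [{'label': ...}, ...]}"""
    sightings = {}
    for dive in dives:
        counts = Counter(detect_species(f) for f in dive["frames"])
        counts.pop(None, None)  # drop frames with no detection
        sightings.setdefault(dive["location"], Counter()).update(counts)
    return sightings

dives = [{"location": (52.1, -12.4),
          "frames": [{"label": "cold-water coral"}, {"label": None},
                     {"label": "cold-water coral"}]}]
print(map_ecosystems(dives))
```

The hard part in practice is the detector itself; once detections exist, mapping vulnerable ecosystems is an aggregation problem like this one.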

Joscha Bach & Anders Sandberg

Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!

0:00 Intro.
0:37 What is consciousness? Phenomenology — functionalism & panpsychism.
1:54 Causal boundaries — the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.
3:20 Minds are not states — they are processes. We don’t see causal filtering in tables.
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.
9:49 Methodological humility about armchair philosophy of mind.
12:41 Putnam-style Brain-in-a-vat — and why standard objections to AI minds fall flat.
16:37 Is sentience required (or desired) for not just moral competence in AI, but moral motivation as well?
22:35 Why stepping outside yourself is powerful — seeing.
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What’s still missing.
28:16 AI, hybrid minds, and the limits of human augmentation.
32:32 Can minds be extended — in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough.
39:41 Why AI is so data-hungry — and why better algorithms must exist.
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception.
51:05 What comes after copilots: agent teams, multimodality and new AI workflows.
55:32 Can AI help us discover new forms of taste and aesthetics.
59:49 Using AI to learn art history and invent a transhumanist aesthetic.
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create.
1:08:10 What happens when AI starts shaping human relationships.
1:11:18 Why feeling in control can matter more than being right.
1:12:58 Why intelligence without wisdom is very dangerous.
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere.
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem.
1:29:47 Can AI become more moral than us (humans)? and if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved.
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries.
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan.
1:59:36 Will superintelligences converge into a cosmic singleton?

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards.
Adam Ford.
Science, Technology & the Future — #SciFuture — http://scifuture.org

