
Nick Bostrom — From Superintelligence to Deep Utopia — Can We Create a Perfect Society?

Since Nick Bostrom wrote Superintelligence, AI has surged from theoretical speculation to a powerful, world-shaping reality. Progress is undeniable, yet the AI safety community remains caught in an ongoing debate between mathematical rigor and Swiss-cheese security. P(doom) debates rage on, but equally concerning is the risk of locking in negative-value futures for a very long time.

Zooming in on motivation selection, and especially indirect normativity, raises the question: is there a structured landscape of possible value configurations, or just a chaotic search for alignment?

From Superintelligence to Deep Utopia: the goal is not just avoiding catastrophe but ensuring resilience, meaning, and flourishing in a 'solved' world. In a post-instrumental, plastic utopia where humans are 'deeply redundant', can we find enduring meaning and purpose?

This is our moment to shape the future. What values will we encode? What futures will we entrench?

0:00 Highlights.
3:07 Intro.
4:15 Interview.

P.S. The background music at the start of the video is 'Eta Carinae', which I created on a Korg Minilogue XD: https://scifuture.bandcamp.com/track/.… The music at the end is 'Hedonium 1', which is guitar saturated with Strymon reverbs, delays and modulation: / hedonium-1

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Buy me a coffee? https://buymeacoffee.com/tech101z

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards, Adam Ford

The World’s First True Artificial Superintelligence | Asinoid by ASILAB

ASILAB is excited to introduce Asinoid, the world's first true artificial superintelligence built on the architecture of the human brain. It is designed to think, learn, and evolve autonomously like a living organism.

Learn more on our website: http://asilab.com.

Asinoid isn’t just another AI. Unlike today’s pre-trained, prompt-driven models and agents, Asinoid is a self-improving and proactive mind. It learns over time. It remembers. It sets its own goals. And it gets smarter by rewiring itself from within.

An Asinoid can power a fleet of autonomous drones or act as the brain inside your security system. It can drive your R&D, run your meetings, become the cognitive layer behind your SaaS product, or even co-found a company with you.

The possibilities are endless. And we want to explore them with you.

We’re opening access to pioneering companies, researchers, and developers who want to build with us. If you’re ready to create something groundbreaking, let’s get started.

Meta releases new data set, AI model aimed at speeding up scientific research

Meta released a massive trove of chemistry data on Wednesday that it hopes will supercharge scientific research; the data set is also crucial for the development of more advanced, general-purpose AI systems.

The company used the data set to build a powerful new AI model for scientists that can shorten the time it takes to create new drugs and materials.

The Open Molecules 2025 effort required 6 billion compute hours to create, and is the result of 100 million calculations that simulate the quantum mechanics of atoms and molecules in four key areas chosen for their potential impact on science.
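As a rough back-of-the-envelope check on those figures (taking the reported totals at face value; the interpretation of "compute hours" as core-hours is an assumption, and real per-calculation costs vary widely), the implied average cost per calculation works out as follows:

```python
# Back-of-the-envelope check on the reported Open Molecules 2025 totals.
# Figures taken from the article at face value; "compute hours" is assumed
# to mean core-hours, and per-calculation costs vary widely in practice.

total_compute_hours = 6e9    # "6 billion compute hours"
num_calculations = 100e6     # "100 million calculations"

hours_per_calculation = total_compute_hours / num_calculations
print(f"Average compute per calculation: {hours_per_calculation:.0f} hours")  # ~60
```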

J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning, by Chenxi Whitehouse and 6 other authors

The progress of AI is bottlenecked by the quality of evaluation, and powerful LLM-as-a-Judge models have proved to be a core solution. Improved judgment ability is enabled by stronger chain-of-thought reasoning, motivating the need to find the best recipes for training such models to think. In this work we introduce J1, a reinforcement learning approach to training such models. Our method converts both verifiable and non-verifiable prompts to judgment tasks with verifiable rewards that incentivize thinking and mitigate judgment bias. In particular, our approach outperforms all other existing 8B or 70B models when trained at those sizes, including models distilled from DeepSeek-R1. J1 also outperforms o1-mini, and even R1 on some benchmarks, despite training a smaller model. We provide analysis and ablations comparing Pairwise-J1 vs Pointwise-J1 models, offline vs online training recipes, reward strategies, seed prompts, and variations in thought length and content. We find that our models make better judgments by learning to outline evaluation criteria, comparing against self-generated reference answers, and re-evaluating the correctness of model responses.
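The paper's full training recipe is not reproduced here, but the core idea of a verifiable pairwise-judgment reward can be sketched in a few lines. The snippet below is a minimal illustration under assumptions: `judge` is a hypothetical stand-in for a J1-style model call that reasons and then returns a verdict ('A' or 'B'), and `pairwise_reward` simply checks the verdict against the known-preferred response in both orderings, which is one way to make the reward verifiable and to discourage position bias; it is not claimed to be the paper's exact reward.

```python
import random

def judge(prompt: str, response_a: str, response_b: str) -> str:
    """Stand-in for a J1-style judge model: think, then return 'A' or 'B'.

    Hypothetical placeholder; a real judge would generate a chain-of-thought
    reasoning trace and a final verdict with an LLM.
    """
    return random.choice(["A", "B"])

def pairwise_reward(prompt: str, chosen: str, rejected: str) -> float:
    """Verifiable reward for one pairwise judgment episode.

    The judge is queried in both orderings; it earns full credit only when it
    picks the known-preferred response each time, which makes the reward
    checkable against ground truth and penalizes position bias.
    """
    verdict_1 = judge(prompt, chosen, rejected)   # preferred response shown as A
    verdict_2 = judge(prompt, rejected, chosen)   # preferred response shown as B
    correct_1 = verdict_1 == "A"
    correct_2 = verdict_2 == "B"
    return (correct_1 + correct_2) / 2.0          # 1.0 only if consistent and correct

# Example: this scalar would be fed to the RL update for the judgment episode.
print(pairwise_reward("Summarize the article.", "good summary", "off-topic reply"))
```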

Tesla’s Supercomputer Will DWARF Everything

Tesla is developing a terawatt-level supercomputer at Giga Texas to enhance its self-driving technology and AI capabilities, positioning the company as a leader in the automotive and renewable energy sectors despite current challenges.

Questions to inspire discussion

Tesla’s Supercomputers.

💡 Q: What is the scale of Tesla’s new supercomputer project?

A: Tesla’s Cortex 2 supercomputer at Giga Texas aims for 1 terawatt of compute with 1.4 billion GPUs, making it 3,300x bigger than today’s top system (a quick sanity check on these figures follows after this list).

💡 Q: How does Tesla’s compute power compare to Chinese competitors?

A: Tesla’s Full Self-Driving (FSD) uses 3x more compute than Huawei, Xpeng, Xiaomi, and Li Auto combined, with BYD not yet a significant competitor.
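Taking the video's headline numbers at face value (none of them independently verified here), a quick arithmetic check shows what they imply per GPU and for today's largest systems; reading "1 terawatt of compute" as total power draw is an assumption, since the phrase conflates power with compute.

```python
# Sanity check on the video's claimed figures (taken at face value, not verified).

target_power_watts = 1e12    # "1 terawatt of compute", read here as total power draw
claimed_gpu_count = 1.4e9    # "1.4 billion GPUs"
scale_factor = 3_300         # "3,300x bigger than today's top system"

watts_per_gpu = target_power_watts / claimed_gpu_count
todays_top_system_watts = target_power_watts / scale_factor

print(f"Implied power per GPU: {watts_per_gpu:.0f} W")                               # ~714 W
print(f"Implied size of today's top system: {todays_top_system_watts / 1e6:.0f} MW")  # ~303 MW
```

Roughly 714 W per GPU is in line with a current high-end AI accelerator, and a few hundred megawatts is the right order of magnitude for today's largest AI clusters, so the claimed figures at least hang together as a power budget.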

Princeton Engineers Develop “Metabot” That Is Both a Material and a Robot

In a study that echoes scenes from the Transformers movie franchise, engineers at Princeton University have developed a material that can expand, change shape, move, and respond to electromagnetic commands like a remotely controlled robot, even though it has no motor or internal gears.
