
To validate these simulated results, PhD student Omri Cohen fabricated a series of disks from two polymer layers. The lower layer was patterned with a regular matrix and the upper one consisted of thin lines radiating out from the center. When the disks were heated and then cooled again, the matrix layer remained the same, while the upper layer contracted by a varying amount along the radial direction. This difference induced a curvature in the disk, and the team was able to replicate the simulated series of shape transitions by varying the curvature and thickness of the disks.
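The bilayer mechanism described here is analogous to the classic Timoshenko bimetal strip: a differential strain between two bonded layers produces a curvature set by the strain mismatch and the layer thicknesses. A minimal sketch of that textbook formula (the numeric values below are hypothetical, not taken from the study):

```python
def bilayer_curvature(mismatch, h1, h2, E1=1.0, E2=1.0):
    """Timoshenko's formula for the curvature of a two-layer strip.

    mismatch : differential strain between the layers (dimensionless)
    h1, h2   : layer thicknesses (m)
    E1, E2   : Young's moduli of the layers
    Returns curvature in 1/m.
    """
    h = h1 + h2   # total thickness
    m = h1 / h2   # thickness ratio
    n = E1 / E2   # stiffness ratio
    num = 6 * mismatch * (1 + m) ** 2
    den = h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))
    return num / den

# Two identical 0.5 mm layers with 1% differential contraction:
kappa = bilayer_curvature(0.01, 0.5e-3, 0.5e-3)
print(kappa)  # ~15 1/m, i.e. a radius of curvature of roughly 6.7 cm
```

Thicker disks thus bend less for the same mismatch, which is consistent with the team tuning both curvature and thickness to walk through the series of shape transitions.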

Further analysis shows that the formation of each cusp acts as a focal point for the stresses that accumulate in the petal. In older petals this localized concentration of stress inhibits growth around the cusps, producing a concave distortion on the rounded edge of the petal. “This completes a nice feedback cycle,” explains Sharon. “Simple growth first generates Mainardi-Codazzi-Peterson incompatibility, leading to a mechanical instability that forms cusps. These cusps then focus the stress, which affects the further growth of the tissue.”

Understanding the mechanical mechanisms that alter the shape of rose petals as they grow could inform the design of self-shaping materials and structures for applications like soft robotics and deployable spacecraft. “The idea is to program internal forces to enable the material to shape itself, and this work offers a new strategy for creating more localized shaping,” explains Benoît Roman of ESPCI ParisTech, an expert in shape-changing materials. “But the real value of this study is that it provides a perfect example of using physics to uncover and describe a deep and general phenomenon.”

Since Nick Bostrom wrote Superintelligence, AI has surged from theoretical speculation to a powerful, world-shaping reality. Progress is undeniable, yet the AI safety community remains caught in an ongoing debate between mathematical rigor and Swiss-cheese security. P(doom) debates rage on, but equally concerning is the risk of locking in negative-value futures for a very long time.

Zooming in on motivation selection, and especially indirect normativity, raises the question: is there a structured landscape of possible value configurations, or just a chaotic search for alignment?

From Superintelligence to Deep Utopia: the aim is not just avoiding catastrophe but ensuring resilience, meaning, and flourishing in a 'solved' world. In a post-instrumental, plastic utopia where humans are 'deeply redundant', can we find enduring meaning and purpose?

This is our moment to shape the future. What values will we encode? What futures will we entrench?

0:00 Highlights.
3:07 Intro.
4:15 Interview.

P.S. The background music at the start of the video is 'Eta Carinae', which I created on a Korg Minilogue XD: https://scifuture.bandcamp.com/track/.… The music at the end is 'Hedonium 1', which is guitar saturated with Strymon reverbs, delays and modulation: / hedonium-1

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Buy me a coffee? https://buymeacoffee.com/tech101z

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards, Adam Ford

ASILAB is excited to introduce Asinoid, the world's first true artificial superintelligence built on the architecture of the human brain, designed to think, learn, and evolve autonomously like a living organism.

Learn more on our website: http://asilab.com.

Asinoid isn’t just another AI. Unlike today’s pre-trained, prompt-driven models and agents, Asinoid is a self-improving and proactive mind. It learns over time. It remembers. It sets its own goals. And it gets smarter by rewiring itself from within.

An Asinoid can power a fleet of autonomous drones or act as the brain inside your security system. It can drive your R&D, run your meetings, become the cognitive layer behind your SaaS product, or even co-found a company with you.

The possibilities are endless. And we want to explore them with you.

We’re opening access to pioneering companies, researchers, and developers who want to build with us. If you’re ready to create something groundbreaking, let’s get started.

Meta released a massive trove of chemistry data Wednesday that it hopes will supercharge scientific research, and is also crucial for the development of more advanced, general-purpose AI systems.

The company used the data set to build a powerful new AI model for scientists that can speed up the time it takes to create new drugs and materials.

The Open Molecules 2025 effort required 6 billion compute hours to create, and is the result of 100 million calculations that simulate the quantum mechanics of atoms and molecules in four key areas chosen for their potential impact on science.

The progress of AI is bottlenecked by the quality of evaluation, and powerful LLM-as-a-Judge models have proved to be a core solution. Improved judgment ability is enabled by stronger chain-of-thought reasoning, motivating the need to find the best recipes for training such models to think. In this work we introduce J1, a reinforcement learning approach to training such models. Our method converts both verifiable and non-verifiable prompts to judgment tasks with verifiable rewards that incentivize thinking and mitigate judgment bias. In particular, our approach outperforms all other existing 8B or 70B models when trained at those sizes, including models distilled from DeepSeek-R1. J1 also outperforms o1-mini, and even R1 on some benchmarks, despite training a smaller model. We provide analysis and ablations comparing Pairwise-J1 vs Pointwise-J1 models, offline vs online training recipes, reward strategies, seed prompts, and variations in thought length and content. We find that our models make better judgments by learning to outline evaluation criteria, comparing against self-generated reference answers, and re-evaluating the correctness of model responses.
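As a rough sketch of the verifiable-reward idea the abstract describes (the function and label names here are illustrative, not the paper's actual interface), a pairwise judge can be scored against a known preference label, and position bias can be mitigated by judging both orderings of the response pair:

```python
def judge_reward(judge_fn, prompt, resp_a, resp_b, gold_winner):
    """Verifiable reward for a pairwise LLM judge.

    judge_fn    : callable (prompt, first, second) -> "first" or "second"
    gold_winner : "a" or "b", the known-better response
    Returns a reward in {0.0, 0.5, 1.0}.
    """
    # Judge both orderings to discourage position bias.
    v1 = judge_fn(prompt, resp_a, resp_b)  # a shown first
    v2 = judge_fn(prompt, resp_b, resp_a)  # b shown first
    pick1 = "a" if v1 == "first" else "b"
    pick2 = "b" if v2 == "first" else "a"
    # Full reward only if both orderings agree with the gold label.
    return 0.5 * (pick1 == gold_winner) + 0.5 * (pick2 == gold_winner)

# Toy judge that always prefers the longer response:
longer = lambda p, x, y: "first" if len(x) >= len(y) else "second"
print(judge_reward(longer, "Q?", "long detailed answer", "short", "a"))  # 1.0
```

A judge that flips its verdict depending on presentation order earns at most 0.5, so maximizing this reward pushes the model toward order-invariant judgments.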

Tesla is developing a terawatt-level supercomputer at Giga Texas to enhance its self-driving technology and AI capabilities, positioning the company as a leader in the automotive and renewable energy sectors despite current challenges.

Questions to inspire discussion.

Tesla’s Supercomputers.

💡 Q: What is the scale of Tesla’s new supercomputer project?

A: Tesla’s Cortex 2 supercomputer at Giga Texas aims for 1 terawatt of compute with 1.4 billion GPUs, making it 3,300x bigger than today’s top system.

💡 Q: How does Tesla’s compute power compare to Chinese competitors?

A: Tesla’s Full Self-Driving (FSD) uses 3x more compute than Huawei, Xpeng, Xiaomi, and Li Auto combined, with BYD not yet a significant competitor.

This material can expand, change shape, move, and respond to electromagnetic commands like a remotely controlled robot, even though it has no motor or internal gears. In a study that echoes scenes from the Transformers movie franchise, engineers at Princeton University have developed a material c