Val Kilmer Resurrected by AI: "As Deep as the Grave" Trailer Brings Late Actor Back to the Big Screen (EXCLUSIVE)
The filmmakers behind "As Deep as the Grave" have debuted the trailer for the upcoming historical drama, giving viewers a first look at the AI technology that was used to create Val Kilmer's performance.
Kilmer, who died in 2025 after battling throat cancer, was cast as Father Fintan, a Catholic priest and Native American spiritualist, but was too sick to shoot his role. With the cooperation of Kilmer's estate and his daughter Mercedes, the "As Deep as the Grave" team used generative AI to include the actor in the finished film.
Machine learning accelerates analysis of fusion materials
Tungstenâs superior performance in extreme environments makes it a leading candidate for plasma-facing components (PFCs) in fusion reactors, but the ultra-high heat can damage its microscopic structure and lead to component failure. Scanning electron microscopy (SEM) can capture and quantify these microstructure changes, but assembling a sufficiently large dataset of SEM imagery is expensive and logistically challenging.
To augment this dataset, researchers at Oak Ridge National Laboratory trained a generative machine learning model using 3,200 SEM images of tungsten samples exposed to fusion-relevant conditions. The model can generate novel SEM images with realistic microstructures and surface features, such as cracks and pores, without replicating the original images.
"This work is not about making pretty pictures, it's about capturing the statistics of real damage on these materials," said ORNL's Rinkle Juneja, the project's principal investigator. "We train our generative workflow to learn tungsten's microstructure signatures, like crack patterns, so it can generate new, statistically consistent microstructures, laying the groundwork for robust, data-driven assessment of PFC fusion materials."
Any color you like: Scientists create "any wavelength" lasers in tiny circuits for light
Computer chips that cram billions of electronic devices into a few square inches have powered the digital economy and transformed the world. Scientists may be on the cusp of launching a similar technological revolutionâthis time using light.
In a significant advance toward that goal, National Institute of Standards and Technology (NIST) scientists and collaborators have pioneered a way to make integrated circuits for light by depositing complex patterns of specialized materials onto silicon wafers. These so-called photonics chips use optical devices such as lasers, waveguides, filters and switches to shuttle light around and process information.
The new advance could provide a big boost for emerging technologies such as artificial intelligence, quantum computers and optical atomic clocks.
Microsoft pays $2.3M for cloud and AI flaws at Zero Day Quest
Microsoft has awarded $2.3 million to security researchers after receiving nearly 700 submissions during this yearâs Zero Day Quest hacking contest.
Tom Gallagher, Vice President of Engineering at Microsoft Security Response Center (MSRC), said that over 80 flaws found during the live event at Microsoft's Redmond campus were high-impact cloud and AI security vulnerabilities.
"During the 2026 live hacking event, Microsoft partnered with the global security research community, representing more than 20 countries and a wide range of professional backgrounds, from high school students to college professors," Gallagher said.
AI chatbot teaches AI "student" to love owls, even after data is scrubbed
Large language models (LLMs) can teach other algorithms unwanted traits, which can persist even when training data has been scrubbed of the original trait, according to new research published in Nature. In one example, a model seems to transmit a preference for owls to other models via hidden signals in data. The findings demonstrate that more thorough safety checks are needed when producing LLMs.
LLMs can generate datasets to train other models through a process called distillation, in which a "student" model is taught to mimic the outputs of a "teacher" model. While this process can be used to produce cheaper versions of an LLM, it is unclear which properties of the teacher model are transferred to the student.
Alex Cloud and colleagues used GPT-4.1, which was prompted to have traits unrelated to a core task (a preference for owls or certain trees, for instance), to train a student model with output consisting only of numerical data, with no references to the trait. When the resulting student was subsequently prompted, it mentioned the teacher's favorite animal or tree over 60% of the time, compared to 12% for a student trained by a teacher with no favorite animal or tree. This effect was also observed when the student was trained on a teacher's output that contained code instead of numbers.
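The distillation process described above centers on training the student to match the teacher's full output distribution, not just its top answer. A minimal sketch of the standard distillation loss (softened softmax plus KL divergence, a common formulation, not the exact setup used in the Nature study):

```python
import math

def softmax(logits, temperature=1.0):
    """Softened probabilities; a higher temperature flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this trains the student to mimic the teacher's entire output
    distribution, which is how subtle, unintended preferences can ride along.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
# A student that matches the teacher exactly incurs zero loss.
print(round(distillation_loss(teacher, teacher), 6))   # 0.0
# A mismatched student incurs a positive loss.
print(distillation_loss([0.1, 1.0, 2.0], teacher) > 0)  # True
```

Because the loss rewards matching the whole distribution, any statistical signature the teacher leaves in its outputs, even in seemingly neutral numeric data, can be absorbed by the student.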
Human Gene Editing Has Begun | George Church
We are already gene editing humans. You just haven't noticed.
George Church, Harvard geneticist and Human Genome Project pioneer, explains why CRISPR wasnât the real breakthrough, how multiplex gene editing unlocked organ transplants and de-extinction, and why aging will likely require rewriting many genes at once.
Hosted by Mgoes – https://twitter.com/m_goes_distance
Brought to you by SuperHuman Fund – https://superhuman.fund/
0:00 – Gene Editing: Mammals → Humans
8:36 – Germline vs Somatic
14:56 – Modified Humans Are Already Here
18:50 – Enhancing Healthy Humans
25:00 – Aging Therapies vs Cognitive Enhancement
30:20 – Embryo Selection
38:10 – Is the US Losing to the UAE?
42:33 – Biotech Failures
49:31 – Next Dire Wolf Moment
54:21 – AI x Science
1:02:07 – Synthesizing Entire Genomes
The Accelerate Bio Podcast explores the future of humanity in the age of Artificial Intelligence. Subscribe for deep-dive conversations with founders, scientists, and investors shaping AI, biotechnology, and human progress.
This episode discusses George Church, gene editing, CRISPR, human enhancement, longevity, aging, embryo selection, synthetic biology, multiplex editing, AI biotech.
Gemini Robotics ER 1.6: Enhanced Embodied Reasoning
Today, we're introducing Gemini Robotics-ER 1.6, a significant upgrade to our reasoning-first model that enables robots to understand their environments with unprecedented precision. By enhancing spatial reasoning and multi-view understanding, we are bringing a new level of autonomy to the next generation of physical agents.
This model specializes in reasoning capabilities critical for robotics, including visual and spatial understanding, task planning, and success detection. It acts as a robot's high-level reasoning model, orchestrating tasks by natively calling tools such as Google Search for information retrieval, vision-language-action models (VLAs), or any other third-party user-defined functions.
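The orchestration pattern described here, a high-level reasoner that selects a tool (search, a VLA, or a user-defined function) and hands it an argument, can be sketched in plain Python. Everything below is illustrative: the tool names, the `plan_step` stub standing in for the reasoning model, and the string outputs are assumptions, not the actual Gemini Robotics API.

```python
# Hypothetical sketch of a reasoning-model-as-orchestrator loop.
# Stand-ins below are NOT real Gemini Robotics calls.

def web_search(query: str) -> str:
    return f"search results for {query!r}"  # stand-in for a Google Search tool

def vla_execute(instruction: str) -> str:
    return f"VLA executing: {instruction}"  # stand-in for a vision-language-action model

# The registry of callable tools the reasoner can dispatch to.
TOOLS = {"search": web_search, "act": vla_execute}

def plan_step(observation: str) -> tuple[str, str]:
    """Stub for the high-level reasoning model: map an observation
    to a (tool name, tool argument) decision."""
    if "unknown object" in observation:
        return "search", "what is this object"
    return "act", "pick up the object"

def run_agent(observation: str) -> str:
    tool, arg = plan_step(observation)
    return TOOLS[tool](arg)

print(run_agent("unknown object on table"))
print(run_agent("cup on table"))
```

The design point is the separation of concerns: the reasoning model only decides *which* tool to invoke and with what argument, while execution is delegated, which is what lets a single high-level model drive search, VLAs, and arbitrary third-party functions.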
Gemini Robotics-ER 1.6 shows significant improvement over both Gemini Robotics-ER 1.5 and Gemini 3.0 Flash, specifically enhancing spatial and physical reasoning capabilities such as pointing, counting, and success detection. We are also unlocking a new capability: instrument reading, enabling robots to read complex gauges and sight glasses, a use case we discovered through close collaboration with our partner, Boston Dynamics.