The new processor stores data in modified DNA molecules and uses microfluidic channels to perform basic computations.
Learn about the exciting career shift of Remi Cadene, from Tesla to Hugging Face, and their new open robotics project.
OpenAI has announced new additions to its board of directors, including CEO Sam Altman, who lost his seat in a power struggle months ago.
The film industry, always at the forefront of technological innovation, is increasingly embracing artificial intelligence (AI) to revolutionize movie production, distribution, and marketing. From script analysis to post-production, AI is already reshaping how movies are made and consumed. Let’s explore the current applications of AI in movie studios and speculate on future uses, highlighting real examples and the transformative impact of these technologies.
AI’s infiltration into the movie industry begins at the scriptwriting stage. Tools like ScriptBook use natural language processing to analyze scripts, predict box office success, and offer insights into plot and character development. For instance, 20th Century Fox employed AI to analyze the script of Logan, which helped in making informed decisions about the movie’s plot and themes. In pre-production, AI has also aided casting and location scouting. Warner Bros. partnered with Cinelytic to use AI for casting decisions, evaluating an actor’s market value to predict a film’s financial success. In location scouting, AI algorithms can sift through thousands of hours of footage to identify suitable filming locations, streamlining what was once a time-consuming process.
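To make the idea concrete, here is a minimal toy sketch of what script analysis of this kind involves: extract crude text features from a screenplay and score them with a linear model. The feature names and weights below are illustrative assumptions, not ScriptBook’s actual model.

```python
# Toy script analysis: crude text features scored by a hypothetical
# linear model. A real system would learn its weights from historical
# box-office data; these are invented for illustration.

def extract_features(script: str) -> dict:
    """Count simple text signals a script-analysis tool might use."""
    lines = script.splitlines()
    dialogue = [l for l in lines if l.strip().startswith('"')]
    words = script.split()
    return {
        "length": len(words),
        "dialogue_ratio": len(dialogue) / max(len(lines), 1),
        "action_words": sum(w.lower() in {"fight", "chase", "explosion"}
                            for w in words),
    }

# Hypothetical learned weights (assumption, for illustration only).
WEIGHTS = {"length": 0.001, "dialogue_ratio": 2.0, "action_words": 0.5}

def score(script: str) -> float:
    """Weighted sum of features: a stand-in for a box-office predictor."""
    feats = extract_features(script)
    return sum(WEIGHTS[k] * v for k, v in feats.items())

demo = '"I told you to run."\nA chase through the market. An explosion rocks the square.'
print(round(score(demo), 3))  # prints 2.015
```

Real tools go far beyond bag-of-words counts, modeling plot arcs and character sentiment, but the pipeline shape (features in, predicted performance out) is the same.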
During filmmaking, AI plays a crucial role in visual effects (VFX). Disney’s FaceDirector software can generate composite expressions from multiple takes, enabling directors to adjust an actor’s performance in post-production. This technology was notably used in Avengers: Infinity War to perfect emotional expressions in complex CGI scenes. Similarly, AI-driven software like deepfake technology, though controversial, has been used to create realistic face swaps in movies. For instance, it was used in The Irishman to de-age actors, offering a cost-effective alternative to traditional CGI. Additionally, AI is used in color grading and editing. IBM Watson was used to create the movie trailer for Morgan, analyzing visuals, sounds, and compositions from other movie trailers to determine what would be most appealing to audiences.
Just as you would teach a child to walk, scientists are teaching AI rules and regulations to make it more efficient.
On socially compliant navigation: Researchers show how real-world RL-based fine-tuning can enable mobile robots to adapt on the fly to the behavior of humans, to obstacles, and to other challenges associated with real-world navigation:
Abstract.
We propose SELFI, an online reinforcement learning approach for fine-tuning a control policy trained with model-based learning. SELFI combines the best parts of data-efficient model-based learning with flexible model-free reinforcement learning, alleviating the limitations of both. We formulate a combined objective: the objective of the model-based learning plus the learned Q-value from model-free reinforcement learning. By maximizing this combined objective during online learning, we improve the performance of the pre-trained policy in a stable manner. The main takeaways from our method are:
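The core idea in the abstract, maximizing a model-based objective plus a learned Q-value, can be sketched in a few lines. Everything below is a toy stand-in: the two objective functions, the weighting `beta`, and the sampling-based maximization are assumptions for illustration, not the paper’s actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_based_objective(action: np.ndarray) -> float:
    """Stand-in for the pre-trained model-based objective
    (e.g. predicted progress minus collision penalty)."""
    return -float(np.sum((action - 0.3) ** 2))

def learned_q_value(action: np.ndarray) -> float:
    """Stand-in for a Q-function fitted online with model-free RL."""
    return -float(np.sum((action - 0.7) ** 2))

def combined_objective(action: np.ndarray, beta: float = 1.0) -> float:
    # SELFI-style combination: model-based term plus learned Q-value.
    # The weighting `beta` is an assumption; the paper may differ.
    return model_based_objective(action) + beta * learned_q_value(action)

# Pick the action that maximizes the combined objective over random
# candidates (a crude stand-in for the online policy-improvement step).
candidates = rng.uniform(0, 1, size=(256, 2))
best = max(candidates, key=combined_objective)
```

With these toy objectives the combined optimum sits between the two terms’ individual optima (near 0.5 per dimension), which is the point of the combination: the Q-value nudges the policy away from what the model-based objective alone would choose.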
On the way to rejuvenation using AI and multidisciplinary knowledge!
An AI-generated small-molecule inhibitor treats fibrosis in vivo and in phase I clinical trials.
At the heart of AI, matrix math has just seen its biggest boost “in more than a decade.”
A team of engineers, physicists, and data scientists from Princeton University and the Princeton Plasma Physics Laboratory (PPPL) has used artificial intelligence (AI) to predict—and then avoid—the formation of a specific type of plasma instability in magnetic confinement fusion tokamaks. The researchers built and trained a model using past experimental data from operations at the DIII-D National Fusion Facility in San Diego, Calif., before proving through real-time experiments that their model could forecast so-called tearing mode instabilities up to 300 milliseconds in advance—enough time for an AI controller to adjust operating parameters and avoid a tear in the plasma that could potentially end the fusion reaction.
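The predict-and-avoid loop described above can be sketched as follows. This is a toy illustration of the control pattern only: the risk model, the beam-power actuator, the threshold, and the step size are all invented assumptions, not the DIII-D system.

```python
# Toy predict-and-avoid loop: a learned model forecasts instability
# risk some horizon ahead, and a controller backs off an actuator
# setting before the predicted tearing mode occurs. All numbers and
# signals here are illustrative assumptions.

RISK_THRESHOLD = 0.5
HORIZON_STEPS = 3  # e.g. 300 ms ahead at a 100 ms control period

def predict_risk(beam_power: float) -> float:
    """Stand-in for the trained model: higher beam power maps to
    higher predicted tearing-mode risk, clipped to [0, 1]."""
    return min(1.0, max(0.0, (beam_power - 1.0) / 2.0))

def control_step(beam_power: float) -> float:
    """If forecast risk exceeds the threshold, reduce beam power."""
    if predict_risk(beam_power) > RISK_THRESHOLD:
        return beam_power * 0.9  # back off actuation to avoid the tear
    return beam_power

power = 3.5
history = []
for _ in range(10):
    power = control_step(power)
    history.append(round(power, 3))
```

Starting from an aggressive setting, the controller ratchets power down until the predicted risk falls below the threshold, then holds steady; the real system does this continuously in real time against a far richer plasma state.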
Sandra Wachter, profiled as part of TechCrunch’s series on women in AI, is a professor of data ethics at the University of Oxford.