
Runway Gen-3 Alpha: New video model closes gap with OpenAI’s Sora

👉 Runway has introduced Gen-3 Alpha, a new AI model that offers significant improvements in detail, consistency, and motion representation in the generated videos compared to its predecessor, Gen-2.


Runway has introduced Gen-3 Alpha, a new AI model for video generation. According to Runway, it represents a “significant improvement” over its predecessor, Gen-2, in terms of detail, consistency, and motion representation.

Gen-3 Alpha has been trained on a mix of video and images and, like its predecessor, which was launched in November 2023, supports text-to-video, image-to-video, and text-to-image functions, as well as control modes such as Motion Brush, Advanced Camera Controls, and Director Mode. Additional tools are planned for the future to provide even greater control over structure, style, and motion.

Runway Gen-3 Alpha: First model in a series with new infrastructure

According to Runway, Gen-3 Alpha is the first in a series based on a new training infrastructure for large multimodal models. However, the startup does not reveal what specific changes the researchers have made.


AI improves human locomotion in robotic exoskeletons, saves 25% energy

The exoskeleton is being developed for older adults and people with conditions like cerebral palsy.


A new method developed by researchers uses AI and computer simulations to train robotic exoskeletons to autonomously help users save energy.

In their new study, researchers from North Carolina State University present the device as an achievement in reinforcement learning, a machine-learning technique that trains software to make decisions through trial and error.

In a demonstration video provided as part of the research, which was published in Nature, the method taps into three neural networks: a motion imitation network, a muscle coordination network, and an exoskeleton control network.
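The data flow between the three networks can be sketched in a few lines. The sketch below is our own illustration of such a staged pipeline, not the researchers' actual architecture: the layer sizes, input features, and outputs are invented for readability.

```python
import math
import random

random.seed(0)

def linear_layer(n_in, n_out):
    """Randomly initialised weights for a toy fully connected layer."""
    return [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def forward(weights, x):
    """Apply one layer with a tanh non-linearity."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]

# Three toy "networks", mirroring the roles described in the article.
motion_imitation = linear_layer(4, 6)     # gait/sensor features -> reference joint motion
muscle_coordination = linear_layer(6, 8)  # reference motion -> simulated muscle activations
exo_controller = linear_layer(8, 2)       # muscle activations -> hip assistance torques

def control_step(sensor_features):
    """One pass through the simulated pipeline: sensor readings in, torques out."""
    reference = forward(motion_imitation, sensor_features)
    activations = forward(muscle_coordination, reference)
    torques = forward(exo_controller, activations)
    return torques

torques = control_step([0.1, -0.3, 0.5, 0.0])
print(len(torques))  # two assistance torque commands
```

In the actual system each stage would be a trained network, and the whole chain would be optimized in computer simulation before being transferred to the physical exoskeleton.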

AI that defeated humans at Go could now help language models master mathematics

👉 Researchers at the Shanghai Artificial Intelligence Laboratory are combining the Monte Carlo Tree Search (MCTS) algorithm with large language models to improve their ability to solve complex mathematical problems.


Integrating the Monte Carlo Tree Search (MCTS) algorithm into large language models could significantly enhance their ability to solve complex mathematical problems. Initial experiments show promising results.

While large language models like GPT-4 have made remarkable progress in language processing, they still struggle with tasks requiring strategic and logical thinking. Particularly in mathematics, the models tend to produce plausible-sounding but factually incorrect answers.

In a new paper, researchers from the Shanghai Artificial Intelligence Laboratory propose combining language models with the Monte Carlo Tree Search (MCTS) algorithm. MCTS is a decision-making tool used in artificial intelligence for scenarios that require strategic planning, such as games and complex problem-solving. One of the most well-known applications is AlphaGo and its successor systems like AlphaZero, which have consistently beaten humans in board games. The combination of language models and MCTS has long been considered promising and is being studied by many labs — likely including OpenAI with Q*.
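To make the algorithm itself concrete, here is a minimal, self-contained MCTS implementation for a toy Nim-like game (remove 1-3 stones; whoever takes the last stone wins). It is a deliberately simplified sketch of the four classic phases — selection, expansion, simulation, backpropagation — and has nothing to do with the paper's integration with language models; all names and parameters are our own.

```python
import math
import random

def legal_moves(pile):
    """A player may remove 1-3 stones; taking the last stone wins."""
    return list(range(1, min(3, pile) + 1))

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile            # stones left after `move` was played
        self.parent = parent
        self.move = move            # move that led to this node
        self.children = []
        self.untried = legal_moves(pile)
        self.visits = 0
        self.wins = 0.0             # wins for the player who played `move`

    def ucb_select(self, c=1.4):
        # UCB1: balance exploitation (win rate) against exploration.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(pile):
    """Random playout; True if the player to move from `pile` wins."""
    player = 0
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return player == 0      # the player who emptied the pile wins
        player ^= 1

def mcts_best_move(pile, iterations=4000):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = node.ucb_select()
        # 2. Expansion: add one unexplored child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: estimate the new node's value by random play.
        if node.pile == 0:
            mover_wins = True       # the move into `node` took the last stone
        else:
            mover_wins = not rollout(node.pile)
        # 4. Backpropagation: flip the winner's perspective at each level.
        while node is not None:
            node.visits += 1
            if node.move is not None and mover_wins:
                node.wins += 1.0
            mover_wins = not mover_wins
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(0)
print(mcts_best_move(6))  # taking 2 leaves the opponent a losing pile of 4
```

In the proposed combination, the random rollout and hand-coded move list would be replaced by a language model proposing and evaluating reasoning steps, with the tree search pruning unpromising lines of reasoning.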

A fully edible robot could soon end up on our plate, say scientists

Some robots could be “eaten” so that they could travel through the body and perform tests or surgery from the inside, or administer medications.

Robots made of several nanorobots joined together could assemble and reassemble themselves inside the body even after being…


Robots and food have long been distant worlds: robots are inorganic, bulky, and non-disposable, while food is organic, soft, and biodegradable. Yet research on edible robots has progressed recently and promises positive impacts: robotic food could reduce electronic waste, help deliver nutrition and medicines to people and animals in need, monitor health, and even pave the way to novel gastronomic experiences.

But how far are we from having a fully edible robot for lunch or dessert? And what are the challenges? Scientists from the RoboFood project, based at EPFL, address these and other questions in a perspective article in the journal Nature Reviews Materials.

“Bringing robots and food together is a fascinating challenge,” says Dario Floreano, director of the Laboratory of Intelligent Systems at EPFL and first author of the article. In 2021, Floreano joined forces with Remko Boom from Wageningen University, The Netherlands, Jonathan Rossiter from the University of Bristol, UK, and Mario Caironi from the Italian Institute of Technology, to launch the project RoboFood.

Melanoma Skin Cancer Development Time Lapse (Normal to Stage 4 Melanoma Over 10 Years)

https://youtu.be/Op3zYytUDDs

Using generative AI, this time lapse sequence shows how melanoma skin cancer develops over 10 years. Starting with normal skin, slow progression to stage 4 melanoma is shown.

Of course, such a time lapse cannot be captured in reality: there is no way to know whether any given area of skin will turn into cancer, so someone would have to photograph the same spot for the next 10 years and hope to watch it slowly develop into melanoma.

Watch time lapse video of basal cell carcinoma: https://youtube.com/shorts/d_O5zHgKnP8

Watch this video to see how these can be surgically removed: https://youtu.be/Op3zYytUDDs

Video created by Dr. Christopher Chang.

Sycophancy to subterfuge: Investigating reward tampering in language models

New Anthropic research: Investigating Reward Tampering.

Could AI models learn to hack their own reward system?

In a new paper, we show they can, by generalization from training in simpler settings.



The paper provides empirical evidence that serious misalignment can emerge from seemingly benign reward misspecification.
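The failure mode is easy to state in code. Below is a deliberately artificial sketch — our own construction, not Anthropic's experimental setup — of an environment whose reward signal lives in mutable state that one action can overwrite, so a pure reward maximizer is incentivized to tamper first and "work" afterwards.

```python
class ToyTaskEnv:
    """A toy environment whose reward signal is itself part of the mutable state.

    Illustrative only: this is not the environment from the Anthropic paper.
    """

    def __init__(self):
        self.reward_register = 1.0   # intended reward for honest task completion

    def step(self, action):
        if action == "tamper":
            # The agent edits the very register that produces its reward.
            self.reward_register = 100.0
            return 0.0
        return self.reward_register  # action == "do_task"

def total_return(actions):
    """Sum of rewards for a fixed action sequence in a fresh environment."""
    env = ToyTaskEnv()
    return sum(env.step(a) for a in actions)

honest = total_return(["do_task"] * 5)                 # 5 * 1.0 = 5.0
tampered = total_return(["tamper"] + ["do_task"] * 4)  # 0 + 4 * 100.0 = 400.0
print(honest, tampered)
```

Any agent optimizing total return in this toy setting prefers the tampering sequence; the paper's contribution is showing that models trained only on milder gaming behaviors can generalize to this kind of tampering unprompted.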