
Probing the chemical ‘reactome’ with high-throughput experimentation data

Using #AI to define the chemical “reactome”—the important functional sites in small molecules.


High-throughput experimentation (HTE) has great utility for chemical synthesis. However, robust interpretation of high-throughput data remains a challenge. Now, a flexible analyser has been developed, built on a combined machine learning and statistical analysis framework, which can reveal hidden chemical insights from historical HTE data of varying scopes, sizes and biases.

Top 10 newest and most advanced humanoid robots in the world. Humanoid robot technology | Pro Robots

We are already living in the era of the fourth industrial revolution, but in the near future we will face another one that could change everything: the revolution of humanoid robots, versatile, intelligent and dexterous machines that can not only assist humans but also replace them in hard-to-reach places. In this video, we'll cover the top 10 newest and most advanced humanoid robots in the world, and the technologies that will make them truly versatile. Onward to a brighter future!


0:00 A breakthrough in humanoid robots
1:17 What technologies could make robots as dexterous as humans?
3:46 Digit, the first commercial humanoid robot from Agility Robotics
5:18 New humanoid robot from Singapore
6:45 What kind of humanoid robot has OpenAI invested in?
7:34 New Apollo robot from Apptronik
9:00 CyberOne humanoid robot project from Xiaomi
10:20 Unitree's H1 robot
11:07 XPENG's agile and stable robot PX5
12:05 Sanctuary AI's most agile robot Phoenix
13:13 The world's most advanced humanoid robot by Figure AI
15:18 Tesla Bot: Elon Musk's humanoid robot
16:15 The world's most advanced humanoid robot from Boston Dynamics

Boston Dynamics Atlas. If you’ve been following robotics, you’ve likely seen this humanoid robot in action. Atlas is a pinnacle of robotic achievement, showcasing impressive mobility and coordination. Its advanced control system allows it to perform backflips, handstands, and navigate complex environments with ease. Atlas is not just a demonstration of technological prowess; it’s a glimpse into the future of robotics assisting in real-world scenarios.

Moving on to the Valkyrie robot from NASA. Initially designed for space exploration, Valkyrie boasts a humanoid form with an emphasis on strength and adaptability. Its design includes 44 degrees of freedom, making it highly flexible and capable of mimicking human movements. While initially intended for space missions, Valkyrie’s applications extend to disaster response and exploration of challenging terrains.

Now, let’s talk about the Tesla Bot. Yes, you heard it right, Tesla is venturing into humanoid robotics. Elon Musk unveiled the Tesla Bot with a vision to eliminate dangerous, repetitive, and boring tasks performed by humans. While specific details are still emerging, the idea is to create a humanoid robot using Tesla’s expertise in electric vehicles and AI. The Tesla Bot aims to be a general-purpose, capable machine for a variety of everyday tasks.

Multiple AI models help robots execute complex plans more transparently

The HiP framework, developed by MIT CSAIL, creates detailed plans for robots using the expertise of three different foundation models, helping them execute multi-step tasks in households, factories, and construction.


“All we want to do is take existing pre-trained models and have them successfully interface with each other,” says Anurag Ajay, a PhD student in the MIT Department of Electrical Engineering and Computer Science (EECS) and a CSAIL affiliate. “Instead of pushing for one model to do everything, we combine multiple ones that leverage different modalities of internet data. When used in tandem, they help with robotic decision-making and can potentially aid with tasks in homes, factories, and construction sites.”

These models also need some form of "eyes" to understand the environment they're operating in and to correctly execute each sub-goal. The team used a large video diffusion model, which gathers geometric and physical information about the world from internet footage, to augment the initial planning produced by the LLM. The video model then generates an observation trajectory plan, refining the LLM's outline to incorporate this physical knowledge.

This process, known as iterative refinement, allows HiP to reason about its ideas, taking in feedback at each stage to generate a more practical outline. The flow of feedback is similar to writing an article: an author sends a draft to an editor, the revisions are incorporated, and the publisher then reviews for any last changes and finalizes the piece.
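The refinement loop described above can be sketched in code. The following is a minimal, illustrative Python sketch of the idea, not the actual HiP implementation: a stand-in "LLM planner" proposes sub-goals, a stand-in "video-model critic" scores their physical plausibility, and low-scoring sub-goals are revised until the plan stabilizes. All function names and the scoring heuristic are hypothetical placeholders.

```python
# Toy sketch of hierarchical, iterative plan refinement in the spirit of HiP.
# The three stubs below stand in for large pre-trained models; the real system
# uses an LLM planner and a video diffusion model, not these heuristics.

def propose_subgoals(task):
    """Stub LLM planner: break a task into a short list of sub-goals."""
    return [f"{task}: step {i}" for i in range(1, 4)]

def physical_feasibility(subgoal):
    """Stub video-model critic: score how physically plausible a sub-goal is.
    (A toy heuristic; a real critic would ground this in visual data.)"""
    return 0.9 if "refined" in subgoal or "step" in subgoal else 0.2

def refine(subgoal):
    """Stub refinement: annotate a sub-goal with extra physical context."""
    return subgoal + " [refined with geometric context]"

def iterative_refinement(task, threshold=0.5, max_rounds=3):
    """Propose a plan, then loop: score each sub-goal and revise the
    implausible ones, stopping once every sub-goal passes the threshold."""
    plan = propose_subgoals(task)
    for _ in range(max_rounds):
        scores = [physical_feasibility(g) for g in plan]
        if all(s >= threshold for s in scores):
            break  # plan is physically plausible; stop refining
        plan = [refine(g) if s < threshold else g
                for g, s in zip(plan, scores)]
    return plan

plan = iterative_refinement("put the cup in the sink")
print(plan)
```

The key design point is the feedback loop: each model critiques and amends the previous model's output rather than one model attempting the whole plan end-to-end.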