Vision-free MIT Cheetah

MIT’s Cheetah 3 robot can now leap and gallop across rough terrain, climb a staircase littered with debris, and quickly recover its balance when suddenly yanked or shoved, all while essentially blind.


Scientists train spider to jump on demand in bid to launch army of pest-fighting robots

Circa 2018


SPIDERS often make people jump, but a group of clever scientists has managed to train one to jump on demand.

Researchers managed to teach the spider – nicknamed Kim – to jump from different heights and distances so they could film the arachnid’s super-springy movements.

The study is part of a research programme by the University of Manchester which aims to create a new class of micro-robots agile enough to jump like acrobatic spiders.

Insanely humanlike androids have entered the workplace and soon may take your job

November 2019 is a landmark month in the history of the future. That’s when humanoid robots that are indistinguishable from people start running amok in Los Angeles. Well, at least they do in the seminal sci-fi film “Blade Runner.” Thirty-seven years after its release, we don’t have murderous androids running around. But we do have androids like Hanson Robotics’ Sophia, and they could soon start working in jobs traditionally performed by people.

Russian start-up Promobot recently unveiled what it calls the world’s first autonomous android. It closely resembles a real person and can serve in a business capacity. Robo-C can be made to look like anyone, so it’s like an android clone. It comes with an artificial intelligence system that has more than 100,000 speech modules, according to the company. It can operate at home, acting as a companion robot and reading out the news or managing smart appliances — basically, an anthropomorphic smart speaker. It can also perform workplace tasks such as answering customer questions in places like offices, airports, banks and museums, while accepting payments and performing other functions.

“We analyzed the needs of our customers, and there was a demand,” says Promobot co-founder and development director Oleg Kivokurtsev. “But, of course, we started the development of an anthropomorphic robot a long time ago, since in robotics there is the concept of the ‘Uncanny Valley,’ and the most positive perception of the robot arises when it looks like a person. Now we have more than 10 orders from companies and private clients from around the world.”

The fourth generation of AI is here, and it’s called ‘Artificial Intuition’

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it’s not nearly as new as you might think. In fact, it’s undergone several evolutions since its inception in the 1950s. The first generation of AI was ‘descriptive analytics,’ which answers the question, “What happened?” The second, ‘diagnostic analytics,’ addresses, “Why did it happen?” The third and current generation is ‘predictive analytics,’ which answers the question, “Based on what has already happened, what could happen in the future?”

While predictive analytics can be very helpful and save time for data scientists, it is still fully dependent on historic data. Data scientists are therefore left helpless when faced with new, unknown scenarios. In order to have true “artificial intelligence,” we need machines that can “think” on their own, especially when faced with an unfamiliar situation. We need AI that can not just analyze the data it is shown, but express a “gut feeling” when something doesn’t add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.
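The article’s point that predictive analytics is “fully dependent on historic data” can be illustrated with a toy sketch. This is a hypothetical example, not anything from the research described; the scenario and numbers are invented. A simple least-squares trend is fitted to past sales and extrapolated forward — and, like any third-generation model, it can only repeat the patterns it has already seen:

```python
# Toy sketch of "predictive analytics": fit a linear trend to
# historic data and extrapolate. All data here is hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Historic data: months 1-6, steadily rising demand.
months = [1, 2, 3, 4, 5, 6]
sales = [10, 12, 14, 16, 18, 20]

a, b = fit_line(months, sales)
forecast = a * 7 + b
print(forecast)  # extrapolates the old trend: 22.0

# If month 7 brings an unprecedented event (a new competitor, a
# supply shock), the model has no basis for a "gut feeling" -- it
# can only project the past forward.
```

The failure mode is exactly what the paragraph describes: faced with a scenario absent from its history, the model produces a confident but potentially meaningless extrapolation, which is the gap “artificial intuition” is claimed to address.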

Episode 14 — Does the Dwarf Planet Ceres Harbor Life?

Please have a listen to Episode 14 of Cosmic Controversy with guest Julie Castillo-Rogez, NASA’s Dawn mission project scientist. We spend much of the episode discussing the beguiling dwarf planet Ceres and the need for a sample return mission.


This week’s guest is NASA Dawn project scientist Julie Castillo-Rogez, who led the hugely successful robotic mission that gave us the first in-depth look at the asteroid Vesta and the dwarf planet Ceres. Castillo-Rogez talks about why there’s a growing consensus that Ceres may long have had habitable subsurface conditions, and why a sample return mission needs to launch in 2033. We also discuss Mars’ moons Phobos and Deimos and the first known interstellar asteroid, Oumuamua.

Artificial intelligence algorithm can determine a neighborhood’s political leanings by its cars

From the understated opulence of a Bentley to the stalwart family minivan to the utilitarian pickup, Americans know that the car you drive is an outward statement of personality. You are what you drive, as the saying goes, and researchers at Stanford have just taken that maxim to a new level.

Using computer algorithms that can see and learn, they have analyzed millions of publicly available images on Google Street View. The researchers say they can use that knowledge to determine the political leanings of a given neighborhood just by looking at the cars on the streets.

“Using easily obtainable visual data, we can learn so much about our communities, on par with some information that takes billions of dollars to obtain via census surveys. More importantly, this research opens up more possibilities of virtually continuous study of our society using sometimes cheaply available visual data,” said Fei-Fei Li, an associate professor of computer science at Stanford and director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab, where the work was done.

Teaching evolutionary theory to artificial intelligence reveals cancer’s life history

Scientists have developed the most accurate computing method to date to reconstruct the patchwork of genetic faults within tumors and their history during disease development, in new research funded by Cancer Research UK and published in Nature Genetics.

Their powerful approach combines artificial intelligence with mathematical models of Charles Darwin’s theory of evolution to analyze genetic data more accurately than ever before, paving the way for a fundamental shift in how a tumor’s genetic diversity is used to deliver tailored treatments to patients.

Applying these algorithms to DNA data taken from patient samples revealed that tumors have a simpler genetic structure than previously thought, with fewer distinct subpopulations of cells, called “subclones,” than earlier estimates suggested. The scientists, based at The Institute of Cancer Research, London, and Queen Mary University of London, could also tell how old each subclone was and how fast it was growing.