
Alongside the news that Boston Dynamics is going to let its robot dog, Spot, out of its laboratory for the first time, the company has released a new video of Atlas, its spectacular bipedal robot that’s previously been seen doing everything from parkour to backflips. In this latest video, Atlas does a small gymnastics routine, consisting of a number of somersaults, a short handstand, a 360-degree spinning jump, and even a balletic split leap.

What’s most impressive is seeing Atlas tie all these moves together into one pretty cohesive routine. In the video’s description, Boston Dynamics says that it’s using a “model predictive controller” to blend from one maneuver to the next. Presumably each somersault gives the robot a fair amount of forward momentum, but at no point in the video does it seem to lose its balance as a result. Amazingly, Atlas is able to roll gracefully along its back without any of its machinery getting squashed or tangled.
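Boston Dynamics hasn’t published the controller itself, but the basic idea of model predictive control is easy to sketch: at every control tick, simulate many candidate action sequences a short way into the future through a dynamics model, score them, and execute only the first action of the best one before re-planning. Below is a minimal random-shooting MPC sketch in Python, with a toy one-dimensional model standing in for Atlas’s real dynamics; every number and function here is illustrative, not anything from Boston Dynamics.

```python
import numpy as np

def step(state, u, dt=0.02):
    """Toy double-integrator dynamics: state = (position, velocity)."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + u * dt])

def mpc_action(state, target, horizon=20, n_candidates=256):
    """Random-shooting MPC: sample candidate control sequences, roll each
    through the model, keep the first action of the cheapest rollout."""
    rng = np.random.default_rng()
    seqs = rng.uniform(-10.0, 10.0, size=(n_candidates, horizon))
    costs = np.zeros(n_candidates)
    for i, seq in enumerate(seqs):
        s = state.copy()
        for u in seq:
            s = step(s, u)
            costs[i] += (s[0] - target) ** 2 + 0.01 * u**2
    return seqs[np.argmin(costs), 0]

# Re-planning every tick is what produces the blending: change the
# target mid-run and the plan smoothly adapts to the new maneuver.
state = np.array([0.0, 0.0])
for t in range(100):
    target = 1.0 if t < 50 else -1.0  # "switch maneuvers" halfway through
    state = step(state, mpc_action(state, target))
print("final state:", state)
```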

Cancer is one of humanity’s leading killers, and the main reason is that it’s often hard to detect until it’s too late. But that might be about to change. Researchers have developed a new type of AI-powered blood test that can accurately detect more than 50 different types of cancer and even identify where in the body it is located.

There are just so many types of cancer that it’s virtually impossible to keep an eye out for all of them through routine tests. Instead, the disease usually isn’t detected until doctors begin specifically looking for it, after a patient experiences symptoms. And in many cases, by then it can be too late.

Ideally, there would be a routine test patients could undergo that would flag any type of cancer budding in the body, giving treatment the best shot at success. And that’s just what the new study is working toward.
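In machine-learning terms, the test as described is a multi-class classification problem: given measurements from a blood sample, predict either “no cancer” or one of dozens of cancer types, which doubles as a guess at where the disease sits in the body. Here is a toy sketch with synthetic stand-in features; nothing below reflects the study’s actual data, features, or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: rows are blood samples, columns are assay
# features, labels are "healthy" or a hypothetical cancer site.
rng = np.random.default_rng(42)
classes = ["healthy", "lung", "breast", "colorectal", "pancreatic"]
X = rng.normal(size=(1000, 50))
y = rng.integers(0, len(classes), size=1000)
X += np.eye(len(classes), 50)[y] * 2.0  # give each class a faint signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# One model answers both questions at once -- "is there cancer?" and
# "which type / where?" -- because the predicted class carries both.
print("held-out accuracy:", clf.score(X_te, y_te))
print("first prediction:", classes[clf.predict(X_te[:1])[0]])
```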

An artificial intelligence can accurately translate thoughts into sentences, at least for a limited vocabulary of 250 words. The system may bring us a step closer to restoring speech to people who have lost the ability because of paralysis.

Joseph Makin at the University of California, San Francisco, and his colleagues used deep learning algorithms to study the brain signals of four women as they spoke. The women, who all have epilepsy, already had electrodes attached to their brains to monitor seizures.
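The group frames decoding as a translation problem: encode a window of multi-channel brain recordings, then decode it into a word sequence, much as machine translation maps between languages. Here’s a minimal encoder-decoder sketch in PyTorch; the channel count, window length, and layer sizes are invented for illustration, and this is the general shape of the approach rather than the authors’ actual architecture.

```python
import torch
import torch.nn as nn

class BrainToText(nn.Module):
    """Sketch of an encoder-decoder brain-to-text model: a GRU encodes
    a window of multi-channel neural signals, and a second GRU decodes
    a word sequence over a small (here 250-word) vocabulary."""
    def __init__(self, n_channels=64, vocab_size=250, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, signals, words):
        # signals: (batch, time, channels); words: (batch, seq) token ids
        _, h = self.encoder(signals)   # compress the recording into h
        dec, _ = self.decoder(self.embed(words), h)
        return self.out(dec)           # (batch, seq, vocab) word logits

# Smoke test on random stand-in data: one second of 64-channel signal
# at 200 Hz, decoded against an 8-word target sentence.
model = BrainToText()
logits = model(torch.randn(4, 200, 64), torch.randint(0, 250, (4, 8)))
print(logits.shape)  # torch.Size([4, 8, 250])
```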

“Proprioceptive Control of an Over-Actuated Hexapod Robot in Unstructured Terrain,” by Marko Bjelonic, Navinda Kottege and Philipp Beckerle from Technische Universität Darmstadt and CSIRO, Brisbane, Australia, was presented at IROS 2016 in Daejeon, South Korea.

“We are not there yet but we think this could be the basis of a speech prosthesis,” said Dr Joseph Makin, co-author of the research from the University of California, San Francisco.

Writing in the journal Nature Neuroscience, Makin and colleagues reveal how they developed their system by recruiting four participants who had electrode arrays implanted in their brains to monitor epileptic seizures.

These participants were asked to read aloud from 50 set sentences multiple times, including “Tina Turner is a pop singer”, and “Those thieves stole 30 jewels”. The team tracked their neural activity while they were speaking.
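Because the experiment loops over a fixed set of sentences, the training data has a convenient shape: every repetition of a sentence yields one (neural recording, word sequence) pair, and whole repetitions can be held out to check that the system decodes new brain activity rather than recalling old recordings. A hypothetical layout in Python, with random arrays standing in for the real signals:

```python
import numpy as np

# Hypothetical layout of the training data described above: each set
# sentence is read aloud several times, and every reading yields one
# (neural recording, word sequence) pair. Arrays here are stand-ins.
sentences = ["tina turner is a pop singer",
             "those thieves stole thirty jewels"]  # ... up to the 50 used
n_repetitions, n_timesteps, n_channels = 10, 200, 64

rng = np.random.default_rng(0)
pairs = []
for sentence in sentences:
    for rep in range(n_repetitions):
        recording = rng.normal(size=(n_timesteps, n_channels))
        pairs.append((recording, sentence.split(), rep))

# Hold out whole repetitions, so the test asks "decode a known sentence
# from brain activity you've never seen", not "recall an old recording".
train = [(x, w) for x, w, rep in pairs if rep < 8]
test = [(x, w) for x, w, rep in pairs if rep >= 8]
print(len(train), "training pairs,", len(test), "held-out pairs")
```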

Researchers have used artificial intelligence to detect Vietnam War-era bomb craters in Cambodia from satellite images—with the hope that it can help find unexploded bombs.

The new method increased true bomb crater detection by more than 160 percent over standard methods.

The model, combined with declassified U.S. military records, suggests that 44 to 50 percent of the bombs in the area studied may remain unexploded.
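The unexploded-ordnance estimate follows from simple bookkeeping: declassified records say how many bombs fell on an area, each detected crater accounts for one detonation, and the gap between the two is what may still be in the ground. A back-of-the-envelope sketch with hypothetical counts (the study’s real numbers are not reproduced here):

```python
# Hypothetical counts for one surveyed area -- illustrative only.
bombs_dropped = 1000  # from declassified U.S. military records
craters_found = 530   # craters the model detects in satellite imagery

unexploded = bombs_dropped - craters_found
print(f"estimated unexploded: {unexploded / bombs_dropped:.0%}")  # 47%

# The "more than 160 percent" headline compares true detections by the
# model against a standard method on the same imagery:
standard_hits, model_hits = 200, 530
print(f"improvement: {(model_hits - standard_hits) / standard_hits:.0%}")  # 165%
```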

In recent years, researchers worldwide have been trying to develop sensors that could replicate the human sense of touch in robots and enhance their manipulation skills. While some of these sensors have achieved remarkable results, most existing solutions have small sensitive fields or can only gather low-resolution images.

A team of researchers at UC Berkeley recently developed a new multi-directional tactile sensor, called OmniTact, that overcomes some of the limitations of previously developed sensors. OmniTact, presented in a paper pre-published on arXiv and set to be presented at ICRA 2020, acts as an artificial fingertip that allows a robot to sense the properties of objects it is holding or manipulating.

“Our lab recognized the need for a sensor for general robotic manipulation tasks with expanded capabilities compared to current sensors,” Frederik Ebert, one of the researchers who carried out the study, told TechXplore. “Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals. For example, the GelSight sensor provides high-resolution (i.e., 400×400 pixel) images but is large and flat, providing sensitivity on only one side, while the OptoForce sensor is curved, but only provides force readings at a single point.”
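According to the paper, OmniTact gets its multi-directional sensitivity by embedding several micro-cameras under a gel skin, so the fingertip and its sides are imaged at once. The sketch below shows how a consumer of such a sensor might assemble the per-camera frames into a single observation for a downstream network; the five-camera count, image size, and reader function are assumptions for illustration, with random arrays standing in for real frames.

```python
import numpy as np

def read_camera(cam_id, rng):
    """Stand-in for grabbing one micro-camera frame; a real driver would
    return an image of the gel skin deforming around a contact."""
    return rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

def tactile_observation(n_cameras=5, seed=0):
    """Stack every camera's view into one (n_cameras, H, W, 3) array so a
    downstream network sees the fingertip and its sides at once -- the
    multi-directional coverage that flat sensors like GelSight lack."""
    rng = np.random.default_rng(seed)
    return np.stack([read_camera(i, rng) for i in range(n_cameras)])

obs = tactile_observation()
print(obs.shape)  # (5, 64, 64, 3)
```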