SPIDERS often make people jump, but a team of clever scientists has managed to train one to jump on demand.
Researchers managed to teach the spider – nicknamed Kim – to jump from different heights and distances so they could film the arachnid’s super-springy movements.
The study is part of a research programme at the University of Manchester that aims to create a new class of micro-robots agile enough to jump like acrobatic spiders.
November 2019 is a landmark month in the history of the future. That’s when humanoid robots that are indistinguishable from people start running amok in Los Angeles. Well, at least they do in the seminal sci-fi film “Blade Runner.” Thirty-seven years after its release, we don’t have murderous androids running around. But we do have androids like Hanson Robotics’ Sophia, and they could soon start working in jobs traditionally performed by people.
Russian start-up Promobot recently unveiled what it calls the world’s first autonomous android. It closely resembles a real person and can serve in a business capacity. Robo-C can be made to look like anyone, so it’s like an android clone. It comes with an artificial intelligence system that has more than 100,000 speech modules, according to the company. It can operate at home, acting as a companion robot and reading out the news or managing smart appliances — basically, an anthropomorphic smart speaker. It can also perform workplace tasks such as answering customer questions in places like offices, airports, banks and museums, while accepting payments and performing other functions.
“We analyzed the needs of our customers, and there was a demand,” says Promobot co-founder and development director Oleg Kivokurtsev. “But, of course, we started the development of an anthropomorphic robot a long time ago, since in robotics there is the concept of the ‘Uncanny Valley,’ and the most positive perception of the robot arises when it looks like a person. Now we have more than 10 orders from companies and private clients from around the world.”
Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it’s not nearly as new as you might think. In fact, it’s undergone several evolutions since its inception in the 1950s. The first generation of AI was ‘descriptive analytics,’ which answers the question, “What happened?” The second, ‘diagnostic analytics,’ addresses, “Why did it happen?” The third and current generation is ‘predictive analytics,’ which answers the question, “Based on what has already happened, what could happen in the future?”
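The three generations described above can be illustrated with a toy example. Everything here is invented for illustration (the monthly figures, the correlation check, the naive linear trend); the article does not prescribe any particular method.

```python
# Toy illustration of the three generations of analytics.
# All data are invented for the example.

sales = [100, 110, 125, 90, 140, 150]   # monthly sales (made up)
ad_spend = [10, 11, 13, 8, 15, 16]      # monthly ad spend (made up)

# 1. Descriptive analytics: "What happened?"
mean_sales = sum(sales) / len(sales)

# 2. Diagnostic analytics: "Why did it happen?"
#    Here: check how strongly sales track ad spend.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 3. Predictive analytics: "What could happen next?"
#    Here: fit a straight line to past values and extrapolate one step.
def predict_next(ys):
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in enumerate(ys))
             / sum((x - mx) ** 2 for x in range(n)))
    return my + slope * (n - mx)

print(round(mean_sales, 1))
print(round(correlation(sales, ad_spend), 2))
print(round(predict_next(sales), 1))
```

Note how the third step still only projects the past forward, which is exactly the limitation the next paragraph raises.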
While predictive analytics can be very helpful and save data scientists time, it is still fully dependent on historical data. Data scientists are therefore left helpless when faced with new, unknown scenarios. To have true “artificial intelligence,” we need machines that can “think” on their own, especially when faced with an unfamiliar situation. We need AI that can not only analyze the data it is shown, but also express a “gut feeling” when something doesn’t add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.
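The article doesn’t say how a machine “gut feeling” is built. One common stand-in is anomaly detection: a model learns what “normal” looks like and flags inputs that don’t resemble anything it has seen. A minimal sketch, assuming a simple z-score rule with invented data and threshold:

```python
def fit(history):
    """Learn a simple 'sense of normal' (mean and spread) from past data."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n
    return mean, var ** 0.5

def feels_off(x, mean, std, threshold=3.0):
    """Flag values far outside the learned normal range."""
    if std == 0:
        return x != mean
    return abs(x - mean) / std > threshold

# Invented sensor readings clustered around 10.0
mean, std = fit([9.8, 10.1, 10.0, 9.9, 10.2, 10.0])
print(feels_off(10.1, mean, std))  # → False: looks familiar
print(feels_off(42.0, mean, std))  # → True: "doesn't add up"
```

Real intuition-like systems are far more sophisticated, but the principle is the same: react to what deviates from learned experience, not only to what matches it.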
The self-driving car could transform our ideas of space and time, enabling us to do more of the things we love and less of the ones we loathe. Here are some of the most fascinating potential uses.
Please have a listen to Episode 14 of Cosmic Controversy with guest Julie Castillo-Rogez, NASA’s Dawn mission project scientist. We spend much of the episode discussing the beguiling dwarf planet Ceres and the need for a sample return mission.
This week’s guest is NASA Dawn project scientist Julie Castillo-Rogez, who led the hugely successful robotic mission that gave us our first in-depth look at the asteroid Vesta and the dwarf planet Ceres. Castillo-Rogez talks about why there’s a growing consensus that Ceres may long have had habitable subsurface conditions, and why a sample return mission needs to launch in 2033. We also discuss Mars’s moons Phobos and Deimos and the first interstellar asteroid, ‘Oumuamua.
From the understated opulence of a Bentley to the stalwart family minivan to the utilitarian pickup, Americans know that the car you drive is an outward statement of personality. You are what you drive, as the saying goes, and researchers at Stanford have just taken that maxim to a new level.
Using computer algorithms that can see and learn, they have analyzed millions of publicly available images on Google Street View. The researchers say they can use that knowledge to determine the political leanings of a given neighborhood just by looking at the cars on the streets.
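Once the cars in each neighborhood have been detected and classified, the inference step can be surprisingly simple. The sketch below is hypothetical (the counts are invented and the study’s actual model is statistical, not a hard rule), but the Stanford team reported results of this flavor: precincts with more sedans than pickup trucks tended to lean Democratic, and the reverse for Republican.

```python
# Hypothetical aggregation step: per-neighborhood car counts (invented)
# mapped to a predicted political leaning via a sedan-vs-pickup rule.

neighborhoods = {
    "precinct_a": {"sedan": 340, "pickup": 120},
    "precinct_b": {"sedan": 95,  "pickup": 210},
}

def predicted_leaning(car_counts):
    """Toy rule of the kind the study reports as a correlation."""
    if car_counts["sedan"] > car_counts["pickup"]:
        return "Democratic"
    return "Republican"

for name, counts in neighborhoods.items():
    print(name, predicted_leaning(counts))  # precinct_a Democratic, precinct_b Republican
```

The hard part, of course, is upstream: recognizing the make and model of millions of vehicles in street-level imagery, which is where the computer-vision algorithms come in.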
“Using easily obtainable visual data, we can learn so much about our communities, on par with some information that takes billions of dollars to obtain via census surveys. More importantly, this research opens up more possibilities of virtually continuous study of our society using sometimes cheaply available visual data,” said Fei-Fei Li, an associate professor of computer science at Stanford and director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab, where the work was done.
Scientists have developed the most accurate computing method to date to reconstruct the patchwork of genetic faults within tumors and their history during disease development, in new research funded by Cancer Research UK and published in Nature Genetics.
Their powerful approach combines artificial intelligence with the mathematical models of Charles Darwin’s theory of evolution to analyze genetic data more accurately than ever before, paving the way for a fundamental shift in how cancer’s genetic diversity is used to deliver tailored treatments to patients.
Applying these new algorithms to DNA data taken from patient samples revealed that tumors had a simpler genetic structure than previously thought. The algorithms showed that tumors had fewer distinct subpopulations of cells, called “subclones,” than previously suggested. The scientists, based at The Institute of Cancer Research, London, and Queen Mary University of London, could also tell how old each subclone was and how fast it was growing.
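The paper’s own algorithms aren’t detailed here, but the general idea behind subclone reconstruction can be sketched. Mutations belonging to the same subclone appear in roughly the same fraction of sequencing reads, their variant allele frequency (VAF), so clustering mutations by VAF reveals candidate subclones. A toy one-dimensional version with invented data (real methods use probabilistic models, not a fixed gap threshold):

```python
def cluster_vafs(vafs, gap=0.08):
    """Group sorted VAFs into clusters wherever a large gap appears.
    Each cluster is a candidate subclone (toy heuristic)."""
    vafs = sorted(vafs)
    clusters = [[vafs[0]]]
    for v in vafs[1:]:
        if v - clusters[-1][-1] > gap:
            clusters.append([v])   # big jump: start a new subclone
        else:
            clusters[-1].append(v)
    return clusters

# Invented VAFs: a clonal population (~0.5) plus two subclonal ones
vafs = [0.49, 0.51, 0.50, 0.24, 0.26, 0.25, 0.11, 0.10, 0.12]
for c in cluster_vafs(vafs):
    print(f"subclone: {len(c)} mutations, mean VAF {sum(c) / len(c):.2f}")
```

Overestimating the number of such clusters is one way earlier methods could make tumors look more genetically complex than they are, which is the kind of error the new algorithms reduce.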
Tesla boss Elon Musk has been told by Germany’s economy minister that he can have whatever he needs for his new electric vehicle manufacturing plant in Berlin.
Musk and German economy minister Peter Altmaier had an hour-long meeting in Berlin on Wednesday, according to a source familiar with the matter. “The main topics were Tesla’s billions of euros worth of investment in Germany,” the source said.
The duo, who first met six years ago, also spoke about Musk’s projects in areas like space flight and autonomous driving.