
A new robot has overcome a fundamental challenge of locomotion by teaching itself how to walk.

Researchers from Google developed algorithms that helped the four-legged bot learn to walk across a range of surfaces within just hours of practice, handily beating the times set by its human overlords.

Their system uses deep reinforcement learning, a form of AI that teaches through trial and error by providing rewards for certain actions.
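The trial-and-error-with-rewards idea can be illustrated with the simplest form of reinforcement learning, tabular Q-learning. The toy corridor environment, reward, and hyperparameters below are invented for illustration only; Google's system uses deep reinforcement learning on a physical quadruped, which is far more involved, but the reward-driven update rule is the same in spirit.

```python
import random

# Toy illustration of trial-and-error learning with rewards:
# tabular Q-learning on a 5-cell corridor. Reaching the last
# cell earns a reward; the agent must discover this by trying.

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy walks right from every cell.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)}
print(policy)  # -> {0: 1, 1: 1, 2: 1, 3: 1}
```

The reward signal alone, with no demonstration of "how to walk right," is enough to shape the behavior; deep RL replaces the lookup table with a neural network so the same principle scales to high-dimensional robot states.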

AI/Humans, our brave new world, happening now.


Are we facing a golden digital age or will robots soon run the world? We need to establish ethical standards in dealing with artificial intelligence — and to answer the question: What still makes us as human beings unique?

Mankind is still decades away from self-learning machines that are as intelligent as humans. But already today, chatbots, robots, digital assistants and other artificially intelligent entities exist that can emulate certain human abilities. Scientists and AI experts agree that we are in a race against time: we need to establish ethical guidelines before the technology catches up with us. AI professor Jürgen Schmidhuber predicts that artificial intelligence will be able to control robotic factories in space, while the Swedish-American physicist Max Tegmark warns against a totalitarian AI surveillance state and the philosopher Thomas Metzinger foresees a deadly AI arms race. But Metzinger also believes that Europe in particular can play a pioneering role on the threshold of this new era by creating a binding international code of ethics.

Computer scientists from Rice, supported by collaborators from Intel, will present their results today at the Austin Convention Center as a part of the machine learning systems conference MLSys.

Many companies are investing heavily in GPUs and other specialized hardware to implement deep learning, a powerful form of artificial intelligence that’s behind digital assistants like Alexa and Siri, facial recognition, product recommendation systems and other technologies. For example, Nvidia, the maker of the industry’s gold-standard Tesla V100 Tensor Core GPUs, recently reported a 41% increase in its fourth quarter revenues compared with the previous year.

Rice researchers created a cost-saving alternative to GPUs, an algorithm called the "sub-linear deep learning engine" (SLIDE) that uses general-purpose central processing units (CPUs) without specialized acceleration hardware.
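SLIDE's reported speedup comes from using locality-sensitive hashing to retrieve only the neurons most likely to fire for a given input, rather than computing every activation in a layer. Below is a minimal sketch of that retrieval idea using signed random projections (SimHash); the layer sizes, hash parameters, and single hash table are illustrative assumptions, not the Rice implementation, which uses multiple tables and also applies the same sparsity to weight updates.

```python
import numpy as np

# Sketch of the idea behind SLIDE: hash neurons by their weight
# vectors so that, at inference time, an input retrieves only the
# bucket of neurons likely to have high activations for it.

rng = np.random.default_rng(0)
D, N, K = 64, 10_000, 12   # input dim, neurons in the layer, hash bits

W = rng.standard_normal((N, D))        # one weight row per neuron
planes = rng.standard_normal((K, D))   # shared random hyperplanes

def simhash(v):
    """K-bit signature: which side of each hyperplane v falls on."""
    return tuple((planes @ v > 0).astype(int))

# Build the hash table once: neurons with similar weight vectors collide.
table = {}
for i in range(N):
    table.setdefault(simhash(W[i]), []).append(i)

def sparse_forward(x):
    """Compute activations only for the neurons in x's hash bucket."""
    bucket = table.get(simhash(x), [])
    return {i: float(W[i] @ x) for i in bucket}

x = rng.standard_normal(D)
active = sparse_forward(x)
# Only a small fraction of the 10,000 neurons is ever touched.
print(len(active), "of", N, "neurons evaluated")
```

Because the table lookup costs far less than a dense matrix multiply, the work per input grows sub-linearly in the layer width, which is what lets commodity CPUs compete with accelerators on very wide networks.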

Chinese technology giant Alibaba recently developed an AI system for diagnosing COVID-19, the disease caused by the novel coronavirus.

Alibaba is like Amazon, Microsoft, a video game company, and a nationwide healthcare network all rolled into one, with every branch fed solutions from the company’s world-class AI department.

Per a report from Nikkei’s Asian Review (h/t TechSpot), Alibaba claims its new system can detect coronavirus in CT scans of patients’ chests with 96% accuracy, distinguishing it from viral pneumonia cases. And it takes only 20 seconds for the AI to make a determination; according to the report, humans generally take about 15 minutes to diagnose the illness, as there can be upwards of 300 images to evaluate.

The fact that self-driving trucks did not initially capture the public imagination is perhaps not entirely shocking. After all, most people have never been inside a truck, let alone a self-driving one, and don’t give them more than a passing thought. But just because trucks aren’t foremost in most people’s thoughts, doesn’t mean trucks don’t impact everyone’s lives day in and day out. Trucking is an $800 billion industry in the US. Virtually everything we buy — from our food to our phones to our furniture — reaches us via truck. Automating the movement of goods could, therefore, have at least as profound an impact on our lives as automating how we move ourselves. And people are starting to take notice.

As self-driving industry pioneers, we’re not surprised: we have been saying this for years. We founded Kodiak Robotics in 2018 with the vision of launching a freight carrier that would drive autonomously on highways, while continuing to use traditional human drivers for first- and last-mile pickup and delivery. We developed this model because our experience in the industry convinced us that today’s self-driving technology is best-suited for highway driving. While training self-driving vehicles to drive on interstate highways is complicated, hard work, it’s a much simpler, more constrained problem than driving on city streets, which have pedestrians, public transportation, bikes, pets, and other things that make cities great to live in but difficult for autonomous technology to understand and navigate.

Last summer, the National Security Commission on Artificial Intelligence asked to hear original, creative ideas about how the United States could maintain global leadership in a future enabled by artificial intelligence. RAND researchers stepped up to the challenge.


“Send us your ideas!” That was the open call for submissions about emerging technology’s role in global order put out last summer by the National Security Commission on Artificial Intelligence (NSCAI). RAND researchers stepped up to the challenge, and a wide range of ideas were submitted. Ten essays were ultimately accepted for publication.

The NSCAI, co-chaired by Eric Schmidt, the former chief executive of Alphabet (Google’s parent company), and Robert Work, the former deputy secretary of defense, is a congressionally mandated, independent federal commission set up last year “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States.”

The commission’s ultimate role is to elevate awareness and to inform better legislation. As part of its mission, the commission is tasked with helping the Department of Defense better understand and prepare for a world where AI might impact national security in unexpected ways.

Rice University computer scientists have overcome a major obstacle in the burgeoning artificial intelligence industry by showing it is possible to speed up deep learning technology without specialized acceleration hardware like graphics processing units (GPUs).

Google has released a neural-network-powered chatbot called Meena that it claims is better than any other chatbot out there.

Data slurp: Meena was trained on a whopping 341 gigabytes of public social-media chatter—8.5 times as much data as OpenAI’s GPT-2. Google says Meena can talk about pretty much anything, and can even make up (bad) jokes.

Why it matters: Open-ended conversation that covers a wide range of topics is hard, and most chatbots can’t keep up. At some point most say things that make no sense or reveal a lack of basic knowledge about the world. A chatbot that avoids such mistakes will go a long way toward making AIs feel more human, and make characters in video games more lifelike.

Some forms of autonomous vehicle watch the road ahead using built-in cameras. Maintaining an accurate estimate of the camera’s orientation while driving is therefore, in some systems, key to letting these vehicles out on public roads. Now, scientists in Korea have developed what they say is an accurate and efficient camera-orientation estimation method that could enable such vehicles to navigate safely over long distances.
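The article doesn't detail the Korean team's algorithm, but the underlying vanishing-point principle is standard: the vanishing point of the lane markings is the image of the road's forward direction, so back-projecting it through the camera intrinsics recovers the camera's pitch and yaw relative to the road. The intrinsic matrix and pixel coordinates below are made-up example values.

```python
import numpy as np

# Minimal sketch of the general vanishing-point principle (not the
# specific algorithm in the article): back-project a detected vanishing
# point through the intrinsic matrix K to get a 3-D viewing ray, then
# read the camera's pitch and yaw relative to the road off that ray.

K = np.array([[800.0,   0.0, 640.0],    # fx,  0, cx  (example values)
              [  0.0, 800.0, 360.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def pitch_yaw_from_vp(vp_uv, K):
    """Back-project a vanishing point (u, v) to a ray and return
    (pitch, yaw) in degrees; down and right are positive."""
    u, v = vp_uv
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    x, y, z = ray / np.linalg.norm(ray)
    yaw = float(np.degrees(np.arctan2(x, z)))
    pitch = float(np.degrees(np.arctan2(y, z)))
    return pitch, yaw

# A vanishing point exactly at the principal point means the camera
# looks straight down the road: zero pitch, zero yaw.
print(pitch_yaw_from_vp((640.0, 360.0), K))  # -> (0.0, 0.0)

# A vanishing point 80 px right of center implies the camera is yawed
# a few degrees away from the road's forward direction.
pitch, yaw = pitch_yaw_from_vp((720.0, 360.0), K)
```

The practical difficulty, and presumably the focus of the new method, is detecting the vanishing point quickly and robustly from real road imagery while the vehicle moves.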


A fast camera-orientation estimation algorithm that pinpoints vanishing points could make self-driving cars safer.

John Wallace
