
No Autonomous Trucks? Wait, What? ‘…it resembled conventional human-operated transportation vehicles, but with one exception — there was no driver’s cabin.’ — Philip K. Dick, 1955.

Elon Musk’s Traffic Tunnel Challenge Is Boring ‘The car vibrated… threading the maze of local tubes.’ — Jack Vance, 1954.

HVSD, Kitty Hawk’s Electric Plane Very quiet commuter plane offers VTOL service.

SHANGHAI (Reuters) — Researchers at one of China’s top universities have designed a robot they say could help save lives on the frontline during the coronavirus outbreak.

The machine consists of a robotic arm on wheels that can perform ultrasounds, take mouth swabs and listen to sounds made by a patient’s organs, usually done with a stethoscope.

Such tasks are normally carried out by doctors in person. But with this robot, which is fitted with cameras, medical personnel do not need to be in the same room as the patient, and could even be in a different city.

Medical robotics expert Guang-Zhong Yang calls for a global effort to develop new types of robots for fighting infectious diseases.


When I reached Professor Guang-Zhong Yang on the phone last week, he was cooped up in a hotel room in Shanghai, where he had self-isolated after returning from a trip abroad. I wanted to hear from Yang, a widely respected figure in the robotics community, about the role that robots are playing in fighting the coronavirus pandemic. He’d been monitoring the situation from his room over the previous week, and during that time his only visitors were a hotel employee, who took his temperature twice a day, and a small wheeled robot, which delivered his meals autonomously.

An IEEE Fellow and founding editor of the journal Science Robotics, Yang is the former director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London. More recently, he became the founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University, often called the MIT of China. Yang wants to build the new institute into a robotics powerhouse, recruiting 500 faculty members and graduate students over the next three years to explore areas like surgical and rehabilitation robots, image-guided systems, and precision mechatronics.

“I ran a lot of the operations for the institute from my hotel room using Zoom,” he told me.

The super-charged face scanning tech is costing the military at least $4.3 million.


The United States Army is currently building a super-charged facial recognition system — tech that could be ready for action as soon as next year.

The system, as described in a new OneZero story, analyzes infrared images of a person's face to see if they're a match for anyone on a government watchlist, such as a known terrorist. Not only will the finished system reportedly work in the dark, through car windshields, and in less-than-clear weather, but it will also be able to ID individuals from up to 500 meters away.
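The OneZero story does not detail the matching algorithm itself, but a common pattern in face recognition generally is to convert each face image into an embedding vector and compare it against a gallery of known identities. The sketch below illustrates only that general idea; the match_against_watchlist helper, the 512-dimensional embeddings, and the 0.6 threshold are placeholder assumptions, not details of the Army's system.

```python
# Illustrative sketch of watchlist matching by embedding similarity.
# This is NOT the Army's system; the embedding size, the random
# placeholder vectors, and the 0.6 threshold are all assumptions.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe_embedding, watchlist, threshold=0.6):
    """Return the best-matching watchlist identity and its score,
    or None if no similarity clears the threshold."""
    best_name, best_score = None, -1.0
    for name, ref_embedding in watchlist.items():
        score = cosine_similarity(probe_embedding, ref_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

# Usage with random placeholder embeddings standing in for the output
# of a (hypothetical) infrared face-embedding model:
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=512), "person_b": rng.normal(size=512)}
probe = watchlist["person_a"] + 0.05 * rng.normal(size=512)  # noisy re-capture
print(match_against_watchlist(probe, watchlist))
```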

Army Agreements

OneZero tracked down two military contracts for the development of the tech.

DARPA, the Defense Advanced Research Projects Agency responsible for developing emerging technologies for the U.S. military, is building a new high-tech spacecraft, and it's armed. In an age of Space Force and burgeoning threats like hunter-killer satellites, this might not sound too surprising. But that's not the kind of armed we mean. DARPA's new spacecraft, currently "in the thick of it" when it comes to development, is armed. As in, it has arms. Like the ones you use for grabbing things.

Armed robots aren’t new. Mechanical robot arms are increasingly widespread here on Earth. Robot arms have been used to carry out complex surgery and flip burgers. Attached to undersea exploration vehicles, they’ve been used to probe submerged wrecks. They’ve been used to open doors, defuse bombs, and decommission nuclear power plants. They’re pretty darn versatile. But space is another matter entirely.

AI on the Mars rover is used to help it navigate the planet: the onboard computer can make multiple changes to the rover's course every minute. The technology behind the Mars rovers is very similar to that used by self-driving cars. The major difference is that the rover has to navigate more complicated terrain, though it does not have other vehicular or pedestrian traffic to take into account. That terrain is analyzed by the rover's computer vision systems as it moves. If a terrain problem is encountered, the autonomous system changes the rover's course to avoid it or adjusts its navigation.
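The article describes this loop only at a high level: perceive the terrain, flag hazards, and adjust course. Purely as an illustration of that loop, here is a toy grid-world sketch in Python; the HAZARD set, the step rule, and the detour logic are invented for this example and bear no relation to NASA's actual autonomous navigation software.

```python
# Minimal sketch of the perceive -> check -> adjust loop described above.
# The hazard grid and the simple "sidestep around it" rule are invented
# for illustration; real rover autonav is far more sophisticated.
HAZARD = {(2, 3), (3, 3), (4, 3)}   # grid cells the "vision system" flagged

def is_safe(cell):
    return cell not in HAZARD

def next_step(pos, goal):
    """Take one step toward the goal, detouring around flagged terrain."""
    x, y = pos
    gx, gy = goal
    step = (x + (gx > x) - (gx < x), y + (gy > y) - (gy < y))
    if is_safe(step):
        return step
    # Terrain problem encountered: try sidestepping before moving on.
    for detour in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if is_safe(detour):
            return detour
    return pos  # stay put and wait for a replanned route

pos, goal = (0, 0), (6, 6)
path = [pos]
for _ in range(20):
    pos = next_step(pos, goal)
    path.append(pos)
    if pos == goal:
        break
print(path)
```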

AI and Space: Made for Each Other

Over the last few years we have seen a sustained, large-scale effort to commercialize space, and several companies are even looking to start tourist trips into space. Artificial intelligence is helping to make space commercialization possible and to make space a safer environment in which to operate. The various benefits of AI in space work together to enable further venturing into the unknown.

Dinorah Delfin has unleashed another exceptional edition of Immortalist Magazine. One of the best aspects is the dueling articles on the future states of Artificial General Intelligence (AGI).

Daniel Faggella constructs another dismal, dreary, depressing destruction of hope for a benevolent artificial general intelligence. Emphasis on depressing. He has a wonderful way of creating a series of logical roadblocks to any optimism that there is a future with a compassionate artificial general intelligence. But he seems to be arguing against a contention that probably nobody holds: that an artificial general intelligence is certain to be benevolent. Most thinking humanoids will agree with him that there is no such certainty. As he points out forcefully in his concluding and strongest rebuttal: no one knows what the future holds.

But no one is looking for absolute certainty in the far future. Transhumanists in general are looking for a path forward to an existence full of superhappiness, superintelligence and superlongevity.

After a prolonged winter, artificial intelligence is experiencing a scorching summer mainly thanks to advances in deep learning and artificial neural networks. To be more precise, the renewed interest in deep learning is largely due to the success of convolutional neural networks (CNNs), a neural network structure that is especially good at dealing with visual data.

But what if I told you that CNNs are fundamentally flawed? That was what Geoffrey Hinton, one of the pioneers of deep learning, talked about in his keynote speech at the AAAI conference, one of the main yearly AI conferences.
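For readers who have not worked with the architecture under discussion, here is a minimal convolutional network written in PyTorch (the framework choice is mine; the article names none). It shows the typical convolution, pooling, and classifier stack that makes CNNs effective on visual data; the layer sizes are arbitrary and illustrative, not anything Hinton presented.

```python
# Minimal CNN for 28x28 grayscale images, written purely to illustrate
# the "convolution -> pooling -> classifier" structure the article
# refers to. Layer sizes are arbitrary choices for the example.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 28 -> 14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 14 -> 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
logits = model(torch.randn(4, 1, 28, 28))  # a batch of 4 fake images
print(logits.shape)  # torch.Size([4, 10])
```

In broad terms, Hinton's critique targets how stacks like this, and pooling in particular, represent the spatial relationships between the features they detect.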

As of Thursday afternoon, there are 10,985 confirmed cases of COVID-19 in the United States and zero FDA-approved drugs to treat the infection.

While DARPA works on short-term “firebreak” countermeasures and computational scientists track sources of new cases of the virus, a host of drug discovery companies are putting their AI technologies to work predicting which existing drugs, or brand-new drug-like molecules, could treat the virus.
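The story does not say which techniques these companies use, but one classical ingredient in computational drug repurposing is ranking a library of known compounds by chemical similarity to a molecule with known activity. The sketch below shows only that generic idea, using RDKit as an assumed dependency; the reference and candidate molecules are everyday placeholder compounds (aspirin, caffeine, ibuprofen) chosen so the code runs, and nothing here refers to real COVID-19 therapeutics.

```python
# Generic similarity-screening sketch with RDKit (assumed installed).
# The molecules below are everyday placeholders used only so the
# example runs; this is not any company's pipeline and says nothing
# about real COVID-19 candidates.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    """Morgan (circular) fingerprint for one molecule given as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

reference = fingerprint("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, placeholder "active"
library = {
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}

# Rank candidates by Tanimoto similarity to the reference compound.
ranked = sorted(
    ((name, DataStructs.TanimotoSimilarity(reference, fingerprint(smi)))
     for name, smi in library.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```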