The super-charged face-scanning tech is costing the military at least $4.3 million.
The United States Army is currently building a super-charged facial recognition system — tech that could be ready for action as soon as next year.
The system, as described in a new OneZero story, analyzes infrared images of a person’s face to see if they’re a match for anyone on a government watchlist, such as a known terrorist. Not only will the finished system reportedly work in the dark, through car windshields, and even in less-than-clear weather conditions — but it’ll also be able to ID individuals from up to 500 meters away.
Army Agreements
OneZero tracked down two military contracts for the development of the tech.
DARPA, the Defense Advanced Research Projects Agency responsible for developing emerging technologies for the U.S. military, is building a new high-tech spacecraft — and it’s armed. In an age of Space Force and burgeoning threats like hunter-killer satellites, that might not sound too surprising. But it’s not armed in that sense. DARPA’s new spacecraft, currently “in the thick of it” when it comes to development, has arms. As in, mechanical arms. Like the ones you use for grabbing things.
Armed robots aren’t new. Mechanical robot arms are increasingly widespread here on Earth. Robot arms have been used to carry out complex surgery and flip burgers. Attached to undersea exploration vehicles, they’ve been used to probe submerged wrecks. They’ve been used to open doors, defuse bombs, and decommission nuclear power plants. They’re pretty darn versatile. But space is another matter entirely.
AI on the Mars rover is used to help it navigate the planet. The computer can make multiple changes to the rover’s course every minute. The technology behind the Mars rovers is very similar to that used by self-driving cars. The major difference is that the rover has to navigate more complicated terrain, though it has no other vehicular or pedestrian traffic to take into account. That complicated terrain is analyzed by the rover’s computer vision systems as it moves. If a terrain problem is encountered, the autonomous system alters the rover’s course to avoid it.
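The scan-and-adjust loop described above can be sketched in miniature. This is a hypothetical toy, not NASA’s flight software; the cost map, hazard threshold, and three-way steering choice are all invented for illustration:

```python
# Toy terrain-avoidance loop: the rover scores the cells ahead on a
# traversability cost map and steers toward the cheapest safe one.

HAZARD = 0.7  # traversal costs at or above this are treated as obstacles

def next_heading(cost_map, row, col):
    """Pick the safest of the three cells ahead (left, straight, right)."""
    candidates = {"left": col - 1, "straight": col, "right": col + 1}
    best, best_cost = None, float("inf")
    for heading, c in candidates.items():
        if 0 <= c < len(cost_map[row + 1]):
            cost = cost_map[row + 1][c]
            if cost < HAZARD and cost < best_cost:
                best, best_cost = heading, cost
    return best  # None means no safe option: stop and wait for replanning

# A small cost map: 0.1 = flat ground, 0.9 = boulder
terrain = [
    [0.1, 0.1, 0.1],
    [0.1, 0.9, 0.2],  # boulder dead ahead of column 1
]
print(next_heading(terrain, 0, 1))  # → "left": swerves around the boulder
```

Real rover planners evaluate many candidate driving arcs over a full traversability map rather than three cells, but the core idea is the same: score the terrain ahead and steer away from anything above a hazard threshold.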
AI and Space: Made for Each Other
Over the last few years, we have seen a sustained effort to commercialize space. Several companies are even looking to start tourist trips into space. Artificial intelligence is helping to make space commercialization a reality and to make space a safe environment in which to operate. The various benefits of AI in space all work together to enable further venturing into the unknown.
Dinorah Delfin has unleashed another exceptional edition of Immortalist Magazine. One of the best aspects is the dueling articles on the future states of Artificial General Intelligence (AGI).
Daniel Faggella constructs another dismal, dreary, depressing destruction of hope for a benevolent artificial general intelligence. Emphasis on depressing. He has a wonderful way of erecting a series of logical roadblocks to any optimism that there is a future with a compassionate artificial general intelligence. But he seems to be arguing against a contention that probably nobody holds. He is arguing that there is no certainty that an artificial general intelligence will be benevolent, and most thinking humanoids are going to agree with that perspective. As he points out forcefully in his concluding and strongest rebuttal: no one knows what the future holds.
But no one is looking for absolute certainty in the far future. Transhumanists in general are looking for a path forward to an existence full of superhappiness, superintelligence and superlongevity.
After a prolonged winter, artificial intelligence is experiencing a scorching summer mainly thanks to advances in deep learning and artificial neural networks. To be more precise, the renewed interest in deep learning is largely due to the success of convolutional neural networks (CNNs), a neural network structure that is especially good at dealing with visual data.
But what if I told you that CNNs are fundamentally flawed? That was what Geoffrey Hinton, one of the pioneers of deep learning, talked about in his keynote speech at the AAAI conference, one of the main yearly AI conferences.
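For readers who want a concrete picture of why CNNs handle visual data so well, here is a minimal valid-mode 2D convolution in plain Python (no framework; the edge-detector kernel is a standard textbook illustration, not something from Hinton’s talk). The same small kernel is reused at every image location (weight sharing), and each output value depends only on a local patch (a local receptive field):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Each output pixel sees only a kh x kw patch of the input
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge detector applied to an image with an edge down the middle
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]: the edge lights up
```

Because the kernel’s weights are shared across the whole image, the network needs far fewer parameters than a fully connected layer and detects the same feature wherever it appears. Hinton’s critique, in part, is about what this architecture *can’t* represent, such as part-whole relationships and viewpoint changes.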
As of Thursday afternoon, there are 10,985 confirmed cases of COVID-19 in the United States and zero FDA-approved drugs to treat the infection.
While DARPA works on short-term “firebreak” countermeasures and computational scientists track sources of new cases of the virus, a host of drug discovery companies are putting their AI technologies to work predicting which existing drugs, or brand-new drug-like molecules, could treat the virus.
In a preprint paper, Microsoft researchers describe a machine learning system that reasons out the correct actions to take directly from camera images. It’s trained via simulation and learns to independently navigate environments and conditions in the real world, including unseen situations, which makes it a fit for robots deployed in search and rescue missions. Someday, it could help those robots more quickly identify people in need of help.
“We wanted to push current technology to get closer to a human’s ability to interpret environmental cues, adapt to difficult conditions and operate autonomously,” wrote the researchers in a blog post published this week. “We were interested in exploring the question of what it would take to build autonomous systems that achieve similar performance levels.”
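The sim-to-real recipe described above (train in simulation, then handle unseen real-world conditions) is commonly implemented with domain randomization: perturbing simulated camera frames during training so the policy can’t overfit to one renderer’s exact pixels. A toy sketch of that idea, with hypothetical noise parameters that are not from Microsoft’s actual pipeline:

```python
import random

def randomize_observation(image, brightness_range=(0.7, 1.3), noise_std=0.05, rng=random):
    """Apply random brightness gain and per-pixel Gaussian noise to a
    simulated camera frame (pixel values assumed to be in [0, 1])."""
    gain = rng.uniform(*brightness_range)
    return [
        [min(1.0, max(0.0, px * gain + rng.gauss(0.0, noise_std))) for px in row]
        for row in image
    ]

frame = [[0.2, 0.5],
         [0.8, 1.0]]
augmented = randomize_observation(frame)
# Every training step sees a differently perturbed frame, so the learned
# policy must rely on robust cues rather than one simulator's exact pixels.
```

In a full pipeline, perturbations like these (plus texture, lighting, and layout changes inside the simulator itself) are applied on the fly while the navigation policy trains, which is what lets it transfer to conditions it never saw.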
I hadn’t seen anything about this thing in five-plus years, and it was pretty bad back then. Now they have it singing, although I’d like to just see it talking or trying to hold a conversation. Anyhow, the mouth of your future humanoid robot:
The Prayer is presented in the show “Neurons, Simulated Intelligence” at the Centre Pompidou, Paris, curated by Frédéric Migayrou and Camille Lenglois, from 26 February to 26 April 2020. The Prayer is an art installation that explores the supernatural through artificial intelligence in a long-term experimental setup. A robotic installation operates a talking mouth, part of a computer system that generates and voices prayers in every moment, created by the self-learning system itself, which uses deep learning to explore ‘the divine’, the supernatural, or ‘the noumenal’ as the mystery of ‘the unknown’. How would a divine epiphany appear to an artificial intelligence? The project could shed light on the difference between humans and AI machines in the debate about mind and matter, and it allows a speculative stance on the future of humans in the age of AI technology and AGI ambitions. Above: an anticipation of AI singing with AI-generated texts, since singing is a major religious practice.
The production is a collaboration with Regina Barzilay, Tianxiao Shen, and Enrico Santus (all MIT CSAIL); Amazon Polly; Bill and Will Sturgeon; Elchanan Mossel (MIT); Stefan Strauss; Chris Fitch; Brian Kane; Keith Welsh (Webster University); and Matthew Azevedo.
With extraordinary and special thanks to Hideyuki Sawada, Waseda University and Thanh Vo, The University of Danang, whose speech machine for the hearing impaired has been a point of departure for the design of the outer features of the artwork.
I have spent the past several years of my life desperately trying to warn humanity that the robots are coming to destroy us all, and everybody laughed at me. But this week—shortly after the T-1000 was seen smooching his miniature horse and donkey—a robot noodle chef has taken over soba-making duties at a Tokyo train station, so who’s laughing now? (The robots are laughing now.)