Artificial intelligence (AI) was a mega-trend in 2020, and the current pandemic has only accelerated its relevance and adoption, along with that of machine learning. Here we look at some of the top AI trends for 2021.
Recommending content, powering chatbots, trading stocks, detecting medical conditions, and driving cars: these are only a small handful of the most well-known uses of artificial intelligence. Yet there is one use that, despite sitting on the margins for much of AI’s recent history, now threatens to grow significantly in prominence. This is AI’s ability to classify and rank people, separating them according to whether they’re “good” or “bad” for certain purposes.
At the moment, Western civilization hasn’t reached the point where AI-based systems are used en masse to categorize us according to whether we’re likely to be “good” employees, “good” customers, “good” dates and “good” citizens. Nonetheless, all available indicators suggest that we’re moving in this direction, and that this will happen regardless of whether Western nations consciously decide to construct the kinds of social credit systems currently being developed by China.
This risk was highlighted at the end of September, when it emerged that an AI-powered system was being used to screen job candidates in the U.K. for the first time. Developed by the U.S.-based HireVue, it harnesses machine learning to evaluate the facial expressions, language and tone of voice of job applicants, who are filmed via smartphone or laptop and quizzed with an identical set of interview questions. HireVue’s platform then filters out the “best” applicants by comparing the 25,000 pieces of data taken from each applicant’s video against those collected from the interviews of existing “model” employees.
Drones typically navigate in one of four ways: they use GPS or other beacons, they accept guidance instructions from a computer, they navigate from a stored map, or they are flown by an expert in control.
What do you do when none of the four is possible?
You put AI on the drone and it flies itself with no outside source of data, no built-in mapping, and no operator in control.
Japan’s convenience stores are turning to robots to solve their labor shortage.
One such robot on wheels is seven feet tall, is kitted out with cameras, microphones and sensors, and uses the three “fingers” on its hands to stock supermarket shelves with products such as bottled drinks, cans and rice bowls.
Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a “major, long-standing obstacle to increasing AI capabilities” by drawing inspiration from a human brain memory mechanism known as “replay.”
First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks, “surprisingly efficiently,” from “catastrophic forgetting”: upon learning new lessons, the networks forget what they had learned before.
Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting.
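The core idea behind replay-style methods is to interleave a small number of stored (or regenerated) examples from earlier tasks into each new training batch, so that gradients on the new task do not overwrite old knowledge. The sketch below is an illustrative experience-replay buffer, not the authors’ brain-inspired method (which regenerates rather than stores past data); the class and function names are hypothetical.

```python
import numpy as np

class ReplayBuffer:
    """Reservoir-sampled store of past (x, y) examples (illustrative sketch)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.x, self.y = [], []
        self.seen = 0  # total examples offered to the buffer
        self.rng = np.random.default_rng(seed)

    def add(self, x, y):
        """Reservoir sampling keeps each seen example with equal probability."""
        self.seen += 1
        if len(self.x) < self.capacity:
            self.x.append(x)
            self.y.append(y)
        else:
            j = self.rng.integers(self.seen)
            if j < self.capacity:
                self.x[j], self.y[j] = x, y

    def sample(self, n):
        """Draw up to n stored examples uniformly at random."""
        idx = self.rng.integers(len(self.x), size=min(n, len(self.x)))
        return [self.x[i] for i in idx], [self.y[i] for i in idx]

def mixed_batch(new_x, new_y, buffer, replay_n):
    """Interleave fresh task data with replayed examples from earlier tasks."""
    rx, ry = buffer.sample(replay_n)
    return new_x + rx, new_y + ry
```

In a continual-learning loop, every batch fed to the optimizer would be built with `mixed_batch`, so each update step sees both the current task and a sample of the past.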
Artificial intelligence researchers at North Carolina State University have improved the performance of deep neural networks by combining feature normalization and feature attention modules into a single module that they call attentive normalization (AN). The hybrid module improves the accuracy of the system significantly, while using negligible extra computational power.
“Feature normalization is a crucial element of training deep neural networks, and feature attention is equally important for helping networks highlight which features learned from raw data are most important for accomplishing a given task,” says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at NC State. “But they have mostly been treated separately. We found that combining them made them more efficient and effective.”
To test their AN module, the researchers plugged it into four of the most widely used neural network architectures: ResNets, DenseNets, MobileNetsV2 and AOGNets. They then tested the networks against two industry standard benchmarks: the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark.
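In rough terms, the AN idea is to normalize features as usual, but replace the single learned affine transform with a small bank of affine parameter pairs mixed by attention weights computed from the features themselves. The sketch below is a simplified numpy illustration under that reading, not the published implementation; all shapes and names (`w_att`, the softmax mixture) are assumptions for the example.

```python
import numpy as np

def attentive_norm(x, gammas, betas, w_att, eps=1e-5):
    """Simplified attentive-normalization sketch on an (N, C, H, W) array.

    gammas, betas: (K, C) bank of K candidate affine parameter pairs.
    w_att: (C, K) weights of a tiny attention head (hypothetical shape).
    """
    # Standard feature normalization (batch-norm style, per channel).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)

    # Attention: pool each instance's channels, project to K mixture logits.
    pooled = x.mean(axis=(2, 3))                         # (N, C)
    logits = pooled @ w_att                              # (N, K)
    lam = np.exp(logits - logits.max(axis=1, keepdims=True))
    lam = lam / lam.sum(axis=1, keepdims=True)           # softmax over K

    # Instance-specific affine parameters as attention-weighted mixtures.
    gamma = lam @ gammas                                 # (N, C)
    beta = lam @ betas                                   # (N, C)
    return gamma[:, :, None, None] * x_hat + beta[:, :, None, None]
```

Because the mixture weights depend on the input, each instance effectively gets its own affine re-calibration, which is how the single module plays both the normalization and attention roles described above.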
There are a variety of complementary observations that could be used in the search for life in extraterrestrial settings. At the molecular scale, patterns in the distribution of organics could provide powerful evidence of a biotic component. In order to observe these molecular biosignatures during spaceflight missions, it is necessary to perform separation science in situ. Microchip electrophoresis (ME) is ideally suited for this task. Although this technique is readily miniaturized and numerous instruments have been developed over the last 3 decades, to date, all lack the automation capabilities needed for future missions of exploration. We have developed a portable, automated, battery-powered, and remotely operated ME instrument coupled to laser-induced fluorescence detection. This system contains all the necessary hardware and software interfaces for end-to-end functionality. Here, we report the first application of the system for amino acid analysis coupled to an extraction unit in order to demonstrate automated sample-to-data operation. The system was remotely operated aboard a rover during a simulated Mars mission in the Atacama Desert, Chile. This is the first demonstration of a fully automated ME analysis of soil samples relevant to planetary exploration. This validation is a critical milestone in the advancement of this technology for future implementation on a spaceflight mission.