
L3Harris wins contract to apply artificial intelligence to remotely sensed data

SAN FRANCISCO – L3Harris Technologies will help the U.S. Defense Department extract information and insight from satellite and airborne imagery under a three-year U.S. Army Research Laboratory contract.

L3Harris will develop and demonstrate an artificial intelligence-machine learning interface for Defense Department applications under the multimillion-dollar contract announced Oct. 26.

“L3Harris will assist the Department of Defense with the integration of artificial intelligence and machine learning capabilities and technologies,” Stacey Casella, general manager for L3Harris’ Geospatial Processing and Analytics business, told SpaceNews. L3Harris will help the Defense Department embed artificial intelligence and machine learning in its workflows “to ultimately accelerate our ability to extract usable intelligence from the pretty expansive set of remotely sensed data that we have available today from spaceborne and airborne assets,” she added.

Science-fiction master Ted Chiang explores the rights and wrongs of AI

What rights does a robot have? If our machines become intelligent in the science-fiction way, that’s likely to become a complicated question — and the humans who nurture those robots just might take their side.

Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle’s tech community, doesn’t back away from such questions. They spark the thought experiments that generate award-winning novellas like “The Lifecycle of Software Objects,” and inspire Hollywood movies like “Arrival.”

Chiang’s soulful short stories have earned him kudos from the likes of The New Yorker, which has called him “one of the most influential science-fiction writers of his generation.” During this year’s pandemic-plagued summer, he joined the Museum of Pop Culture’s Science Fiction and Fantasy Hall of Fame. And this week, he’s receiving an award from the Arthur C. Clarke Foundation for employing imagination in service to society.

OpenAI Plays Hide and Seek…and Breaks The Game! 🤖

Circa 2019.



📝 The paper “Emergent Tool Use from Multi-Agent Interaction” is available here:
https://openai.com/blog/emergent-tool-use/


IBM, others run pilot test of AI suitcase for guiding blind

A group of five companies, including the Japanese unit of IBM Corp., is developing an artificial intelligence suitcase to help visually impaired people travel independently, and a pilot test of a prototype was conducted at an airport in Japan earlier this month.

The small navigation robot, which is able to plan an optimal route to a destination based on the user’s location and map data, uses multiple sensors to assess its surroundings and AI functionality to avoid bumping into obstacles, according to the companies.
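
The article does not say how the robot's planner works internally, but planning "an optimal route to a destination" over map data while steering around obstacles is commonly framed as graph search over an occupancy grid. A minimal sketch of that idea, using A* on a toy grid (purely illustrative; the function and grid below are assumptions, not the AI suitcase's actual software):

```python
import heapq

def astar(grid, start, goal):
    """A* path search on a 2D occupancy grid (0 = free, 1 = obstacle).

    Illustrative only: the real robot fuses live sensor data with map data
    in ways the article does not describe.
    """
    def h(a, b):  # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = (nr, nc)
                heapq.heappush(frontier,
                               (cost + 1 + h(step, goal), cost + 1, step, path + [step]))
    return None  # no route to the destination

# Toy map: plan a route around a single obstacle in the middle.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 2)))
```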

In the pilot test held on Nov. 2, the AI suitcase successfully navigated to an All Nippon Airways departure counter after receiving a command from Chieko Asakawa, a visually impaired IBM Fellow who is overseeing the product’s development.

Computer Vision: A Key Concept to Solve Many Problems Related to Image Data

This article was published as a part of the Data Science Blogathon.

Introduction

Computer vision has grown out of its emerging stage, and the results are incredibly useful across many applications. It is in our mobile phone cameras, which can recognize faces. It is in self-driving cars, which recognize traffic signals, signs, and pedestrians. It is also in industrial robots, which monitor for problems and navigate around co-workers.
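
As a concrete illustration of the face use case, here is a minimal face-detection sketch with OpenCV's bundled Haar cascade (it assumes the opencv-python package and a local image file named photo.jpg, which is a placeholder; this is detection only, not the full recognition pipeline a phone camera runs):

```python
import cv2

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# "photo.jpg" is a placeholder path, not a file referenced in the article.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a bounding box around each one.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```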

This could lead to the next big breakthrough in common sense AI

You’ve probably heard us say this countless times: GPT-3, the gargantuan AI that spews uncannily human-like language, is a marvel. It’s also largely a mirage. You can tell with a simple trick: Ask it the color of sheep, and it will suggest “black” as often as “white”—reflecting the phrase “black sheep” in our vernacular.
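
GPT-3 itself is not publicly downloadable, but the same kind of probe can be run against an open masked language model as a rough stand-in; a minimal sketch using the Hugging Face transformers fill-mask pipeline with bert-base-uncased (an assumption for illustration, not the experiment described in the article):

```python
from transformers import pipeline

# Stand-in probe: ask an open masked language model what color it
# associates with sheep, mirroring the trick described above.
fill = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill("Most sheep are [MASK] in color."):
    print(f"{candidate['token_str']:>10}  p={candidate['score']:.3f}")
```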

That’s the problem with language models: because they’re only trained on text, they lack common sense. Now researchers from the University of North Carolina, Chapel Hill, have designed a new technique to change that. They call it “vokenization,” and it gives language models like GPT-3 the ability to “see.”

It’s not the first time people have sought to combine language models with computer vision. This is actually a rapidly growing area of AI research. The idea is that both types of AI have different strengths. Language models like GPT-3 are trained through unsupervised learning, which requires no manual data labeling, making them easy to scale. Image models like object recognition systems, by contrast, learn more directly from reality. In other words, their understanding doesn’t rely on the kind of abstraction of the world that text provides. They can “see” from pictures of sheep that they are in fact white.
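
The paper's core step is to pair each token with a related image, a "voken," by scoring token-image relatedness in a shared embedding space and retrieving the best match. A minimal sketch of that retrieval step, with random tensors standing in for real text and image encoders (the function name and dimensions are assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def retrieve_vokens(token_embeds, image_embeds):
    """For each token embedding, pick the most related image ("voken").

    token_embeds: (num_tokens, d) tensor from a text encoder (assumed).
    image_embeds: (num_images, d) tensor from an image encoder (assumed).
    Returns the index of the best-matching image per token.
    """
    # Cosine similarity between every token and every candidate image.
    tok = F.normalize(token_embeds, dim=-1)
    img = F.normalize(image_embeds, dim=-1)
    similarity = tok @ img.T               # (num_tokens, num_images)
    return similarity.argmax(dim=-1)       # one image index per token

# Toy example with random embeddings standing in for real encoders.
tokens = torch.randn(6, 512)    # e.g. embeddings for "the sheep grazed ..."
images = torch.randn(100, 512)  # candidate image set
print(retrieve_vokens(tokens, images))
```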

Facebook’s New AI System Can Pass Multiple-Choice Intelligence Tests

Recently, a team of researchers from Facebook AI and Tel Aviv University proposed an AI system that solves the multiple-choice intelligence test, Raven’s Progressive Matrices. The proposed AI system is a neural network model that combines multiple advances in generative models, including employing multiple pathways through the same network.

Raven’s Progressive Matrices, also known as Raven’s Matrices, are multiple-choice intelligence tests. The test is used to measure abstract reasoning and is regarded as a non-verbal estimate of fluid intelligence.

In this test, a person tries to fill in the missing piece of a 3x3 grid of abstract images. According to the researchers, most prior work on the problem has focused entirely on choosing the right answer from the given choices. In this research, however, the researchers focused on generating a correct answer from the grid alone, without seeing the choices.
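
Under that framing, the task becomes conditional generation: encode the eight visible panels and decode a ninth, generated panel. A toy sketch of that setup (a stand-in only; the published model combines multiple generative pathways and is far more involved):

```python
import torch
import torch.nn as nn

class PanelGenerator(nn.Module):
    """Toy model: encode 8 context panels, generate the missing 9th panel.

    Illustrates the problem setup only, not the Facebook AI / Tel Aviv
    University architecture.
    """
    def __init__(self, panel_size=64, latent_dim=256):
        super().__init__()
        self.panel_size = panel_size
        self.encoder = nn.Sequential(
            nn.Flatten(),  # stack the 8 panels into one feature vector
            nn.Linear(8 * panel_size * panel_size, latent_dim),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, panel_size * panel_size),
            nn.Sigmoid(),  # grayscale panel with values in [0, 1]
        )

    def forward(self, context_panels):  # (batch, 8, H, W)
        latent = self.encoder(context_panels)
        out = self.decoder(latent)
        return out.view(-1, 1, self.panel_size, self.panel_size)

# Generate a candidate answer panel for one puzzle.
model = PanelGenerator()
context = torch.rand(1, 8, 64, 64)  # the eight visible panels
answer = model(context)             # the generated ninth panel
print(answer.shape)                 # torch.Size([1, 1, 64, 64])
```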

Facebook Wants to Make Smart Robots to Explore Every Nook and Cranny of Your Home

If Facebook’s AI research objectives are successful, it may not be long before home assistants take on a whole new range of capabilities. Last week the company announced new work focused on advancing what it calls “embodied AI”: basically, a smart robot that will be able to move around your house to help you remember things, find things, and maybe even do things.

Robots That Hear, Home Assistants That See

In Facebook’s blog post about audio-visual navigation for embodied AI, the authors point out that most of today’s robots are “deaf”; they move through spaces based purely on visual perception. The company’s new research aims to train AI using both visual and audio data, letting smart robots detect and follow objects that make noise as well as use sounds to understand a physical space.
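
One common way to realize that combination is to encode the visual and audio observations separately and fuse the embeddings before a navigation policy picks an action. A minimal sketch under that assumption (the class, feature sizes, and action set below are illustrative, not Facebook's actual embodied-AI code):

```python
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    """Toy fusion policy: separate audio and vision encoders, joint action head.

    Illustrates the general audio-visual navigation idea only; Facebook's
    embodied-AI work uses its own simulators, encoders, and training setup.
    """
    def __init__(self, num_actions=4):
        super().__init__()
        self.vision = nn.Sequential(nn.Linear(2048, 256), nn.ReLU())  # e.g. CNN image features
        self.audio = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())   # e.g. spectrogram features
        self.policy = nn.Linear(512, num_actions)  # forward / left / right / stop

    def forward(self, vision_feats, audio_feats):
        fused = torch.cat([self.vision(vision_feats), self.audio(audio_feats)], dim=-1)
        return self.policy(fused)  # action logits

# One decision step on random stand-in features.
policy = AudioVisualPolicy()
logits = policy(torch.randn(1, 2048), torch.randn(1, 1024))
print(logits.argmax(dim=-1))  # chosen action for this step
```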
