
Humans find AI to be a frustrating teammate when playing a cooperative game together, posing challenges for “teaming intelligence,” study shows.

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These “superhuman” AIs are unmatched competitors, but a harder challenge than competing against humans may be collaborating with them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

Trust is a very important aspect of human-robot interactions, as it could play a crucial role in the widespread implementation of robots in real-world settings. Nonetheless, trust is a considerably complex construct that can depend on psychological and environmental factors.

Understanding a robot’s decision-making processes and why it performs specific behaviors is not always easy. The ability to talk to itself while completing a given task could thus make a robot more transparent, allowing its users to understand the different processes, considerations and calculations that lead to specific conclusions.

This robotic arm fuses data from a camera and antenna to locate and retrieve items, even if they are buried under a pile.

A busy commuter is ready to walk out the door, only to realize they’ve misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys.

Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.
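The article does not detail RFusion's algorithm, but the fusion idea it describes can be illustrated with a toy sketch: score each candidate location by combining the camera detector's confidence with how well the candidate agrees with a coarse RF-based position fix. Everything below (the names, the Gaussian error model, the weights) is a hypothetical illustration, not RFusion's actual method.

```python
# Hypothetical sketch of camera + RF fusion (NOT RFusion's real algorithm).
# All names, values, and the Gaussian error model are assumptions for
# illustration only.
from dataclasses import dataclass
import math

@dataclass
class Detection:
    x: float            # candidate location in metres
    y: float
    visual_conf: float  # camera detector confidence in [0, 1]

def rf_likelihood(x: float, y: float, rf_x: float, rf_y: float,
                  sigma: float = 0.3) -> float:
    """Score agreement between a candidate and the RF tag position fix.

    Models RF localization error as an isotropic Gaussian with an assumed
    standard deviation `sigma` (metres)."""
    d2 = (x - rf_x) ** 2 + (y - rf_y) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def rank_candidates(detections, rf_x, rf_y):
    """Fuse the two cues by multiplying scores, then sort best-first."""
    scored = [(d.visual_conf * rf_likelihood(d.x, d.y, rf_x, rf_y), d)
              for d in detections]
    return sorted(scored, key=lambda s: s[0], reverse=True)

candidates = [
    Detection(0.10, 0.05, 0.90),  # slightly less confident, near the RF fix
    Detection(1.50, 1.20, 0.95),  # very confident visually, but far away
]
best_score, best = rank_candidates(candidates, rf_x=0.0, rf_y=0.0)[0]
print(best.x, best.y)  # the near-tag candidate wins despite lower visual_conf
```

The point of the multiplicative fusion is that either cue alone can be fooled: a visually convincing object far from the RF fix, or an RF ghost with no visual support, both score low, while a candidate supported by both cues ranks first.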

Enterprises of all sizes and across virtually all markets are scrambling to augment their analytics capabilities with artificial intelligence (AI) in the hopes of gaining a competitive advantage in a challenging post-pandemic economy.

Plenty of anecdotal evidence points to AI’s ability to improve analytics, but there seems to be less conversation around how it should be implemented in production environments, let alone how organizations should view it strategically over the long term.

Brief summary of this episode:

- (1:35) OpenAI: AI beneficial for all
- (7:15) AI as an existential risk
- (11:47) Safe AI development
- (16:16) Neuralink: “If you can’t beat ’em, join ’em”
- (19:04) The future of AI

Join this channel to get access to perks:
https://www.youtube.com/channel/UCDukC60SYLlPwdU9CWPGx9Q/join

Neura Pod is a series covering topics related to Neuralink, Inc., exploring subjects such as brain-machine interfaces, brain injuries, and artificial intelligence. Host Ryan Tanaka synthesizes information and opinions, and conducts interviews, to make it easy to learn about Neuralink and its future.

It can be difficult to distinguish between substance and hype in the field of artificial intelligence. In order to stay grounded, it is important to step back from time to time and ask a simple question: what has AI actually accomplished or enabled that makes a difference in the real world?

This summer, DeepMind delivered the strongest answer yet to that question in the decades-long history of AI research: AlphaFold, a software platform that will revolutionize our understanding of biology.

In 1972, in his acceptance speech for the Nobel Prize in Chemistry, Christian Anfinsen made a historic prediction: it should in principle be possible to determine a protein’s three-dimensional shape based solely on the one-dimensional string of molecules that compose it.

Looks kind of like a computer printer on big wheels.


We got to see Amazon’s latest gadget early — it’s the long-rumored robot.

We spent about 50 minutes with Astro. Like home robots of the past (some have fizzled out; others, like Anki’s Vector and Cozmo, have gone extinct), it focuses on peace of mind, with home monitoring and check-ins on household members, along with providing entertainment. At the heart of Astro are Amazon’s smart assistant and artificial intelligence chops. Yes, Alexa is on board, though with an “Astro” wake word, so you can ask for the weather, pose a question, or send the robot to a specific room.

So let’s break down Astro, what it can do and what we think after seeing this unique robot up close.

Google announced today that it will apply AI advancements, including a new technology called Multitask Unified Model (MUM), to improve Google Search. At its Search On event, the company demonstrated new features, including some that leverage MUM, to better connect web searchers with the content they’re looking for, while also making web search feel more natural and intuitive.

One of the features being launched is called “Things to know,” which will focus on making it easier for people to understand new topics they’re searching for. This feature understands how people typically explore various topics and then shows web searchers the aspects of the topic people are most likely to look at first.