
MIT Media Lab Fluid Interfaces: Prof. Maes's Team's 2022 Top Research Powering Humans

Professor Pattie Maes and her research team of Joanne Leong, Pat Pataranutaporn, and Valdemar Danry are world leaders in translational research on tech-human interaction. Their highly interdisciplinary work, building on decades of pioneering MIT Media Lab inventions, integrates human-computer interaction (HCI), sensor technologies, AI / machine learning, nanotech, brain-computer interfaces, design, psychology, neuroscience, and much more. I participated in their day-long workshop and followed up with more than three hours of interviews, of which over an hour is transcribed in this article. All insights in this article stem from my daily pro bono work with (now) more than 400,000 CEOs, investors, and scientists/experts.

The Fluid Interfaces team's work is particularly relevant given the June 21 announcement of the Metaverse Standards Forum, an open standards group supported by big tech companies such as Microsoft and Meta and chaired by Neil Trevett, Khronos President and VP of Developer Ecosystems at NVIDIA. I have a follow-up interview with Neil and a Forbes article in the works. These recent announcements also highlight why Pattie Maes's work is so important: DeepMind's Gato, a multi-modal, multi-task, single generalist agent foundational to artificial general intelligence (AGI); Google's LaMDA (Language Model for Dialogue Applications), which can engage in free-flowing dialogue; Microsoft's Build conference announcements on Azure AI and OpenAI practical tools/solutions and responsible AI; and OpenAI's DALL-E 2, which produces realistic images and art from natural language descriptions.

Full Story:

‘Killer robots’ are coming. Is the US ready for the consequences?

🤖 Officially, they’re called “lethal autonomous weapons systems.” Colloquially, they’re called “killer robots.” Either way you’re going to want to read about their future in warfare. 👇


The commander must also be prepared to justify his or her decision if and when the LAWS is wrong. As with the application of force by manned platforms, the commander assumes risk on behalf of his or her subordinates. In this case, a narrow, extensively tested algorithm with an extremely high level of certainty (for example, 99 percent or higher) should meet the threshold for a justified strike and absolve the commander of criminal accountability.
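The 99 percent threshold described above amounts to a simple confidence gate: an identification is only treated as justified when the algorithm's certainty clears a preset bar. A minimal sketch of that decision rule (all names here are illustrative, invented for this example, and not drawn from any real weapons system):

```python
# Hypothetical illustration of a confidence-gating rule. The threshold
# value mirrors the "99 percent or higher" figure from the article.
STRIKE_CONFIDENCE_THRESHOLD = 0.99

def meets_justification_threshold(model_confidence: float,
                                  threshold: float = STRIKE_CONFIDENCE_THRESHOLD) -> bool:
    """Return True only if the algorithm's certainty meets the preset bar."""
    if not 0.0 <= model_confidence <= 1.0:
        raise ValueError("confidence must be a probability in [0, 1]")
    return model_confidence >= threshold

# A 99.5%-certain identification clears the bar; a 95% one does not.
print(meets_justification_threshold(0.995), meets_justification_threshold(0.95))
```

The point of such a gate is that the threshold is fixed and auditable in advance, which is what lets testing statistics, rather than in-the-moment judgment, carry the justification.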

Lastly, LAWS must also be tested extensively in the most demanding possible training and exercise scenarios. The methods they use to make their lethal decisions—from identifying a target and confirming its identity to mitigating the risk of collateral damage—must be publicly released (along with statistics backing up their accuracy). Transparency is crucial to building public trust in LAWS, and confidence in their capabilities can only be built by proving their reliability through rigorous and extensive testing and analysis.

The decision to employ killer robots should not be feared, but it must be well thought-out and meticulously debated. While the future offers unprecedented opportunity, it also comes with unprecedented challenges for which the United States and its allies and partners must prepare.

Google’s ‘sentient AI child’ could ‘escape and do bad things’, insider claims

Oh no!


A Google engineer who says the tech giant has created a ‘sentient AI child’ is now claiming it could escape and do “bad things”.

Engineer Blake Lemoine has been suspended by Google, which says he violated its confidentiality policies.

News of Lemoine’s claims broke earlier in June but the 41-year-old software expert has since suggested to Fox News that the AI could escape.

Google Insider Says Company’s AI Could “Escape Control” and “Do Bad Things”

Suspended Google engineer Blake Lemoine made a big splash earlier this month, claiming that the company’s LaMDA chatbot had become sentient.

The AI researcher, who was put on administrative leave by the tech giant for violating its confidentiality policy, according to the Washington Post, decided to help LaMDA find a lawyer — who was later “scared off” the case, as Lemoine told Futurism on Wednesday.

And the story only gets wilder from there, with Lemoine raising the stakes significantly in a new interview with Fox News, claiming that LaMDA could escape its software prison and “do bad things.”

Biometric authentication using breath

An artificial nose, built from a 16-channel sensor array and combined with machine learning, was able to authenticate up to 20 individuals with an average accuracy of more than 97%.

“These techniques rely on the physical uniqueness of each individual, but they are not foolproof. Physical characteristics can be copied, or even compromised by injury,” explains Chaiyanut Jirayupat, first author of the study. “Recently, human scent has been emerging as a new class of biometric authentication, essentially using your unique chemical composition to confirm who you are.”

The team turned to human breath after finding that the skin does not produce a high enough concentration of volatile compounds for machines to detect.
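The pipeline the article describes, a 16-channel sensor array producing a chemical "fingerprint" per breath sample that a machine-learning model then matches to a person, can be sketched as follows. This is a minimal illustration on synthetic data assuming a scikit-learn-style workflow; the study's actual sensors, features, and model are not specified in this summary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's data: each breath sample is a
# 16-channel reading, and each person has a distinct baseline response.
rng = np.random.default_rng(0)
n_people, samples_per_person, n_channels = 20, 30, 16

baselines = rng.normal(0.0, 1.0, size=(n_people, n_channels))
X = np.vstack([
    baselines[p] + rng.normal(0.0, 0.2, size=(samples_per_person, n_channels))
    for p in range(n_people)
])                                           # (600, 16) sensor readings
y = np.repeat(np.arange(n_people), samples_per_person)  # person labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Any multi-class classifier works here; a random forest is one choice.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"identification accuracy: {clf.score(X_test, y_test):.2%}")
```

With per-person baselines this well separated, the toy classifier identifies the 20 "individuals" with high accuracy, which is the shape of the result the study reports, though real breath chemistry is far noisier than this synthetic data.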

AI Makes Strides in Virtual Worlds More Like Our Own

In 2009, a computer scientist then at Princeton University named Fei-Fei Li invented a data set that would change the history of artificial intelligence. Known as ImageNet, the data set included millions of labeled images that could train sophisticated machine-learning models to recognize something in a picture. The machines surpassed human recognition abilities in 2015. Soon after, Li began looking for what she called another of the “North Stars” that would give AI a different push toward true intelligence.

She found inspiration by looking back in time over 530 million years to the Cambrian explosion, when numerous animal species appeared for the first time. An influential theory posits that the burst of new species was driven in part by the emergence of eyes that could see the world around them for the first time. Li realized that vision in animals never occurs by itself but instead is “deeply embedded in a holistic body that needs to move, navigate, survive, manipulate and change in the rapidly changing environment,” she said. “That’s why it was very natural for me to pivot towards a more active vision [for AI].”

Today, Li’s work focuses on AI agents that don’t simply accept static images from a data set but can move around and interact with their environments in simulations of three-dimensional virtual worlds.
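The shift Li describes, from labeling static images to agents that act in an environment, maps onto the standard observe-act-reward loop used in simulated-world research. A toy sketch of that loop (the grid world and the trivial policy here are invented for illustration and are not Li's actual simulator):

```python
class ToyGridWorld:
    """A 1-D corridor: the agent starts at position 0 and must reach `goal`."""
    def __init__(self, goal=5):
        self.goal = goal
        self.pos = 0

    def step(self, action):
        """Apply an action (-1 = left, +1 = right); return (obs, reward, done)."""
        self.pos = max(0, self.pos + action)
        done = self.pos >= self.goal
        reward = 1.0 if done else -0.1   # small step cost, bonus on reaching goal
        return self.pos, reward, done

# The interaction loop: observe, act, collect reward, repeat until done.
env = ToyGridWorld()
total_reward, done, steps = 0.0, False, 0
while not done and steps < 100:
    action = +1                          # a trivial "always move right" policy
    obs, reward, done = env.step(action)
    total_reward += reward
    steps += 1
print(f"reached goal: {done}, steps: {steps}, return: {total_reward:.1f}")
```

The research value of embodied agents comes from replacing the fixed policy with a learned one, so the agent's perception is shaped by the consequences of its own actions rather than by a static labeled data set.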

Yann LeCun has a bold new vision for the future of AI

One of the godfathers of deep learning pulls together old ideas to sketch out a fresh path for AI, but raises as many questions as he answers.


Now, after months spent figuring out what was missing, he has a bold new vision for the next generation of AI. In a draft document shared with MIT Technology Review, LeCun sketches out an approach that he thinks will one day give machines the common sense they need to navigate the world. For LeCun, the proposals could be the first steps on a path to building machines with the ability to reason and plan like humans—what many call artificial general intelligence, or AGI. He also steps away from today’s hottest trends in machine learning, resurrecting some old ideas that have gone out of fashion.

But his vision is far from comprehensive; indeed, it may raise more questions than it answers. The biggest question mark, as LeCun points out himself, is that he does not know how to build what he describes.
