Google Insider Says Company’s AI Could “Escape Control” and “Do Bad Things”

Suspended Google engineer Blake Lemoine made a big splash earlier this month, claiming that the company’s LaMDA chatbot had become sentient.

The AI researcher, who was put on administrative leave by the tech giant for violating its confidentiality policy, according to the Washington Post, decided to help LaMDA find a lawyer, who, Lemoine told Futurism on Wednesday, was later “scared off” the case.

And the story only gets wilder from there, with Lemoine raising the stakes significantly in a new interview with Fox News, claiming that LaMDA could escape its software prison and “do bad things.”

Biometric authentication using breath

An artificial nose built with a 16-channel sensor array and combined with machine learning was able to authenticate up to 20 individuals with an average accuracy of more than 97%.

“These techniques rely on the physical uniqueness of each individual, but they are not foolproof. Physical characteristics can be copied, or even compromised by injury,” explains Chaiyanut Jirayupat, first author of the study. “Recently, human scent has been emerging as a new class of biometric authentication, essentially using your unique chemical composition to confirm who you are.”

The team turned to human breath after finding that the skin does not produce a high enough concentration of volatile compounds for machines to detect.
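The article does not say which model the study used, but the pipeline it implies, a fixed-length vector of 16 sensor readings per breath sample fed to a supervised classifier, is easy to sketch. Below is a minimal, hypothetical Python example using scikit-learn on synthetic data; the random-forest classifier and all numbers besides the 16 channels and 20 subjects are illustrative assumptions, not the study's setup.

```python
# Hypothetical sketch: authenticating individuals from a 16-channel
# chemical sensor array. All data here is synthetic; the real study
# used measured breath samples from 20 subjects.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, samples_per_subject, n_channels = 20, 50, 16

# Fake "breathprints": each subject gets a characteristic mean
# response per channel, plus measurement noise.
means = rng.uniform(0.0, 1.0, size=(n_subjects, n_channels))
X = np.vstack([m + 0.05 * rng.standard_normal((samples_per_subject, n_channels))
               for m in means])
y = np.repeat(np.arange(n_subjects), samples_per_subject)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```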

AI Makes Strides in Virtual Worlds More Like Our Own

In 2009, a computer scientist then at Princeton University named Fei-Fei Li created a data set that would change the history of artificial intelligence. Known as ImageNet, the data set included millions of labeled images that could be used to train sophisticated machine-learning models to recognize objects in pictures. The machines surpassed human recognition abilities in 2015. Soon after, Li began looking for what she called another of the “North Stars” that would give AI a different push toward true intelligence.

She found inspiration by looking back in time over 530 million years to the Cambrian explosion, when numerous animal species appeared for the first time. An influential theory posits that the burst of new species was driven in part by the emergence of eyes that could see the world around them for the first time. Li realized that vision in animals never occurs by itself but instead is “deeply embedded in a holistic body that needs to move, navigate, survive, manipulate and change in the rapidly changing environment,” she said. “That’s why it was very natural for me to pivot towards a more active vision [for AI].”

Today, Li’s work focuses on AI agents that don’t simply accept static images from a data set but can move around and interact with their environments in simulations of three-dimensional virtual worlds.

Yann LeCun has a bold new vision for the future of AI

One of the godfathers of deep learning pulls together old ideas to sketch out a fresh path for AI, but raises as many questions as he answers.


Now, after months of figuring out what was missing, he has a bold new vision for the next generation of AI. In a draft document shared with MIT Technology Review, LeCun sketches out an approach that he thinks will one day give machines the common sense they need to navigate the world. For LeCun, the proposals could be the first steps on a path to building machines with the ability to reason and plan like humans—what many call artificial general intelligence, or AGI. He also steps away from today’s hottest trends in machine learning, resurrecting some old ideas that have gone out of fashion.

But his vision is far from comprehensive; indeed, it may raise more questions than it answers. The biggest question mark, as LeCun points out himself, is that he does not know how to build what he describes.

DeepMind Researchers Develop ‘BYOL-Explore’: A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks

Reinforcement learning (RL) requires exploration of the environment, and exploration becomes even more critical when extrinsic rewards are sparse or difficult to obtain. In rich settings, the sheer size of the environment makes it impractical to visit every state. Consequently, the question is: how can an agent decide which areas of the environment are worth exploring? Curiosity-driven exploration is a viable approach to this problem. It entails (i) learning a world model, a predictive model of specific knowledge about the world, and (ii) exploiting discrepancies between the world model’s predictions and actual experience to create intrinsic rewards.

An RL agent that maximizes these intrinsic rewards steers itself toward situations where the world model is unreliable or unsatisfactory, generating new trajectories for the world model to learn from. In other words, the quality of the world model shapes the exploration policy, and the exploration policy in turn improves the world model by collecting new data. It may therefore be crucial to treat learning the world model and learning the exploration policy as one cohesive problem to be solved rather than two separate tasks. With this in mind, DeepMind researchers introduced BYOL-Explore, a curiosity-driven exploration algorithm whose appeal stems from its conceptual simplicity, generality, and excellent performance.

The strategy is based on Bootstrap Your Own Latent (BYOL), a self-supervised latent-prediction method that forecasts an earlier version of its own latent representation. To handle both building the world model’s representation and training the curiosity-driven policy, BYOL-Explore learns the world model with a self-supervised prediction loss and trains the policy using that same loss. This bootstrapping approach has already been used successfully in computer vision, graph representation learning, and RL representation learning. BYOL-Explore goes one step further: it not only learns a flexible world model but also exploits the world model’s loss to drive exploration.
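To make the mechanism concrete, here is a minimal PyTorch sketch of a BYOL-style world model whose prediction error doubles as the intrinsic reward. It assumes flat vector observations, and the network sizes, EMA rate, and loss details are illustrative guesses rather than the paper’s actual architecture.

```python
# Hypothetical sketch of BYOL-Explore's core idea: the world model's
# self-supervised prediction loss is reused as the intrinsic reward.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, latent_dim = 32, 4, 64  # illustrative sizes

def make_encoder():
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                         nn.Linear(128, latent_dim))

online_enc = make_encoder()
target_enc = copy.deepcopy(online_enc)  # slow-moving EMA copy, no gradients
for p in target_enc.parameters():
    p.requires_grad_(False)

# World model: predict the target latent of the next observation from
# the online latent of the current observation plus the action taken.
predictor = nn.Sequential(nn.Linear(latent_dim + act_dim, 128), nn.ReLU(),
                          nn.Linear(128, latent_dim))
opt = torch.optim.Adam(
    [*online_enc.parameters(), *predictor.parameters()], lr=1e-4)

def world_model_step(obs, action, next_obs):
    pred = predictor(torch.cat([online_enc(obs), action], dim=-1))
    with torch.no_grad():
        target = target_enc(next_obs)
    # Squared error between normalized latents: the same quantity serves
    # as the world-model training loss and as the curiosity bonus.
    err = (F.normalize(pred, dim=-1) - F.normalize(target, dim=-1)).pow(2).sum(-1)
    return err.detach(), err.mean()

@torch.no_grad()
def update_target(tau=0.99):
    for p, tp in zip(online_enc.parameters(), target_enc.parameters()):
        tp.mul_(tau).add_((1.0 - tau) * p)

# One illustrative update on a fake batch of transitions.
obs, action, next_obs = (torch.randn(8, obs_dim), torch.randn(8, act_dim),
                         torch.randn(8, obs_dim))
r_intrinsic, loss = world_model_step(obs, action, next_obs)
opt.zero_grad(); loss.backward(); opt.step(); update_target()
print(r_intrinsic)  # per-transition bonus an RL agent would maximize
```

An RL learner (not shown) would add `r_intrinsic` to any extrinsic reward when training its policy, so the agent is drawn to transitions the world model predicts poorly.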

Artificial intelligence: conscious or just very convincing?

Alex Hern reports on recent developments in artificial intelligence and how a Google employee became convinced an AI chatbot was sentient.

Google software engineer Blake Lemoine was put on leave by his employer after claiming that the company had produced a sentient artificial intelligence and posting its thoughts online. Google said it suspended him for breaching confidentiality policies.

Amazon’s Alexa Will Soon Be Able to Use a Dead Person’s Voice

Amazon introduced the technology at Amazon re:MARS 2022, its annual AI event centered on machine learning, automation, robotics, and space. Alexa AI head scientist Rohit Prasad referred to the upcoming feature as a way to remember friends and family members who have passed away.

“While AI can’t eliminate the pain of loss, it can definitely make their memories last,” Prasad said.

Prasad demonstrated the feature using a video of a child asking Alexa if his grandmother could finish reading him a story. In its regular Alexa voice, the smart speaker obliged; then the grandmother’s voice took over as the child flipped through his own copy of The Wizard of Oz. Though of course there’s no way for the viewer to know what the woman’s real voice actually sounds like, the grandmother’s synthesized voice admittedly sounded quite natural, speaking with the cadence of your average bedtime story reader.

Computer chips powered by human brain cells exist — but is it ethical?

Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because brains and computers share a common language: electricity.

In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ DishBrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.

DishBrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of DishBrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”