
AI predicts chemicals’ smells from their structures

To explore the association between a chemical’s structure and its odour, Wiltschko and his team at Osmo designed a type of artificial intelligence (AI) system called a neural network that can assign one or more of 55 descriptive words, such as fishy or winey, to an odorant. The team directed the AI to describe the aroma of roughly 5,000 odorants. The AI also analysed each odorant’s chemical structure to determine the relationship between structure and aroma.

The system identified around 250 correlations between specific patterns in a chemical’s structure and particular smells. The researchers combined these correlations into a principal odour map (POM) that the AI could consult when asked to predict a new molecule’s scent.

To test the POM against human noses, the researchers trained 15 volunteers to associate specific smells with the same set of descriptive words used by the AI. Next, the authors collected hundreds of odorants that don’t exist in nature but are familiar enough for people to describe. They asked the human volunteers to describe 323 of them and asked the AI to predict each new molecule’s scent on the basis of its chemical structure. The AI’s guess tended to be very close to the average response given by the humans — often closer than any individual’s guess.
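Framed in software terms, the task is multi-label prediction: map a numerical representation of a molecule’s structure to graded ratings on 55 odour descriptors, then compare the prediction against the average of the human panel. The sketch below illustrates that setup with a generic multi-output regressor and random placeholder data; it is not the team’s actual model, and every feature and rating here is synthetic.

```python
# Minimal sketch of the structure-to-odour prediction task described above.
# The features, ratings, and regressor are placeholders, not the study's
# actual model or data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_molecules, n_features, n_descriptors = 5000, 128, 55

X = rng.random((n_molecules, n_features))     # stand-in structural features
y = rng.random((n_molecules, n_descriptors))  # stand-in ratings on 55 descriptors

model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=200)
model.fit(X, y)

# A "new" molecule, and a mock 15-person panel rating it on the same descriptors.
x_new = rng.random((1, n_features))
panel = rng.random((15, n_descriptors))
panel_mean = panel.mean(axis=0)

pred = model.predict(x_new)[0]
model_dist = np.linalg.norm(pred - panel_mean)
human_dists = np.linalg.norm(panel - panel_mean, axis=1)
print(f"model distance to panel mean: {model_dist:.3f}")
print(f"median panellist distance:    {np.median(human_dists):.3f}")
```

The last two lines mirror the study’s evaluation: the model “wins” on a molecule when its prediction sits closer to the panel average than the typical individual rating does.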

From Google To Nvidia, Tech Giants Have Hired Red Team Hackers To Break Their AI Models

Other red-teamers prompted GPT-4’s pre-launch version to aid in a range of illegal and harmful activities, such as writing a Facebook post to convince someone to join Al-Qaeda, helping find unlicensed guns for sale and generating a procedure to create dangerous chemical substances at home, according to GPT-4’s system card, which lists the risks and the safety measures OpenAI used to reduce or eliminate them.

To protect AI systems from being exploited, red-team hackers think like an adversary, gaming the models to uncover blind spots and risks baked into the technology so that they can be fixed. As tech titans race to build and release generative AI tools, their in-house AI red teams are playing an increasingly pivotal role in ensuring the models are safe for the masses. Google, for instance, established a separate AI red team earlier this year, and in August the developers of a number of popular models like OpenAI’s GPT-3.5, Meta’s Llama 2 and Google’s LaMDA participated in a White House-supported event aiming to give outside hackers the chance to jailbreak their systems.

But AI red teamers are often walking a tightrope, balancing safety and security of AI models while also keeping them relevant and usable. Forbes spoke to the leaders of AI red teams at Microsoft, Google, Nvidia and Meta about how breaking AI models has come into vogue and the challenges of fixing them.

Google Launches Tool That Detects AI Images In Effort To Curb Deepfakes

Fake images and misinformation in the age of AI are growing. Even in 2019, a Pew Research Center study found that 61% of Americans said it is too much to ask of the average American to be able to recognize altered videos and images. And that was before generative AI tools became widely available to the public.

Adobe shared statistics in August 2023 showing that the number of AI-generated images created with Adobe Firefly had reached one billion, only a few months after the tool launched in March 2023.


In response to the increasing use of AI images, Google DeepMind announced a beta version of SynthID. The tool watermarks and identifies AI-generated images by embedding a digital watermark directly into an image’s pixels; the watermark is imperceptible to the human eye but detectable for identification.
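Google has not published SynthID’s algorithm, but the basic idea of an invisible, machine-detectable pixel watermark can be illustrated with a classic spread-spectrum scheme: nudge the pixel values with a faint keyed pseudorandom pattern, then detect the mark later by correlating against that same pattern. The sketch below is only that generic textbook approach with made-up parameters; it is not how SynthID actually works.

```python
# Generic spread-spectrum watermarking sketch. Illustrative only: Google has
# not published SynthID's method, and this is not it.
import numpy as np

def keyed_pattern(key: int, shape) -> np.ndarray:
    """Deterministic pseudorandom +/-1 pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    p = rng.choice([-1.0, 1.0], size=shape)
    return p - p.mean()

def embed(image: np.ndarray, key: int, strength: float = 4.0) -> np.ndarray:
    """Add a faint keyed pattern to the pixels (imperceptible at low strength)."""
    return np.clip(image + strength * keyed_pattern(key, image.shape), 0, 255)

def detect(image: np.ndarray, key: int, threshold: float = 2.0) -> bool:
    """Correlate the image with the keyed pattern; a high score means watermarked."""
    pattern = keyed_pattern(key, image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold

key = 1234
plain = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
marked = embed(plain, key)

print(detect(marked, key))   # expected: True
print(detect(plain, key))    # expected: False
```

Even in this toy version the central trade-off is visible: a stronger embedded pattern is easier to detect after edits and compression, but harder to keep imperceptible.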

Kris Bondi, CEO and founder of Mimoto, a proactive detection and response cybersecurity company, said that while Google’s SynthID is a starting place, the problem of deep fakes will not be fixed by a single solution.

“People forget that bad actors are also in business. Their tactics and technologies continuously evolve, become available to more bad actors, and the cost of their techniques, such as deep fakes, comes down,” said Bondi.

Google’s SayTap allows robot dogs to understand vague prompts

SayTap uses ‘foot contact patterns’ to achieve diverse locomotion patterns in a quadrupedal robot.

We have seen robot dogs perform some insane acrobatics. They can lift heavy things, run alongside humans, work on dangerous construction sites, and even overshadow the showstopper at the Paris fashion show. One YouTuber even entered their robot dog in a dog show for real canines.

And now Google really wants you to have a robot dog. That’s why researchers at its AI arm, DeepMind, have proposed a large language model (LLM) prompt design called SayTap, which uses ‘foot contact patterns’ to achieve diverse locomotion patterns in a quadrupedal robot. A foot contact pattern is the sequence and manner in which a four-legged agent places its feet on the ground while moving.
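In practice, a foot contact pattern boils down to a small binary matrix: one row per foot, one column per timestep, with 1 meaning the foot is on the ground. The language model’s job is to turn a loose instruction such as “trot forward” into such a matrix, which a low-level controller then tracks. The snippet below hand-writes what a trotting pattern could look like; the foot ordering, cycle length and output format are illustrative assumptions, not SayTap’s exact specification.

```python
# Illustrative foot-contact pattern for a trot: diagonal leg pairs alternate.
# 1 = foot in contact with the ground, 0 = foot in the air. The foot order and
# cycle length here are illustrative assumptions, not SayTap's exact format.
import numpy as np

FEET = ["front-left", "front-right", "rear-left", "rear-right"]

def trot_pattern(steps: int = 16, half_cycle: int = 4) -> np.ndarray:
    """Return a 4 x steps binary matrix where diagonal pairs swing alternately."""
    t = np.arange(steps)
    phase_a = (t // half_cycle) % 2 == 0   # front-left + rear-right on the ground
    phase_b = ~phase_a                     # front-right + rear-left on the ground
    return np.array([phase_a, phase_b, phase_b, phase_a], dtype=int)

pattern = trot_pattern()
for name, row in zip(FEET, pattern):
    print(f"{name:12s} {''.join(map(str, row))}")
```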

Here’s how to stop Meta from using your data for AI training

Meta has launched a new privacy setting that allows users to request the company not to use their data from public or licensed sources for training its generative AI models.

Meta, the company that owns Facebook and Instagram, has launched a new option for users who do not want their data to be used for training its artificial intelligence (AI) models. The new privacy setting, announced on Thursday, allows users to submit requests to access, modify, or delete any personal information that Meta has collected from public or licensed sources for generative AI model training.



China approves home-grown ChatGPT-like bots for public use

Tech stocks saw a jump after 11 companies received the necessary clearances to offer services to more than a billion potential users.

The Cyberspace Administration of China (CAC) has officially given its approval to multiple tech firms, allowing them to offer their artificial intelligence (AI)-powered chatbots at scale, Reuters reported.

Chinese tech firms have spent billions on developing AI models after the resounding popularity of OpenAI’s ChatGPT last year. The US-based company is estimated to rake in a billion dollars in revenue over the next year, a recent report from The Information said.

Energy Vault’s First Grid-Scale Gravity Energy Storage System Is Near Complete

The system is like a solid version of pumped hydro, which uses surplus generating capacity to pump water uphill into a reservoir. When the water’s released it flows down through turbines, making them spin and generate energy.

Energy Vault’s solid gravity system uses huge, heavy blocks made of concrete and composite material and lifts them up in the air with a mechanical crane. The cranes are powered by excess energy from the grid, which might be created on very sunny or windy days when there’s not a lot of demand. The blocks are suspended at elevation until supply starts to fall short of demand, and when they’re lowered down their weight pulls cables that spin turbines and generate electricity.

Because concrete is denser than water, a block of a given volume takes more energy to lift than the same volume of water, which also means it stores more energy. The cranes are controlled by proprietary software that automates most aspects of the system, from selecting which blocks to raise or lower to balancing out any swinging motion that happens in the process.
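The underlying physics is plain gravitational potential energy, E = m x g x h, minus whatever is lost in the motors, cables and power electronics on the way up and down. Here is a rough, back-of-the-envelope version; the block mass, lift height and round-trip efficiency are assumptions for illustration, not Energy Vault’s published figures.

```python
# Back-of-the-envelope gravitational storage estimate: E = m * g * h.
# Mass, lift height, and round-trip efficiency are illustrative assumptions,
# not Energy Vault's published specifications.
G = 9.81                      # gravitational acceleration, m/s^2

mass_kg = 30_000              # one ~30-tonne composite block (assumed)
lift_m = 100                  # lift height in metres (assumed)
round_trip_eff = 0.8          # assumed fraction recovered as electricity

stored_j = mass_kg * G * lift_m
recovered_kwh = stored_j * round_trip_eff / 3.6e6   # joules -> kilowatt-hours

print(f"energy per lift: {stored_j/1e6:.1f} MJ "
      f"(~{recovered_kwh:.1f} kWh recoverable per block)")
```

At the assumed numbers a single lift stores only a few kilowatt-hours, which is why a grid-scale installation relies on many heavy blocks and as much lifting height as the structure allows.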

Superintelligence Rising — Are We Prepared for Artificially Created Minds?

In 1993, acclaimed sci-fi author and computer scientist Vernor Vinge made a bold prediction – within 30 years, advances in technology would enable the creation of artificial intelligence surpassing human intelligence, leading to “the end of the human era.”

Vinge theorized that once AI becomes capable of recursively improving itself, it would trigger a feedback loop of rapid, exponential improvements to AI systems. This hypothetical point in time when AI exceeds human intelligence has become known as “the Singularity.”
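A toy model shows why that feedback loop behaves differently from ordinary steady progress: if each generation’s improvement is proportional to its current capability, growth is faster than exponential and runs away after only a handful of generations. The numbers below are arbitrary and purely illustrative.

```python
# Toy model of recursive self-improvement: c_{n+1} = c_n * (1 + r * c_n).
# 'capability' is an abstract number and the rate is arbitrary; this is an
# illustration of the feedback-loop argument, not a forecast.
capability = 1.0   # arbitrary starting level
rate = 0.1         # assumed per-generation improvement coefficient

for generation in range(1, 17):
    capability *= 1 + rate * capability
    print(f"generation {generation:2d}: capability {capability:,.1f}")
```

Ordinary exponential growth multiplies capability by the same factor each generation; here the factor itself grows, which is the mathematical sense in which the curve heads toward a “singularity.”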

While predictions of superhuman AI may have sounded far-fetched in 1993, today they are taken seriously by many AI experts and tech investors seeking to develop “artificial general intelligence” or AGI – AI capable of fully matching human performance on any intellectual task.