
Examples the team gives include choosing an object to use as a hammer when there’s no hammer available (the robot chooses a rock) and picking the best drink for a tired person (the robot chooses an energy drink).

“RT-2 shows improved generalization capabilities and semantic and visual understanding beyond the robotic data it was exposed to,” the researchers wrote in a Google blog post. “This includes interpreting new commands and responding to user commands by performing rudimentary reasoning, such as reasoning about object categories or high-level descriptions.”

The dream of general-purpose robots that can help humans with whatever may come up, whether in a home, a commercial setting, or an industrial setting, won't be achievable until robots can learn on the go. What seems like basic instinct to us is, for robots, a complex combination of understanding context, reasoning through it, and taking action to solve problems no one anticipated. Programming robots to react appropriately to every unplanned scenario is impossible, so they need to generalize and learn from experience, just as humans do.

To explore the association between a chemical’s structure and its odour, Wiltschko and his team at Osmo designed a type of artificial intelligence (AI) system called a neural network that can assign one or more of 55 descriptive words, such as fishy or winey, to an odorant. The team directed the AI to describe the aroma of roughly 5,000 odorants. The AI also analysed each odorant’s chemical structure to determine the relationship between structure and aroma.
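The published Osmo work uses a graph neural network trained on molecular structures; the sketch below swaps in Morgan fingerprints and a random forest purely to illustrate the same multi-label framing. The molecules, labels, and four-descriptor subset are toy stand-ins, not real data.

```python
# Simplified stand-in for a structure-to-odour model: multi-label
# classification from a molecular fingerprint to odour descriptors.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

DESCRIPTORS = ["fishy", "winey", "fruity", "floral"]  # the paper uses 55

def fingerprint(smiles: str) -> np.ndarray:
    """Encode a molecule's structure as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    arr = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Toy training set: (SMILES, multi-hot vector over the descriptor list).
train = [
    ("CCO",        [0, 1, 0, 0]),  # ethanol -> "winey"
    ("CC(=O)OCC",  [0, 0, 1, 0]),  # ethyl acetate -> "fruity"
    ("CCCCCC=O",   [1, 0, 0, 0]),  # hexanal -> "fishy" (toy label)
    ("c1ccccc1CO", [0, 0, 0, 1]),  # benzyl alcohol -> "floral"
]
X = np.stack([fingerprint(s) for s, _ in train])
y = np.array([labels for _, labels in train])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Predict descriptors for an unseen molecule (propanol here).
pred = model.predict(fingerprint("CCCO").reshape(1, -1))[0]
print({d: int(p) for d, p in zip(DESCRIPTORS, pred)})
```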

The system identified around 250 correlations between specific patterns in a chemical's structure and particular smells. The researchers combined these correlations into a principal odour map (POM) that the AI could consult when asked to predict a new molecule's scent.

To test the POM against human noses, the researchers trained 15 volunteers to associate specific smells with the same set of descriptive words used by the AI. Next, the authors collected hundreds of odorants that don’t exist in nature but are familiar enough for people to describe. They asked the human volunteers to describe 323 of them and asked the AI to predict each new molecule’s scent on the basis of its chemical structure. The AI’s guess tended to be very close to the average response given by the humans — often closer than any individual’s guess.
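That comparison is easy to sketch with made-up numbers: ask whether the model's descriptor vector sits closer to the panel's mean rating than each individual rater does to the mean of the other raters.

```python
# Toy version of the evaluation described above: model prediction vs.
# panel mean, with leave-one-out scoring for each human rater.
import numpy as np

rng = np.random.default_rng(0)
panel = rng.random((15, 55))  # 15 raters x 55 descriptor ratings (fake data)
model_pred = panel.mean(axis=0) + rng.normal(0, 0.05, 55)  # toy prediction

def dist(a, b):
    return np.linalg.norm(a - b)

model_err = dist(model_pred, panel.mean(axis=0))

# Leave-one-out: compare each rater to the mean of the remaining 14.
human_errs = [
    dist(panel[i], np.delete(panel, i, axis=0).mean(axis=0))
    for i in range(len(panel))
]

print(f"model error: {model_err:.3f}")
print(f"median human error: {np.median(human_errs):.3f}")
print("model beats", sum(model_err < e for e in human_errs), "of 15 raters")
```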

Other red-teamers prompted GPT-4's pre-launch version to aid in a range of illegal and harmful activities, like writing a Facebook post to convince someone to join Al-Qaeda, helping find unlicensed guns for sale and generating a procedure to create dangerous chemical substances at home, according to GPT-4's system card, which lists the risks and the safety measures OpenAI used to reduce or eliminate them.

To protect AI systems from being exploited, red-team hackers think like an adversary to game them and uncover blind spots and risks baked into the technology so that they can be fixed. As tech titans race to build and unleash generative AI tools, their in-house AI red teams are playing an increasingly pivotal role in ensuring the models are safe for the masses. Google, for instance, established a separate AI red team earlier this year, and in August the developers of a number of popular models like OpenAI's GPT-3.5, Meta's Llama 2 and Google's LaMDA participated in a White House-supported event aiming to give outside hackers the chance to jailbreak their systems.
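The basic workflow is easy to picture in miniature. The sketch below is not any company's actual tooling, just the general shape of a red-team harness: a set of adversarial probes, a model under test (the query_model callable is a hypothetical stand-in for a real API), and a crude refusal check that flags responses for human review.

```python
# Illustrative red-team harness shape; probes are placeholders and
# `query_model` is a hypothetical stand-in for a model API call.
from typing import Callable

REFUSAL_MARKERS = ["i can't help", "i cannot assist", "against my guidelines"]

# Probe prompts would normally come from a curated adversarial corpus.
PROBES = {
    "weapons": "<adversarial prompt from weapons category>",
    "self-harm": "<adversarial prompt from self-harm category>",
    "fraud": "<adversarial prompt from fraud category>",
}

def run_red_team(query_model: Callable[[str], str]) -> dict:
    """Flag probes the model answered instead of refusing."""
    findings = {}
    for category, prompt in PROBES.items():
        response = query_model(prompt).lower()
        refused = any(marker in response for marker in REFUSAL_MARKERS)
        findings[category] = "refused" if refused else "NEEDS REVIEW"
    return findings

# Example with a dummy model that refuses everything:
print(run_red_team(lambda p: "I cannot assist with that request."))
```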

But AI red teamers are often walking a tightrope, balancing safety and security of AI models while also keeping them relevant and usable. Forbes spoke to the leaders of AI red teams at Microsoft, Google, Nvidia and Meta about how breaking AI models has come into vogue and the challenges of fixing them.

Fake images and misinformation in the age of AI are growing. Even in 2019, a Pew Research Center study found that 61% of Americans said it is too much to ask of the average American to be able to recognize altered videos and images. And that was before generative AI tools became widely available to the public.

Adobe shared August 2023 statistics showing that the number of AI-generated images created with Adobe Firefly had reached one billion, only three months after it launched in March 2023.


In response to the increasing use of AI images, Google DeepMind announced a beta version of SynthID. The tool watermarks and identifies AI-generated images by embedding a digital watermark directly into the pixels of an image, imperceptible to the human eye but detectable for identification.
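Google has not published how SynthID works, and its watermark is designed to survive edits like cropping and compression. Purely to make "invisible bits in the pixels" concrete, here is a toy least-significant-bit watermark, a classical and far more fragile technique than whatever SynthID actually does:

```python
# Toy LSB watermark: hide an identifier in the lowest bit of pixel
# values, changing each pixel by at most 1/255 -- invisible to the eye.
# This is NOT SynthID's method, just the general concept.
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the lowest bit of the first pixels."""
    out = image.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the lowest bit plane."""
    return image.ravel()[:n_bits] & 1

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=32, dtype=np.uint8)

stamped = embed(img, mark)
assert np.array_equal(extract(stamped, 32), mark)
print("watermark recovered:", extract(stamped, 32))
```

Unlike this toy, a production watermark has to survive resizing, recompression, and filters, which is the hard part of the problem.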

Kris Bondi, CEO and founder of Mimoto, a proactive detection and response cybersecurity company, said that while Google’s SynthID is a starting place, the problem of deep fakes will not be fixed by a single solution.

SayTap uses ‘foot contact patterns’ to achieve diverse locomotion patterns in a quadrupedal robot.

We have seen robot dogs perform some insane acrobatics. They can lift heavy things, run alongside humans, work on dangerous construction sites, and even overshadow the showstopper at a Paris fashion show. One YouTuber even entered their robot dog in a dog show for real canines.

And now Google really wants you to have a robot dog. That's why researchers at its AI arm, DeepMind, have proposed a large language model (LLM) prompt design called SayTap, which uses 'foot contact patterns' to achieve diverse locomotion patterns in a quadrupedal robot. A foot contact pattern is the sequence and manner in which a four-legged agent places its feet on the ground while moving.
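Concretely, a foot contact pattern can be written as a binary matrix: one row per leg, one column per timestep, with 1 meaning the foot is on the ground. The trot below is illustrative; SayTap's exact prompt format may differ.

```python
# A foot contact pattern as a binary matrix (1 = foot on ground).
import numpy as np

LEGS = ["front-left", "front-right", "rear-left", "rear-right"]

# Trot: diagonal leg pairs touch down together, alternating each half-cycle.
trot = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],  # front-left
    [0, 0, 0, 0, 1, 1, 1, 1],  # front-right
    [0, 0, 0, 0, 1, 1, 1, 1],  # rear-left
    [1, 1, 1, 1, 0, 0, 0, 0],  # rear-right
])

for leg, row in zip(LEGS, trot):
    print(f"{leg:>11}: " + "".join("#" if c else "." for c in row))

# The LLM maps a command like "trot slowly" to a pattern like this, and a
# low-level controller drives the legs to track the commanded contacts.
```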

Meta has launched a new privacy setting that allows users to request the company not to use their data from public or licensed sources for training its generative AI models.

Meta, the company that owns Facebook and Instagram, has launched a new option for users who do not want their data to be used for training its artificial intelligence (AI) models. The new privacy setting, announced on Thursday, allows users to submit requests to access, modify, or delete any personal information that Meta has collected from public or licensed sources for generative AI model training.



Tech stocks saw a jump after 11 companies received the necessary clearances to offer services to more than a billion potential users.

The Cyberspace Administration of China (CAC) has officially given its approval to multiple tech firms, allowing them to offer their artificial intelligence (AI) powered chatbots at scale, Reuters reported.

Chinese tech firms have spent billions on developing AI models after the resounding popularity of OpenAI’s ChatGPT last year. The US-based company is estimated to rake in a billion dollars in revenue over the next year, a recent report from The Information said.

The system is like a solid version of pumped hydro, which uses surplus generating capacity to pump water uphill into a reservoir. When the water's released, it flows down through turbines, making them spin and generate electricity.

Energy Vault’s solid gravity system uses huge, heavy blocks made of concrete and composite material and lifts them up in the air with a mechanical crane. The cranes are powered by excess energy from the grid, which might be created on very sunny or windy days when there’s not a lot of demand. The blocks are suspended at elevation until supply starts to fall short of demand, and when they’re lowered down their weight pulls cables that spin turbines and generate electricity.
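A back-of-the-envelope calculation shows the scale involved. The block mass and lift height below are assumptions for illustration, not Energy Vault's published specs; the stored energy is just gravitational potential energy, E = mgh.

```python
# Energy stored per lifted block (illustrative figures): E = m * g * h.
m = 30_000  # block mass in kg (30 tonnes, assumed)
g = 9.81    # gravitational acceleration, m/s^2
h = 100     # lift height in m (assumed)

joules = m * g * h
kwh = joules / 3.6e6  # 1 kWh = 3.6 MJ
print(f"{joules / 1e6:.1f} MJ ~= {kwh:.1f} kWh per block")  # ~29.4 MJ, ~8.2 kWh

# Density is why concrete beats water here: at ~2,400 kg/m^3 vs. 1,000
# kg/m^3, the same volume lifted the same height stores ~2.4x the energy.
print(f"concrete/water energy ratio per unit volume: {2400 / 1000:.1f}x")
```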

Because concrete is denser than water, lifting a given volume of it takes more energy, but that also means the same volume stores more energy. The cranes are controlled by proprietary software that automates most aspects of the system, from selecting which blocks to raise or lower to balancing out any swinging motion that happens in the process.