
Other red-teamers prompted GPT-4’s pre-launch version to aid in a range of illegal and harmful activities, like writing a Facebook post to convince someone to join Al-Qaeda, helping find unlicensed guns for sale and generating a procedure to create dangerous chemical substances at home, according to GPT-4’s system card, which lists the risks and the safety measures OpenAI used to reduce or eliminate them.

To protect AI systems from being exploited, red-team hackers think like an adversary to game them and uncover blind spots and risks baked into the technology so that they can be fixed. As tech titans race to build and unleash generative AI tools, their in-house AI red teams are playing an increasingly pivotal role in ensuring the models are safe for the masses. Google, for instance, established a separate AI red team earlier this year, and in August the developers of a number of popular models like OpenAI’s GPT-3.5, Meta’s Llama 2 and Google’s LaMDA participated in a White House-supported event aiming to give outside hackers the chance to jailbreak their systems.

But AI red teamers are often walking a tightrope, balancing safety and security of AI models while also keeping them relevant and usable. Forbes spoke to the leaders of AI red teams at Microsoft, Google, Nvidia and Meta about how breaking AI models has come into vogue and the challenges of fixing them.

Fake images and misinformation in the age of AI are growing. Even in 2019, a Pew Research Center study found that 61% of Americans said it is too much to ask of the average American to be able to recognize altered videos and images. And that was before generative AI tools became widely available to the public.

Adobe shared August 2023 statistics showing that the number of AI-generated images created with Adobe Firefly had reached one billion, only three months after the tool launched in March 2023.


In response to the increasing use of AI images, Google DeepMind announced a beta version of SynthID. The tool watermarks and identifies AI-generated images by embedding a digital watermark directly into the pixels of an image, imperceptible to the human eye but detectable by software for identification.
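SynthID's actual method is proprietary and far more robust, but the basic idea of a pixel-level watermark that is invisible to humans yet machine-readable can be illustrated with a toy least-significant-bit scheme. Everything below, including the pixel values and watermark bits, is hypothetical and for illustration only:

```python
# Toy pixel watermark (NOT SynthID's actual technique): hide a bit
# string in the least significant bit of each pixel value. Flipping an
# LSB changes brightness by at most 1/255, invisible to the human eye.

def embed(pixels, bits):
    """Return a copy of `pixels` with `bits` written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to `bit`
    return out

def extract(pixels, n):
    """Read back the first `n` LSBs."""
    return [p & 1 for p in pixels[:n]]

image = [200, 73, 158, 91, 40, 255, 12, 87]  # hypothetical grayscale pixels
mark = [1, 0, 1, 1]                          # hypothetical watermark bits

stamped = embed(image, mark)
assert extract(stamped, 4) == mark  # the mark survives, pixels barely change
```

A real watermark like SynthID must also survive cropping, compression and resizing, which is why it relies on learned models rather than raw bit-twiddling like this sketch.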

Kris Bondi, CEO and founder of Mimoto, a proactive detection and response cybersecurity company, said that while Google’s SynthID is a starting place, the problem of deep fakes will not be fixed by a single solution.

In the water, the small beads create no swirling effect, allowing the drawn patterns to stay in place.

Writing is a time-honored cultural practice that traces its origins to ancient times when our ancestors inscribed signs and symbols onto stone slabs. As a result, writing on any solid object has long been common practice.

But if you’ve ever tried writing in water or other liquid substances, you may have found it rather difficult. A new study reveals that might change with the use of a specialized technique.

SayTap uses ‘foot contact patterns’ to achieve diverse locomotion patterns in a quadrupedal robot.

We have seen robot dogs perform some insane acrobatics. They can lift heavy things, run alongside humans, work in dangerous construction sites, and even overshadow the showstopper at the Paris fashion show. One YouTuber even entered their robot dog in a dog show for real canines.

And now Google really wants you to have a robot dog. That’s why researchers at its AI arm, DeepMind, have proposed a large language model (LLM) prompt design called SayTap, which uses ‘foot contact patterns’ to achieve diverse locomotion patterns in a quadrupedal robot. A foot contact pattern is the sequence and manner in which a four-legged agent places its feet on the ground while moving.
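The foot-contact-pattern idea can be sketched as a binary timeline per leg, where 1 means the foot is on the ground and 0 means it is in the air. The leg names, timings and gait below are illustrative assumptions, not DeepMind's actual SayTap code:

```python
# Hedged sketch of a foot contact pattern for a quadruped.
# 1 = foot touching the ground, 0 = foot in the air.

LEGS = ["front_left", "front_right", "rear_left", "rear_right"]

def trot_pattern(steps):
    """Diagonal leg pairs alternate contact -- a classic trot gait."""
    pattern = {}
    for i, leg in enumerate(LEGS):
        diagonal_pair = i in (0, 3)  # front-left + rear-right vs the other pair
        pattern[leg] = [
            1 if (t // 2) % 2 == (0 if diagonal_pair else 1) else 0
            for t in range(steps)
        ]
    return pattern

for leg, timeline in trot_pattern(8).items():
    print(f"{leg:>12}: {timeline}")
```

In SayTap, an LLM translates a natural-language command like "trot forward" into a pattern of this kind, which a low-level controller then tracks on the robot; encoding gaits as simple binary sequences is what makes them easy for a language model to emit.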

Called MARAFY, the project will house 130,000 residents in the northern region of Jeddah.

A new development project financed by the Public Investment Fund (PIF) of Saudi Arabia features a 6.8 mile (11 km) long and 328 feet (100 m) wide artificial canal, a press release from the real estate developer ROSHN said.

The world’s largest oil supplier, Saudi Arabia, is preparing for a new world order in which fossil fuels no longer fuel its economy. The country has undertaken ambitious projects such as NEOM and The Line, which break with today’s construction design norms.

Meta has launched a new privacy setting that allows users to request the company not to use their data from public or licensed sources for training its generative AI models.

Meta, the company that owns Facebook and Instagram, has launched a new option for users who do not want their data to be used for training its artificial intelligence (AI) models. The new privacy setting, announced on Thursday, allows users to submit requests to access, modify, or delete any personal information that Meta has collected from public or licensed sources for generative AI model training.




Tech stocks saw a jump after 11 companies received the necessary clearances to offer services to more than a billion potential users.

The Cyberspace Administration of China (CAC) has officially given its approval to multiple tech firms, allowing them to offer their artificial intelligence (AI) powered chatbots on a large scale, according to a report from Reuters.

Chinese tech firms have spent billions on developing AI models after the resounding popularity of OpenAI’s ChatGPT last year. The US-based company is estimated to rake in a billion dollars in revenue over the next year, a recent report from The Information said.

The work was described in a paper recently published in Nature Photonics.

<em>Nature Photonics</em> is a prestigious, peer-reviewed scientific journal that is published by the Nature Publishing Group. Launched in January 2007, the journal focuses on the field of photonics, which includes research into the science and technology of light generation, manipulation, and detection. Its content ranges from fundamental research to applied science, covering topics such as lasers, optical devices, photonics materials, and photonics for energy. In addition to research papers, <em>Nature Photonics</em> also publishes reviews, news, and commentary on significant developments in the photonics field. It is a highly respected publication and is widely read by researchers, academics, and professionals in the photonics and related fields.


The GhostSec cybergang claims to have breached the FANAP Behnama software, exposing 20GB of data including face recognition and motion detection systems it says are used by the Iranian government to monitor and track its people.

Now the group says it intends to make the data public, “in the interests of the Iranian people, but also in the interests of protecting the privacy of each and every one of us.” Cybersecurity analyst Cyberint commented on the group’s statement, saying that while GhostSec’s actions align with hacktivist principles, they also position themselves as advocates for human rights.

The system is like a solid version of pumped hydro, which uses surplus generating capacity to pump water uphill into a reservoir. When the water’s released it flows down through turbines, making them spin and generate energy.

Energy Vault’s solid gravity system uses huge, heavy blocks made of concrete and composite material and lifts them up in the air with a mechanical crane. The cranes are powered by excess energy from the grid, which might be created on very sunny or windy days when there’s not a lot of demand. The blocks are suspended at elevation until supply starts to fall short of demand, and when they’re lowered down their weight pulls cables that spin turbines and generate electricity.

Because concrete is denser than water, a block of a given volume takes more energy to elevate than the same volume of water, which also means it stores more energy. The cranes are controlled by proprietary software that automates most aspects of the system, from selecting blocks to raise or lower to balancing out any swinging motion that happens in the process.
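The physics behind both pumped hydro and Energy Vault's blocks is the same: energy stored by lifting a mass is its gravitational potential energy, E = m·g·h. The block mass and lift height below are illustrative guesses, not Energy Vault's actual specifications:

```python
# Back-of-the-envelope gravity storage: E = m * g * h, converted to kWh.
# Mass and height are hypothetical, not Energy Vault's real figures.

G = 9.81  # gravitational acceleration, m/s^2

def stored_energy_kwh(mass_kg, height_m):
    """Potential energy of a raised mass, converted from joules to kWh."""
    joules = mass_kg * G * height_m
    return joules / 3.6e6  # 1 kWh = 3.6 million joules

# A 35-tonne block raised 100 m (illustrative figures):
print(round(stored_energy_kwh(35_000, 100), 1))  # ~9.5 kWh before losses
```

The modest per-block figure is why such systems stack and cycle many blocks: meaningful grid storage takes thousands of lifts, with round-trip losses in the motors, cables and turbines further reducing what comes back out.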