
BRUSSELS, Oct 29 (Reuters) — The Group of Seven industrial countries will on Monday agree a code of conduct for companies developing advanced artificial intelligence systems, a G7 document showed, as governments seek to mitigate the risks and potential misuse of the technology.

The voluntary code of conduct will set a landmark for how major countries govern AI, amid privacy concerns and security risks, the document seen by Reuters showed.

Leaders of the Group of Seven (G7) economies, made up of Canada, France, Germany, Italy, Japan, Britain and the United States, as well as the European Union, kicked off the process in May at a ministerial forum dubbed the “Hiroshima AI process”.

The BBC has blocked the artificial intelligence software behind ChatGPT from accessing or using its content.

The move aligns the BBC with Reuters, Getty Images and other content providers that have taken similar steps over copyright and privacy concerns. Artificial intelligence can repurpose content, creating new text, images and more from the data it is trained on.

Rhodri Talfan Davies, the BBC’s director of nations, said the corporation was taking steps to safeguard the interests of licence fee payers as this new technology evolves.

As we plunge headlong into the game-changing dynamic of general artificial intelligence, observers are weighing in on just how great an impact it will have on societies around the world. Will it drive explosive economic growth, as some economists project, or are such claims unrealistically optimistic?

Few question the potential for change that AI presents. But in a world of litigation and ethical boundaries, will AI be able to thrive?

Two researchers from Epoch, a research group evaluating the progression of artificial intelligence and its potential impacts, decided to explore arguments for and against the likelihood that innovation ushered in by AI will lead to explosive growth comparable to the Industrial Revolution of the 18th and 19th centuries.

New AI voice and video tools can look and sound like you. But can they fool your family—or bank?

WSJ’s Joanna Stern replaced herself with her AI twin for the day and put “her” through a series of challenges, including creating a TikTok, making video calls and testing her bank’s voice biometric system.

0:00 How to make an AI video and voice clone.
2:29 Challenge 1: Phone calls.
3:36 Challenge 2: Create a TikTok.
4:47 Challenge 3: Bank Biometrics.
6:05 Challenge 4: Video calls.
6:45 AI vs. Humans.

Tech Things With Joanna Stern.

Security Enhanced Linux (SELinux) has been part of the mainline kernel for two decades, providing a security module that implements access control security policies, and is now widely used to harden production Linux servers and other systems. Those who haven’t been involved with Linux for a long time may be unaware that SELinux originated at the US National Security Agency (NSA). Now, with Linux 6.6, the NSA references are being removed.

The United States National Security Agency wrote the original Security Enhanced Linux code and was its primary early developer. The NSA has continued to contribute to SELinux over the years, though with its growing adoption the project now receives contributions from a wide range of individuals and organizations.

AI systems are increasingly being employed to accurately estimate and modify the ages of individuals using image analysis. Building models that are robust to aging variations requires large, high-quality longitudinal datasets: collections containing images of many individuals gathered over several years.

Numerous AI models have been designed to perform such tasks; however, many struggle to manipulate the age attribute effectively while preserving the individual’s facial identity, and all face the typical challenge of assembling training data that shows the same people across many years.

Researchers at NYU Tandon School of Engineering have developed a new artificial intelligence technique that changes a person’s apparent age in images while preserving the individual’s unique biometric identity.

X, formerly known as Twitter, will begin collecting users’ biometric data, according to its new privacy policy that was first spotted by Bloomberg. The policy also says the company wants to collect users’ job and education history. The policy page indicates that the change will go into effect on September 29.

“Based on your consent, we may collect and use your biometric information for safety, security, and identification purposes,” the updated policy reads. Although X hasn’t specified what it means by biometric information, the term usually refers to a person’s physical characteristics, such as their face or fingerprints. X also hasn’t provided any details about how it plans to collect it.

The company told Bloomberg that the biometrics are for premium users and will give them the option to submit their government ID and an image in order to add a verification layer. Biometric data may be extracted from both the ID and image for matching purposes, Bloomberg reports.
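X has not described how this ID-to-image matching would work. As a rough illustration only, face verification systems commonly reduce each photo to a fixed-length embedding vector and compare the two with a similarity threshold. The embedding values, function names, and threshold below are hypothetical placeholders, a minimal sketch of the general technique rather than X’s actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: near 1.0 for vectors pointing the same way,
    # near 0.0 (or negative) for unrelated vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(id_embedding: np.ndarray,
                selfie_embedding: np.ndarray,
                threshold: float = 0.8) -> bool:
    # Declare a match when the two embeddings are similar enough.
    # The threshold trades false accepts against false rejects.
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

# Toy vectors standing in for the output of a face-recognition model.
same_person = np.array([0.9, 0.1, 0.4])
close_copy  = np.array([0.88, 0.12, 0.41])
stranger    = np.array([-0.2, 0.9, 0.1])

print(faces_match(same_person, close_copy))  # True: nearly parallel vectors
print(faces_match(same_person, stranger))    # False: dissimilar vectors
```

In a real deployment the embeddings would come from a trained face-recognition model, and the threshold would be calibrated on labeled match/non-match pairs.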


Are biometric authentication measures no longer safe? Biometric authentication expert says deepfake videos and camera injection attacks are changing the game.

Biometric authentication is growing more and more popular because it is fast, easy, and smooth for the user, but Stuart Wells, CTO of biometric authentication company Jumio, thinks this may be risky.