
Eighteen countries have signed an agreement on AI safety, based on the principle that it should be secure by design.

The Guidelines for Secure AI System Development, led by the U.K.’s National Cyber Security Centre and developed with the U.S. Cybersecurity and Infrastructure Security Agency, are touted as the first global agreement of their kind.

They’re aimed mainly at providers of AI systems, whether those systems use models hosted by the organization itself or external application programming interfaces (APIs). The goal is to help developers ensure that cybersecurity is baked in as an essential precondition of AI system safety and made integral to the development process, from the start and throughout.

BOT or NOT?


If you’re asked to imagine a person from North America or a woman from Venezuela, what do they look like? If you give an AI-powered imaging program the same prompts, odds are the software will generate stereotypical responses.

A “person” will usually be male and light-skinned.

Women from Latin American countries will more often be sexualized than European or Asian women.

The guardrails that AI companies add to their products to prevent them from causing harm “aren’t enough” to control AI capabilities that could endanger humanity within five to ten years, former Google CEO Eric Schmidt told Axios’ Mike Allen on Tuesday.

The big picture: Interviewed at Axios’ AI+ Summit in Washington, D.C., Schmidt compared the development of AI to the introduction of nuclear weapons at the end of the Second World War.

Since the explosive launch of ChatGPT, there has been a prevailing fear among workers that they will be unable to compete against artificial intelligence, leading to mass unemployment.


In fact, the European Central Bank’s research predicted that AI will actually create jobs and that any reports suggesting otherwise “may be greatly exaggerated.”

After assessing nine years of data gathered across 16 European countries, the ECB found that low- and medium-skill jobs were largely unaffected by the technology boom; meanwhile, opportunities for younger and high-skilled workers actually increased rather than vanished.

Amazon Q, currently available for contact centers, will be integrated into other AWS services soon.

Amazon’s cloud business AWS launched a chat tool called Amazon Q that lets businesses ask questions specific to their companies.


Amazon Q can work with any of the models found on Amazon Bedrock, AWS’s repository of AI models, which includes Meta’s Llama 2 and Anthropic’s Claude 2. The company said customers using Q typically choose which model works best for them, connect to the Bedrock API for that model, use it to learn their data, policies, and workflows, and then deploy Amazon Q.

AWS said Amazon Q was trained on 17 years’ worth of AWS knowledge, can answer questions specific to AWS usage, and can suggest the best AWS services for a project.
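For readers curious what "connect to the Bedrock API for a model" looks like in practice, here is a minimal sketch using boto3's Bedrock runtime client. The request-body shape shown follows Claude 2's text-completion format; the helper names, model ID, and prompt are illustrative assumptions, not an official Amazon Q integration.

```python
import json


def build_claude_request(prompt: str, max_tokens: int = 300) -> str:
    """Build the JSON body for a Claude 2 text-completion call on Bedrock.

    Claude 2 on Bedrock expects a Human/Assistant-framed prompt and a
    max_tokens_to_sample limit (format assumed here for illustration).
    """
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })


def invoke_claude(prompt: str) -> str:
    """Invoke the model via the Bedrock runtime (needs AWS credentials
    and Bedrock model access enabled in the account)."""
    import boto3  # deferred import so request-building works offline

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",  # illustrative model ID
        body=build_claude_request(prompt),
    )
    return json.loads(response["body"].read())["completion"]


if __name__ == "__main__":
    # Building the payload requires no AWS account; only invoke_claude does.
    print(build_claude_request("Which AWS service fits a static website?"))
```

The same pattern applies to other Bedrock-hosted models: only the `modelId` and the JSON body format change per model family.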

Former Snap software engineer Joshua Xu believes AI-generated video is about to have a moment like Snapchat or Instagram had in the early days of the mobile photography revolution.


As early proof of that, he points to his own company HeyGen. After launching its AI-powered video creation app last September, HeyGen reached $1 million in annual recurring revenue in March, then $10 million in August. Today, that number is up to $18 million, Xu, cofounder and CEO, told Forbes.

“Snapchat is a camera company where everyone creates content through the mobile camera,” Xu said. “We think AI can create the content. AI could become the new camera.”

On Wednesday, HeyGen announced $5.6 million in new venture capital funding led by Sarah Guo’s Conviction Partners. The round values the Los Angeles-based company at $75 million. As part of the deal, Guo will take a board seat in place of HongShan (formerly Sequoia China) as HeyGen takes measures to distance itself from its Chinese origins.