
Why Go With an Evil-Looking Orb?

But OpenAI isn’t Altman’s only project, and it’s not even his only project with ambitions to change the world. He is also a co-founder of a company called Tools for Humanity, which has the lofty goal of protecting people from the economic devastation that may arise from AI taking human jobs. The company’s first major project is Worldcoin, which uses an evil-looking metallic orb—called the Orb—to take eyeball scans from people all over the world.

Those scans are converted into unique codes that confirm you are a real, individual human, not a bot. In the future, this will theoretically grant you access to a universal basic income parceled out through Worldcoin’s cryptocurrency, WLD. (You will want this because you will not be able to find work.) More than 2 million people in 35 countries have been scanned already, according to Tools for Humanity’s World ID app. Although it’s not yet available in the United States, the WLD token has been distributed elsewhere, and the company has also recruited users through cash incentives in countries such as Indonesia and Kenya.
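The "unique code" step is essentially a biometric deduplication check. Below is a minimal sketch of that idea in Python; it is not Worldcoin's actual pipeline (which reportedly derives an iris feature code and compares codes with fuzzy matching rather than exact hashes), and the template bytes and registry here are hypothetical stand-ins.

```python
import hashlib

# Hypothetical registry of identity codes already issued.
enrolled_ids: set[str] = set()

def enroll(iris_template: bytes) -> str | None:
    """Turn an iris template into a unique identity code.

    Assumes the scanner has already reduced the raw iris image to a
    stable byte template. Real biometric systems compare noisy
    templates against a similarity threshold, not exact hashes.
    """
    identity_code = hashlib.sha256(iris_template).hexdigest()
    if identity_code in enrolled_ids:
        return None  # Duplicate: this person has already enrolled.
    enrolled_ids.add(identity_code)
    return identity_code

# One code per person; a repeat scan is rejected.
print(enroll(b"alice-template"))  # new identity code
print(enroll(b"bob-template"))    # new identity code
print(enroll(b"alice-template"))  # None: already enrolled
```

The dedupe-on-a-derived-code pattern is what turns a biometric into a "one person, one account" guarantee: the registry stores only the derived codes, not the scans themselves.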

How savvy trillion-dollar chipmaker Nvidia is powering the AI goldrush

The US firm best known for its gaming tech has long been ahead of the curve in supplying the tools needed by tech developers.

It’s not often that the jaws of Wall Street analysts drop to the floor, but late last month it happened: Nvidia, a company that makes computer chips, issued sales figures that blew the Street’s collective mind. It had pulled in $13.5bn in revenue in the last quarter, at least $2bn more than the aforementioned financial geniuses had predicted. Suddenly, the surge in the company’s share price in May, which had turned it into a trillion-dollar company, made sense.

Well, up to a point, anyway. But how had a company that since 1998 – when it released the revolutionary… More.

Senators Want ChatGPT-Level AI to Require a Government License

Under the proposal, developing face recognition and other “high risk” applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party.

The framework also proposes that companies should publicly disclose details of the training data used to create an AI model and that people harmed by AI get a right to bring the company that created it to court.

The senators’ suggestions could prove influential in the days and weeks ahead as debates over how to regulate AI intensify in Washington. Early next week, Senators Richard Blumenthal and Josh Hawley will oversee a Senate subcommittee hearing on how to meaningfully hold businesses and governments accountable when they deploy AI systems that harm people or violate their rights. Microsoft president Brad Smith and the chief scientist of chipmaker Nvidia, William Dally, are due to testify.

Qualcomm CEO says AI may breathe new life into smartphones: ‘It could create a new upgrade cycle’

Qualcomm CEO Cristiano Amon said the company’s upcoming Snapdragon Summit in October could lead to major developments in mobile technology.

New use cases from smartphone makers and other manufacturers the company works with “could create a new upgrade cycle for phones,” Amon said.

In 2022, global smartphone shipments fell 11.3% year-over-year to 1.21 billion units, the lowest level since 2013, according to data from market research firm IDC.

The CEO of U.S. chip giant Qualcomm thinks artificial intelligence could give the smartphone market a fresh lease on life.
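As a quick sanity check on those IDC figures, the year-over-year arithmetic can be run directly. The snippet below is illustrative only: it takes the numbers quoted above and backs out the implied prior-year base.

```python
# IDC figures quoted above: 2022 shipments fell 11.3% year-over-year
# to 1.21 billion units.
shipments_2022 = 1.21e9
yoy_decline = 0.113

# Back out the implied 2021 base from: 2022 = 2021 * (1 - decline).
implied_2021 = shipments_2022 / (1 - yoy_decline)
print(f"Implied 2021 shipments: {implied_2021 / 1e9:.2f}B")  # ~1.36B

# Sanity check: applying the decline to the base recovers the 2022 figure.
assert abs(implied_2021 * (1 - yoy_decline) - shipments_2022) < 1.0
```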

AI Hospitals: A Step Towards the Future — Science View

This looks excellent, and it comes from one of the most technologically advanced countries. It’s late and I’ve saved the video to watch later, but I can imagine what’s in it: AI is always useful, especially in medicine.


Learn more about science on NHK WORLD-JAPAN:
https://www3.nhk.or.jp/nhkworld/en/ondemand/category/23/?cid…9-sv303-hp.
More quality content available on NHK WORLD-JAPAN:
https://www3.nhk.or.jp/nhkworld/en/ondemand/video/?cid=wohk-yt-2309-sv303-hp.

The integration of artificial intelligence into healthcare seeks to reduce diagnostic errors while making care more humane.

Scientists Devised a Way to Tell if ChatGPT Becomes Aware of Itself

Our lives were already infused with artificial intelligence (AI) when ChatGPT reverberated around the online world late last year. Since then, the generative AI system developed by tech company OpenAI has gathered speed and experts have escalated their warnings about the risks.

Meanwhile, chatbots started going off-script and talking back, duping other bots, and acting strangely, sparking fresh concerns about how close some AI tools are getting to human-like intelligence.

The Turing Test has long been the fallible standard for determining whether machines exhibit intelligent behavior that passes as human. But with this latest wave of AI creations, it feels like we need something more to gauge their evolving capabilities.
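For readers who have only heard the term, the Turing Test's imitation-game structure is easy to sketch. The code below is a toy illustration, with hypothetical judge and responder callables standing in for a real experimental setup: the machine "passes" when the judge can no longer reliably tell its transcripts from a human's.

```python
import random

def turing_test(judge, human, machine, rounds: int = 30) -> float:
    """Run a toy imitation game and return the judge's accuracy.

    `human` and `machine` are callables mapping a question to a reply;
    `judge` maps a transcript to a guess ("human" or "machine").
    Accuracy near 0.5 means the machine's answers pass as human.
    """
    questions = ("Where did you grow up?", "What made you laugh today?")
    correct = 0
    for _ in range(rounds):
        # Secretly pick one responder, then let the judge guess who it was.
        responder, label = random.choice([(human, "human"), (machine, "machine")])
        transcript = [(q, responder(q)) for q in questions]
        if judge(transcript) == label:
            correct += 1
    return correct / rounds
```

The article's argument is that this behavioral bar is no longer discriminating enough, which is why researchers are designing tests that probe for properties such as self-awareness rather than surface human-likeness.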