
Last week, OpenAI published the GPT-4o “scorecard,” a report that details “key areas of risk” for the company’s latest large language model, and how it hopes to mitigate them.

In one terrifying instance, OpenAI found that the model’s Advanced Voice Mode — which allows users to speak with ChatGPT — unexpectedly imitated users’ voices without their permission, Ars Technica reports.

“Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode,” OpenAI wrote in its documentation. “During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice.”


Will artificial intelligence make life better for humans or lead to our downfall? As developers race toward implementing AI in every aspect of our lives, it is already showing promise in areas like medicine. But what if it is used for nefarious purposes?

In Japan, the inventor and scientist behind the firm Cyberdyne is working to make life better for the sick and elderly. Professor Yoshiyuki Sankai’s robot suits are AI-driven exoskeletons used in rehabilitative medicine to help stroke victims and others learn to walk again. But he doesn’t see the benefits of AI ending there; he predicts a future world where AIs will live in harmony with humans as a new, benevolent species.

Yet in Silicon Valley, the cradle of AI development, there is an unsettling contradiction: a deep uncertainty among many developers about the untamable forces they are unleashing. Gabriel Mukobi, a computer science graduate student at Stanford, is sounding the alarm that AI could push us toward disaster, and even human extinction. He’s at the forefront of a small field of researchers swimming against the current to make sure AI is safe and beneficial for everyone.

What are the promises and perils of AI? And who gets to decide how it will be used?


The study

The researchers monitored the brainwaves of 100 students as they performed a series of cognitive tasks. They then ran a group comparison analysis, comparing the performance of students with higher test scores (recorded prior to the study) against that of students with lower test scores.

The brainwave data were then analyzed using algorithms running on a D-Wave quantum annealing computer. According to the researchers, the study yielded new insights into how cognitive ability relates to testing outcomes.
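The researchers don’t specify which statistical test their “group comparison analysis” used. As a minimal stand-in sketch (not the study’s actual method), here is how a simple two-group comparison of higher- vs. lower-scoring students might look, using Welch’s t statistic and entirely made-up scores:

```python
import statistics

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent groups with unequal variances.
    A generic stand-in for the article's unspecified 'group comparison analysis'."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Hypothetical task-performance scores for the two groups (illustrative only).
higher_scorers = [88, 92, 85, 90, 95]
lower_scorers = [70, 75, 68, 72, 74]

print(round(welch_t(higher_scorers, lower_scorers), 2))
```

A large positive t value would suggest the higher-scoring group genuinely outperformed the other on the cognitive tasks, rather than the gap being noise. Nothing in this sketch requires quantum hardware; the article doesn’t explain what the annealer contributed.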


We’ve reached a new milestone in the uncanny valley, folks: AIs are now Rickrolling humans.

In a now-viral post on X-formerly-Twitter, Flo Crivello, the CEO of the AI assistant firm Lindy, explained how this bizarre memetic situation featuring Rick Astley’s 1987 hit “Never Gonna Give You Up” came to pass.

Known as “Lindys,” the company’s AI assistants are intended to help customers with various tasks. Part of a Lindy’s job is to teach clients how to use the platform, and it was during this task that the AI helper hallucinated a link to a video tutorial that didn’t exist.

“If pursued, we might see by the end of the decade advances in AI as drastic as the difference between the rudimentary text generation of GPT-2 in 2019 and the sophisticated problem-solving abilities of GPT-4 in 2023,” Epoch wrote in a recent research report assessing how likely that scenario is.

But modern AI already sucks in a significant amount of power, tens of thousands of advanced chips, and trillions of online examples. Meanwhile, the industry has endured chip shortages, and studies suggest it may run out of quality training data. Assuming companies continue to invest in AI scaling: Is growth at this rate even technically possible?
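To put “growth at this rate” in perspective, here is a back-of-the-envelope sketch of compound growth in training compute. The ~4x-per-year growth rate and the GPT-4-scale baseline of roughly 2e25 FLOP are assumptions loosely based on publicly reported estimates, not figures from this article:

```python
def projected_compute(base_flop: float, growth_per_year: float, years: int) -> float:
    """Project training compute after `years` of constant exponential growth."""
    return base_flop * growth_per_year ** years

# Assumed baseline: ~2e25 FLOP, a rough public estimate for GPT-4-scale runs.
BASE_FLOP = 2e25
GROWTH = 4.0  # assumed ~4x/year, in line with reported historical trends

for year in range(7):  # roughly through the end of the decade
    print(f"year {year}: {projected_compute(BASE_FLOP, GROWTH, year):.1e} FLOP")
```

Sustained 4x/year growth multiplies compute by about 4,000x in six years, which is why each of the constraints below (power, chips, data, latency) becomes a serious question rather than a footnote.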

In its report, Epoch looked at four of the biggest constraints to AI scaling: power, chips, data, and latency. TL;DR: Maintaining growth is technically possible, but not certain. Here’s why.