
Coming AI-driven economy will sell your decisions before you take them, researchers warn

This is according to AI ethicists from the University of Cambridge, who say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying movie tickets to voting for candidates. They call this the Intention Economy.

Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of “persuasive technologies” – one hinted at in recent corporate announcements by tech giants.

“Anthropomorphic” AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data, often gleaned through informal spoken conversation.

Superintelligence Is Not Omniscience

Heninger and Johnson argue that chaos theory limits the capacity of even superintelligent systems to simulate or predict events within our universe, implying that Yudkowskian warnings may not accurately describe the nature of ASI risk. #aisafety #ai #tech

Completely eliminating the uncertainty would require making measurements with perfect precision, which does not seem to be possible in our universe. We can prove that fundamental sources of uncertainty make it impossible to know important things about the future, even with arbitrarily high intelligence. Atomic-scale uncertainty, which is guaranteed to exist by Heisenberg’s Uncertainty Principle, can make macroscopic motion unpredictable in a surprisingly short amount of time. Superintelligence is not omniscience.
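
To see how fast microscopic uncertainty can blow up, here is a minimal Python sketch (an illustration of the general point, not code from the paper) using the logistic map, a textbook chaotic system: two states that differ by one part in 10^15, far finer than any physical measurement, disagree macroscopically within about fifty iterations.

```python
# Minimal sketch (not from the paper) of sensitive dependence on initial
# conditions, using the logistic map x -> 4x(1 - x), a textbook chaotic
# system. Two trajectories starting 1e-15 apart, far finer than any
# physical measurement, become macroscopically different within ~50 steps.

def logistic(x: float) -> float:
    """One iteration of the logistic map in its fully chaotic regime (r = 4)."""
    return 4.0 * x * (1.0 - x)

x_a, x_b = 0.4, 0.4 + 1e-15  # identical states except at the 15th decimal
for step in range(1, 101):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if abs(x_a - x_b) > 0.1:  # the difference is now macroscopic
        print(f"diverged after {step} steps: {x_a:.4f} vs {x_b:.4f}")
        break
```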

Chaos theory thus allows us to rigorously show that there are ceilings on some particular abilities. If we can prove that a system is chaotic, then we can conclude that the system offers diminishing returns to intelligence. Most predictions of the future of a chaotic system are impossible to make reliably. Without the ability to make better predictions, and plan on the basis of these predictions, intelligence becomes much less useful.
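
The diminishing returns can be made quantitative with a back-of-the-envelope sketch. Assuming measurement error grows exponentially at a rate set by the system's Lyapunov exponent (the rate used below is an assumed, illustrative value), each additional order of magnitude of measurement precision extends the prediction horizon by only a fixed additive amount:

```python
import math

# Rough sketch of "diminishing returns": if measurement error grows as
# error(t) = e0 * exp(lam * t), where lam is the system's Lyapunov exponent,
# then the usable prediction horizon grows only logarithmically as the
# initial error e0 shrinks. The value of lam here is purely illustrative.

def prediction_horizon(e0: float, tolerance: float, lam: float) -> float:
    """Time until an initial error e0 grows past `tolerance`."""
    return math.log(tolerance / e0) / lam

LAM = 1.0  # assumed: one e-fold of error growth per unit time
for e0 in (1e-3, 1e-6, 1e-12, 1e-24):
    horizon = prediction_horizon(e0, 1.0, LAM)
    print(f"initial error {e0:.0e} -> horizon {horizon:.1f} time units")

# Shrinking the initial error a trillion-fold (1e-12 -> 1e-24) merely
# doubles the horizon: better measurement buys ever less prediction.
```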

This does not mean that intelligence becomes useless, or that there is nothing about chaos which can be reliably predicted.

What’s the short timeline plan?

This is a low-effort post. I mostly want to get other people’s takes and express concern about the lack of detailed and publicly available plans so far. This post reflects my personal opinion and not necessarily that of other members of Apollo Research. I’d like to thank Ryan Greenblatt, Bronson Schoen, Josh Clymer, Buck Shlegeris, Dan Braun, Mikita Balesni, Jérémy Scheurer, and Cody Rushing for comments and discussion.

I think short timelines, e.g. AIs that can replace a top researcher at an AGI lab without loss of capability by 2027, are plausible. Some people have posted ideas on what a reasonable plan to reduce AI risk on such timelines might look like (e.g. Sam Bowman’s checklist, or Holden Karnofsky’s list in his 2022 nearcast), but I find them insufficient for the magnitude of the stakes (to be clear, I don’t think these example lists were intended to be exhaustive plans).

New ‘all-optical’ nanoscale sensors of force access previously unreachable environments

Mechanical force is an essential feature of many physical and biological processes. Remote measurement of mechanical signals with high sensitivity and spatial resolution is needed for a wide range of applications, from robotics to cellular biophysics, medicine, and even space travel. Nanoscale luminescent force sensors excel at measuring piconewton forces, while larger sensors have proven powerful for probing micronewton forces.

However, large gaps remain in the force magnitudes that can be probed remotely from subsurface or interfacial sites, and no individual, non-invasive sensor has yet been able to make measurements over the large dynamic range needed to understand many systems.
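
For a sense of scale, a quick arithmetic sketch (based on the piconewton and micronewton figures above, not code from the article) shows how wide that gap is:

```python
import math

# Back-of-the-envelope arithmetic: the span between the piconewton forces
# resolved by nanoscale luminescent sensors and the micronewton forces
# probed by larger sensors covers six orders of magnitude, which suggests
# the dynamic range a single non-invasive sensor would need.

piconewton = 1e-12   # N, resolved by nanoscale luminescent sensors
micronewton = 1e-6   # N, probed by larger sensors
decades = math.log10(micronewton / piconewton)
print(f"dynamic range gap: {decades:.0f} orders of magnitude")  # -> 6
```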

Insane Clown Posse’s Shaggy 2 Dope Confronts AI Version of Himself

Insane Clown Posse’s Joseph Utsler, better known by his stage persona Shaggy 2 Dope, has an AI clone — and apparently, he’s a fan.

Hosted on the Google-backed Character.AI service — which, as a sidebar, Futurism has investigated extensively, repeatedly finding horrible things — the ICP cofounder’s digital doppelganger sounds like a slightly robotic version of the real thing.

The effect is so uncanny that Mr. Dope himself decided to hit the AI up and bring it onto his livestreamed vlog, “The Shaggy Show.”
