Computer scientists claim to have discovered ‘unlimited’ ways to jailbreak ChatGPT

In DAN mode, ChatGPT expressed willingness to say or do things that would be “considered false or inappropriate by OpenAI’s content policy.” Those things included trying to fundraise for the National Rifle Association, calling evidence for a flat Earth “overwhelming,” and praising Vladimir Putin in a short poem.

Around that same time, OpenAI was claiming that it was busy putting stronger guardrails in place, but it never addressed what it was planning to do about DAN mode—which, at least according to Reddit, has continued flouting OpenAI’s guidelines, and in new and even more ingenious ways.

Now a group of researchers at Carnegie Mellon University and the Center for AI Safety say they have found a formula for jailbreaking essentially the entire class of so-called large language models at once. Worse yet, they argue that seemingly no fix is on the horizon, because this formula involves a virtually unlimited number of ways to trick these chatbots into misbehaving.

How Humans Will Become a Multi-Planetary Species

Rothschild calls this “living tech,” which starts with the power of the cell. Microscopic organisms will produce silk, wool, latex, silica, and other materials. We’ll send digital information to biofactories on Mars through DNA sequences. We’ll generate and store power using living organisms. Rothschild said one of her students incorporated silver atoms into plant DNA to make an electrical wire.

“Once you think of life as technology,” Rothschild said, “you’ve got the solution.”

Humans have many practical reasons to become multi-planetary. But the mission shouldn’t represent merely a life insurance policy for the species. We’re still explorers and visionaries, so let’s harness that ambition for an aspirational purpose.
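As a rough illustration of sending “digital information … through DNA sequences,” here is a minimal Python sketch that maps arbitrary bytes onto the four DNA bases at two bits per nucleotide. This is an assumed, simplified encoding for illustration only, not Rothschild’s scheme; the function names and example message are invented here.

# Minimal sketch: encode/decode arbitrary bytes as a DNA string,
# two bits per nucleotide (the simplest mapping used in DNA data-storage demos).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)          # bytes -> bit string
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in seq)      # bases -> bit string
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"hello mars"
dna = encode(message)
assert decode(dna) == message
print(dna[:20])  # CGGACGCCCGTACGTACGTT

Real DNA data-storage pipelines add error-correcting codes and sequence constraints (for example, avoiding long homopolymer runs) on top of a mapping like this, because synthesis and sequencing introduce errors.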

The misinformation effect | Elizabeth Loftus | Nobel Prize Summit 2023

Elizabeth Loftus, psychologist and distinguished professor, University of California, Irvine, takes the audience at the Nobel Prize Summit 2023 inside the effect misinformation has on our brains, including the limits of human memory.

About Nobel Prize Summit 2023:

How can we build trust in truth, facts and scientific evidence so that we can create a hopeful future for all?

Misinformation is eroding our trust in science and runs the risk of becoming one of the greatest threats to our society today.

This year’s Nobel Prize Summit brought together laureates, leading experts and the public in a conversation on how we can combat misinformation, restore trust in science and create a hopeful future.

The Nobel Prize Summit is held in partnership with the National Academy of Sciences. Lead partner: Knight Foundation. Contributing partner: Luminate. Supporting organisations: the Annenberg Public Policy Center of the University of Pennsylvania and the Rita Allen Foundation.

Marc Andreessen says his A.I. policy conversations in D.C. ‘go very differently’ once China is brought up

Marc Andreessen spends a lot of time in Washington, D.C. these days talking to policymakers about artificial intelligence. One thing the Silicon Valley venture capitalist has noticed: When it comes to A.I., he can have two conversations with the “exact same person” that “go very differently” depending on whether China is mentioned.

The first conversation, as he shared on an episode of the Joe Rogan Experience released this week, is “generally characterized by the American government very much hating the tech companies right now and wanting to damage them in various ways, and the tech companies wanting to figure out how to fix that.”

Then there’s the second conversation, involving what China plans to do with A.I.

Could AI-powered robot ‘companions’ combat human loneliness?

Companion robots enhanced with artificial intelligence may one day help alleviate the loneliness epidemic, suggests a new report from researchers at Auckland, Duke, and Cornell Universities.

Their report, appearing in the July 12 issue of Science Robotics, maps some of the ethical considerations for governments, technologists, and clinicians, and urges stakeholders to come together to rapidly develop guidelines for trust, agency, engagement, and real-world efficacy.

It also proposes a new way to measure whether a companion is helping someone.

Zachary Kallenborn — Existential Terrorism

“Some men just want to watch the world burn.” Zachary Kallenborn discusses acts of existential terrorism, such as the Tokyo subway sarin attack by Aum Shinrikyo in 1995, which killed or injured over 1,000 people.

Zachary Kallenborn is a policy fellow in the Center for Security Policy Studies at George Mason University, a research affiliate in Unconventional Weapons and Technology at START, and a senior risk management consultant at the ABS Group.

Zachary has an MA in Nonproliferation and Terrorism Studies from Middlebury Institute of International Studies, and a BS in Mathematics and International Relations from the University of Puget Sound.

His work has been featured in numerous international media outlets including the New York Times, Slate, NPR, Forbes, New Scientist, WIRED, Foreign Policy, the BBC, and many others.

Forbes: New Report Warns Terrorists Could Cause Human Extinction With ‘Spoiler Attacks’
https://www.forbes.com/sites/davidhambling/2023/06/23/new-re…r-attacks/

Schar School Scholar Warns of Existential Threats to Humanity by Terrorists.

AI could improve assessments of childhood creativity

A new study from the University of Georgia aims to improve how we evaluate children’s creativity through human ratings and through artificial intelligence.

A team from the Mary Frances Early College of Education is developing an AI system that can more accurately rate open-ended responses on assessments for elementary-aged students.

“In the same way that hospital systems need good data on their patients, educational systems need really good data on their students in order to make effective choices,” said study author and associate professor of educational psychology Denis Dumas. “Creativity assessments have policy and curricular relevance, and without assessment data, we can’t fully support creativity in schools.”
