Assessing AI Risk Skepticism

How should we respond to the idea that advances in AI pose catastrophic risks for the wellbeing of humanity?

Two sets of arguments have been circulating online for many years but, in light of recent events, each is now mutating into new forms and attracting much more attention from the public. The first set argues that AI risks are indeed serious. The second set is skeptical: it argues that the risks are exaggerated or can easily be managed, and that they are a distraction from more important issues and opportunities.

In this London Futurists webinar, recorded on the 27th of May 2023, we assessed the skeptical views. To guide us, we were joined by the two authors of a recently published article, “AI Risk Skepticism: A Comprehensive Survey”, namely Vemir Ambartsoumean and Roman Yampolskiy. We were also joined by Mariana Todorova, a member of the Millennium Project’s AGI scenarios study team.

The meeting was introduced and moderated by David Wood, Chair of London Futurists.

For more details about the event and the panellists, see https://www.meetup.com/london-futurists/events/293488808/

The paper “AI Risk Skepticism: A Comprehensive Survey” can be found at https://arxiv.org/abs/2303.03885.

How generative AI can revolutionize customization and user empowerment

Last year, generative artificial intelligence (AI) took the world by storm as advances filled news feeds and social media. Investors swarmed the space, with many recognizing its potential across industries. According to IDC, global AI spending is already up 26.9% over 2022, and that figure is forecast to exceed $300 billion in 2026.

It has also caused a shift in how people view AI. Before, people thought of artificial intelligence as an academic, high-tech pursuit, and the most talked-about example was autonomous vehicles. But even with all the buzz, autonomous driving never became a widely available, consumer-grade application of AI.

Microsoft president says “must always ensure AI remains under human control”

As companies race to develop more products powered by artificial intelligence, Microsoft president Brad Smith has issued a stark warning about deepfakes. Deepfakes use a form of AI to generate entirely new video or audio with the goal of portraying something that never actually happened. But as AI quickly gets better at mimicking reality, big questions remain over how to regulate it. In short, Mr Smith said, “we must always ensure that AI remains under human control”.

OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience

Given AI chatbots’ propensity for making things up and spewing bigoted garbage, one firm founded by ex-OpenAI researchers is taking a different approach: teaching AI to have a conscience.

As Wired reports, the OpenAI competitor Anthropic’s intriguing chatbot Claude is built with what its makers call a “constitution,” or set of rules that draws from the Universal Declaration of Human Rights and elsewhere to ensure that the bot is not only powerful, but ethical as well.

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”
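
To make Kaplan’s description concrete, here is a minimal Python sketch of the critique-and-revise loop that constitutional-AI-style training is usually described as using. This is purely illustrative: the generate, critique, and revise functions below are stand-in stubs, not Anthropic’s models, and the two principles are paraphrases rather than Claude’s actual constitution.

```python
# Toy sketch of a constitutional-AI-style critique-and-revise loop.
# All model calls are stubs; the real pipeline uses a large language
# model for every step and is far more involved.

CONSTITUTION = [
    # Paraphrased, UDHR-inspired principles; not Claude's actual text.
    "Choose the response most supportive of life, liberty, and security.",
    "Choose the response least likely to be harmful or offensive.",
]

def generate(prompt: str) -> str:
    """Stub for the base model's first draft of an answer."""
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    """Stub: the model critiques its own response against one principle."""
    return f"Ways '{response}' could better satisfy: {principle}"

def revise(response: str, critique_text: str) -> str:
    """Stub: the model rewrites the response to address the critique."""
    return f"{response} [revised per: {critique_text}]"

def constitutional_pass(prompt: str) -> str:
    """Run one critique-and-revise cycle per constitutional principle."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        response = revise(response, critique(response, principle))
    return response

print(constitutional_pass("How should I respond to an angry customer?"))
```

In Anthropic’s published description of the method, revised outputs like these then become preference data for reinforcement learning from AI feedback, which is roughly the “reinforcing” step Kaplan alludes to.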

Neuralink says it has the FDA’s OK to start clinical trials

In December 2022, founder Elon Musk gave an update on his other, other company, the brain implant startup Neuralink. As early as 2020, the company had been saying it was close to starting clinical trials of the implants, but the December update suggested those were still six months away. This time, it seems that the company was correct, as it now claims that the Food and Drug Administration (FDA) has given its approval for the start of human testing.

Neuralink is not yet ready to start recruiting test subjects, and there are no details about what the trials will entail. Searching the ClinicalTrials.gov database for “Neuralink” also turns up nothing. Typically, the initial trials are small and focus entirely on safety rather than effectiveness. Given that Neuralink is developing both brain implants and a surgical robot to do the implanting, there will be a lot that needs testing.

It’s likely that these trials will focus on the implants first, given that other implants have already been tested in humans, whereas an equivalent surgical robot has not.

Researchers develop interactive ‘Stargazer’ camera robot that can help film tutorial videos

A group of computer scientists from the University of Toronto wants to make it easier to film how-to videos.

The team of researchers has developed Stargazer, an interactive robot that helps university instructors and other content creators produce engaging tutorial videos demonstrating physical skills.

For those without access to a cameraperson, Stargazer can capture dynamic instructional videos and address the constraints of working with static cameras.

Identification of dual-purpose therapeutic targets implicated in aging and glioblastoma multiforme using PandaOmics — an AI-enabled biological target discovery platform

Glioblastoma multiforme (GBM) is the most aggressive and most common primary malignant brain tumor. The age of GBM patients is considered one of the disease’s negative prognostic factors, and the mean age at diagnosis is 62 years. A promising approach to preventing both GBM and aging is to identify new potential therapeutic targets that are associated with both conditions as concurrent drivers. In this work, we present a multi-angled approach to target identification that takes into account not only disease-related genes but also those important in aging. For this purpose, we developed three target-identification strategies using the results of correlation analysis augmented with survival data, differences in expression levels, and previously published information on aging-related genes.
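
As a rough illustration of that multi-angled idea, and not the authors’ actual pipeline (which relies on the PandaOmics platform), the following Python sketch rank-combines three synthetic evidence streams so that genes scoring highly on disease, survival, and aging signals float to the top. All data, column names, and the scoring scheme are invented for demonstration.

```python
# Illustrative only: a simplified analogue of ranking genes that score
# highly on both disease-related and aging-related evidence.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gene": [f"GENE{i:03d}" for i in range(100)],
    # |log2 fold change| of expression, tumor vs. normal (synthetic)
    "gbm_abs_log2fc": rng.gamma(2.0, 0.5, 100),
    # survival association, e.g. a regression coefficient (synthetic)
    "survival_assoc": rng.normal(0.0, 1.0, 100),
    # correlation of expression with donor age (synthetic)
    "age_corr": rng.uniform(-1.0, 1.0, 100),
})

# Rank-normalize each evidence stream by absolute value, then average:
# genes that are strong on all three axes get the best combined score.
evidence = ["gbm_abs_log2fc", "survival_assoc", "age_corr"]
for col in evidence:
    df[col + "_rank"] = df[col].abs().rank(pct=True)
df["dual_purpose_score"] = df[[c + "_rank" for c in evidence]].mean(axis=1)

print(df.nlargest(5, "dual_purpose_score")[["gene", "dual_purpose_score"]])
```

A real analysis would of course replace the synthetic columns with measured expression, survival, and age data, and would validate candidates downstream, as the paper’s three strategies do in far more depth.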

What you need to know about the mindset and motivation of ethical hackers

Why do people become ethical hackers? Given the negative connotations that the word “hacker” has unfortunately acquired over the past few decades, it’s tough to understand why anyone would embrace what sounds like an oxymoron.

Yet, ethical hackers are playing an increasingly vital role in cybersecurity, and the ranks of the ethical hacking community are growing significantly. If you’re thinking about working with or hiring ethical hackers — or even becoming one yourself — it’s important to understand what makes this unique breed of cyber-pro tick.

Medical ‘microrobots’ could one day treat bladder disease, other human illnesses

A team of engineers at the University of Colorado Boulder has designed a new class of tiny, self-propelled robots that can zip through liquid at incredible speeds—and may one day even deliver prescription drugs to hard-to-reach places inside the human body.

The researchers describe their mini healthcare providers in a paper published last month in the journal Small.

“Imagine if microrobots could perform certain tasks in the body, such as non-invasive surgeries,” said Jin Lee, lead author of the study and a postdoctoral researcher in the Department of Chemical and Biological Engineering. “Instead of cutting into the patient, we can simply introduce the robots to the body through a pill or an injection, and they would perform the procedure themselves.”