
A raft of industry experts have given their views on the likely impact of artificial intelligence on humanity in the future. The responses are unsurprisingly mixed.

The Guardian has published an interesting article on the potential socioeconomic and political impact of the ever-increasing rollout of artificial intelligence (AI) across society. It asked various experts in the field for their views, and the responses were, unsurprisingly, a mixed bag of doom, gloom, and hope.


Image credit: Yucelyilmaz/iStock.

“I don’t think the worry is of AI turning evil or AI having some kind of malevolent desire,” Jessica Newman, director of the University of California, Berkeley’s Artificial Intelligence Security Initiative, told the Guardian. “The danger is from something much more simple, which is that people may program AI to do harmful things, or we end up causing harm by integrating inherently inaccurate AI systems into more and more domains of society,” she added.

Geneticists have unearthed a major event in the ancient history of sturgeons and paddlefish that has significant implications for the way we understand evolution. They have pinpointed a previously hidden “whole genome duplication” (WGD) in the common ancestor of these species, which seemingly opened the door to genetic variations that may have conferred an advantage around the time of a major mass extinction some 200 million years ago.

The big-picture finding suggests that many more shared WGDs in other species, occurring just before periods of extreme environmental upheaval in Earth’s tumultuous history, may have been overlooked.

The research, led by Professor Aoife McLysaght and Dr. Anthony Redmond from Trinity College Dublin’s School of Genetics and Microbiology, has just been published in Nature Communications.

It’s another high-profile warning about AI risk that will divide experts. Signatories include Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.

A group of top AI researchers, engineers, and CEOs has issued a new warning about the existential threat they believe AI poses to humanity.

The 22-word statement, kept deliberately short so it would be as broadly acceptable as possible, reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”


Another warning from AI’s top table.

When will an asteroid hit Earth and wipe us out? Not for at least 1,000 years, according to a team of astronomers. Probably.

Either way, you should get to know an asteroid called 7482 (1994 PC1), the only one known whose orbital path will consistently cross Earth’s for the next millennium, and thus the one with the largest probability of a “deep close encounter” with us, specifically in 502 years. Possibly.

Published on a preprint archive and accepted for publication in The Astronomical Journal, the paper states that astronomers have found almost all of the kilometer-sized asteroids; there are a little under 1,000 of them.

I had an amazing experience at the Foresight Institute’s Whole-Brain Emulation (WBE) Workshop at a venue near Oxford! For more information and a list of participants, see: https://foresight.org/whole-brain-emulation-workshop-2023/ I had the opportunity to work with a group of some of the most brilliant, ambitious, and visionary people I’ve ever encountered on the quest to recreate the human brain in a computer. We also discussed in depth the existential risks of upcoming artificial superintelligence and how to mitigate them, perhaps with the aid of WBE.

My subgroup focused on exploring the challenge of human connectomics (mapping all of the neurons and synapses in the brain).


WBE is a potential technology for generating software intelligence that is human-aligned simply by virtue of being based directly on human brains. Past discussions have generally assumed a fairly long timeline to WBE, while AGI timelines carried broad uncertainty. There were also concerns that the neuroscience behind WBE might boost AGI capability development without helping safety, although no consensus developed. Recently, many people have updated their AGI timelines towards earlier development, raising safety concerns. That has led some to consider whether WBE development could be significantly sped up, producing a differential-technology-development re-ordering of arrival times in which aligned software intelligence exists early enough to lessen the risk of unaligned AGI.

Artificial intelligence is a superior life form that humans are creating, and many AI researchers have outlined scenarios in which this technology could pose an existential risk to humanity, up to and including the literal end of the world.


AI news timestamps:
0:00 How bad could it be?
2:56 AI destruction scenarios 1 and 2
4:28 The future of artificial intelligence
5:25 Merge with AI for human evolution


Did humanity miss the party? Are SETI, the Drake Equation, and the Fermi Paradox all just artifacts of our ignorance about advanced life in the Universe? And if we are wrong, how would we know?

A new study focusing on black holes and their powerful effect on star formation suggests that we, as advanced life, might be relics from a bygone age in the Universe.

Universe Today readers are familiar with SETI, the Drake Equation, and the Fermi Paradox. All three are different ways that humanity grapples with its situation. They’re all related to the Great Question: Are We Alone? We ask these questions as if humanity woke up on this planet, looked around the neighbourhood, and wondered where everyone else was. Which is kind of what has happened.
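
As a refresher, the Drake Equation estimates N, the number of detectable civilizations in our galaxy, as the product N = R* × fp × ne × fl × fi × fc × L, where R* is the rate of star formation, fp the fraction of stars with planets, ne the number of potentially habitable planets per planet-bearing star, fl, fi, and fc the fractions of those on which life, intelligence, and detectable communication respectively arise, and L the average lifespan of a communicating civilization.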

Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least a 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.

Sadly, I now feel that we’re living the movie Don’t Look Up for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least a 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.

Read More: The Only Way to Deal with the Threat from AI.