
Sep 22, 2021

I Tried Warning Them — Elon Musk on Superhuman AI

Posted in categories: biological, Elon Musk, existential risks, robotics/AI, singularity

“I tried to warn them.” - Elon Musk


Elon Musk has repeatedly warned humanity about the dangers of superhuman AI. He believes the advent of digital superintelligence will bring profound changes to human civilization, and that the technological singularity could be either enormously beneficial or terrible for our society. Musk has said that no one knows for sure what impact superhuman AI will have on our world, but that one thing is certain: we will not be able to control it. He expects artificial intelligence to be used as a weapon and warns that the lack of AI regulation may mean it is already too late for humanity.

Elon Musk has now adopted a “fatalistic” attitude toward the AI control problem because he feels that nothing is being done to mitigate the negative effects of future AI systems.

The reasonable concern about a possible extinction-level event from digital superintelligence centers on the period in which narrow AI advances to artificial general intelligence, since presumably it is during this window that we can still do something to stack the odds in our favor.

Today, with our seemingly endless appetite for better, faster, and cheaper technology, we are collectively contributing to the building of future AI systems, whether we are aware of it or not. As Elon Musk put it: we are the biological bootloader for AI.

One common criticism of Elon Musk is that he drives the development of AI systems such as Neuralink, the implantable brain–machine interface, even as he warns about the dangers of AI. While some view this as hypocrisy, Musk, like many others working in the field, believes the ultimate solution to the AI control or alignment problem is the merging of AI with humans.

Hopefully this merge scenario between humans and machines will prove to be the key to solving the AI control problem.

#ElonMusk #AI #ASI
