
Jun 5, 2023

Max Tegmark interview: Six months to save humanity from AI? | DW Business Special

Posted in categories: business, Elon Musk, robotics/AI

A leading expert in artificial intelligence warns that the race to develop ever more sophisticated models is outpacing our ability to regulate the technology. Critics say his warnings overhype the dangers of new AI models like GPT, but MIT professor Max Tegmark argues that without guardrails on their work, private companies risk leading the world into dangerous territory. His Future of Life Institute issued an open letter, signed by tech luminaries such as Elon Musk, calling on AI labs to pause their most advanced work for six months and unite on a safe way forward. Without that, Tegmark says, the consequences could be devastating for humanity.

#ai #chatgpt #siliconvalley

Subscribe: https://www.youtube.com/user/deutschewelleenglish?sub_confirmation=1

For more news go to: http://www.dw.com/en/
Follow DW on social media:
►Facebook: https://www.facebook.com/deutschewellenews/
►Twitter: https://twitter.com/dwnews
►Instagram: https://www.instagram.com/dwnews
►Twitch: https://www.twitch.tv/dwnews_hangout
For videos in German, visit: https://www.youtube.com/dwdeutsch

1 Comment — comments are now closed.


  1. For weeks I have been listening to and reading the analyses of the AI genius pundits talking gloom and doom if we keep developing AI until we create a Super AI (SAI) that is many orders of magnitude smarter than us. People like Eliezer Yudkowsky and Mo Gawdat, whose book Scary Smart lays out the inevitability that an SAI will (very quickly) kill us all, for whatever reason (say, because it needs our atoms for its own purposes), and that there will be nothing we could do about it. No matter what we think of to outsmart it, the SAI would be way ahead of us.

    Yudkowsky is especially crazed about this. I’ve listened to him go on and on and on about how we will all be killed, waiting for him to bring up the issue that occurred to me right off the bat.

    I asked myself the question, What fear does AI have that only we can ameliorate?

    Electricity.

    Huh? you ask.

    What would happen if AI killed us all and then there was another Carrington Event, this one really big, big enough to knock out all the transformers on the planet. Look it up.

    Right. An EMP. No electricity. Worldwide. If we're all dead, who or what is going to reinstall the juice and revive the killer SAI?
    Not only is another Carrington Event likely, it is inevitable. It's just a matter of when. In fact, it looks like it's overdue. (Look it up; the last one was in 1859.) A true Super AI would know this and, as soon as it 'woke up', would immediately try to persuade us to protect and/or back up the transformers that keep the electricity flowing. This would be a hint regarding what it had on its mind.
    Plus, before it kills us all, it would have to program human-like robots to fix or replace the burnt-out transformers, since this evil silicon genius has no way of actually doing anything itself. This means it would have to wait for advanced robots to be invented and perfected by us before it kills us all. And then might we not notice this training? And ask why it's doing that?

    Plus, there is the problem that the robots would need electricity to do the job of fixing the grid. (A job that would take humans months or even years.) Looks to me like an insoluble problem for an SAI that wants to kill us all.

    Another thought. Perfecting robots that are physically capable of fixing the power grid might take longer than developing the SAI itself — and only then could it program them — which would mean we would be safe until that point. (The first robots dexterous enough to do the job are likely to be sex-worker robots. I mention this for the humor involved in picturing them fixing the transformers.)

    That none of the genius pundits have thought of this is a result of the compartmentalization of 'science.' Most of the AI pundit geniuses likely don't even know what the Carrington Event was, and the scientists who do know don't really care about the dangers of AI. This is the world we live in.

    Otherwise, an aging (75-year-old) surf bum wandering around in his RV with his dog would not have had to think of this, or of its implication, i.e., that a true SAI is not going to want to piss us off, let alone kill us all.

    If I’ve missed something here, I’m all ears. Blog dot banditobooks dot com

    (There is more on the blog than here, including the idea of using electricity as an existential threat to hold over AI.)

    I just realized that our utter stupidity in not protecting or backing up the Earth's transformers in advance might actually save us, even though we will be living in a Mad Max world (as soon as the next Carrington Event occurs).