
Get a Wonderful Person Tee: https://teespring.com/stores/whatdamath.
More cool designs are on Amazon: https://amzn.to/3wDGy2i.
Alternatively, PayPal donations can be sent here: http://paypal.me/whatdamath.

Hello and welcome! My name is Anton, and in this video we will talk about a potential resolution to the Fermi Paradox using another paradox: Levinthal's Paradox.
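For context, Levinthal's Paradox is a combinatorial argument about protein folding: even a small protein has an astronomically large number of possible conformations, yet real proteins fold in milliseconds. A minimal back-of-the-envelope sketch in Python, assuming roughly 100 residues, about 3 stable conformations per backbone bond, and picosecond sampling (the figures commonly used to state the paradox, not values from the links below), shows the scale of the problem:

```python
# Back-of-the-envelope estimate behind Levinthal's paradox.
# All numbers are the classic order-of-magnitude assumptions, not measurements:
#   - a small protein of ~100 amino acid residues
#   - ~3 stable conformations per backbone bond, 2 bonds (phi/psi) per residue
#   - one conformation sampled per picosecond (1e-12 s)

residues = 100
conformations_per_bond = 3
bonds_per_residue = 2            # phi and psi dihedral angles

total_conformations = conformations_per_bond ** (bonds_per_residue * residues)

sampling_rate = 1e12             # conformations sampled per second
seconds_per_year = 3.15e7
age_of_universe_years = 1.4e10

search_time_years = total_conformations / sampling_rate / seconds_per_year

print(f"Possible conformations: ~10^{len(str(total_conformations)) - 1}")
print(f"Exhaustive search time: ~{search_time_years:.1e} years")
print(f"Age of the universe:    ~{age_of_universe_years:.1e} years")
# Real proteins fold in milliseconds to seconds, so they clearly do not
# search conformation space exhaustively -- that gap is the paradox.
```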
Links:
https://theconversation.com/ai-makes-huge-progress-predictin…ent-151181
https://en.wikipedia.org/wiki/Levinthal%27s_paradox.
https://en.wikipedia.org/wiki/Protein_structure_prediction.
https://web.archive.org/web/20110523080407/http://www-miller…nthal.html.
Previous Fermi Paradox part: https://youtu.be/iCDM5uLYeJU
#fermiparadox #proteins #alienlife.

Support this channel on Patreon to help me make this a full time job:
https://www.patreon.com/whatdamath.

Bitcoin/Ethereum to spare? Donate them here to help this channel grow!
bc1qnkl3nk0zt7w0xzrgur9pnkcduj7a3xxllcn7d4
or ETH: 0x60f088B10b03115405d313f964BeA93eF0Bd3DbF

Space Engine is available for free here: http://spaceengine.org.
Enjoy and please subscribe.

Twitter: https://twitter.com/WhatDaMath.

https://youtube.com/watch?v=qusXSHkcQZg&feature=share

Do Androids Dream of Electric Sheep? (filmed as Blade Runner) by Philip K. Dick, full audiobook, with cast and corresponding animated imagery.

Bounty hunter Rick Deckard wakes up to a world devastated by nuclear war, where humans care for animals to prevent the mass extinction of several species, where androids are colonial slaves who kill their masters and flee to hide on Earth.
Deckard’s boss Harry Bryant tells him that Dave Holden, another bounty hunter, was hurt while hunting fugitive androids, and now Deckard has to finish the job.
The catch? The androids are Nexus-6 models, the most intelligent, advanced androids ever created.

We appreciate the support. Thank you for listening! As always, we truly hope you enjoy!
Please Like and Subscribe.
Thanks again to all those who’ve listened and joined our channel!

All male parts (except Roy Batty) voiced by Matthew Silas Sedgwick.
All female parts voiced by Makyla Meyer.
Roy Batty voiced by Chris Carter.

Prologue — 00:00:00
Chapter 1 — 00:00:40
Chapter 2 — 00:19:19
Chapter 3 — 00:38:44
Chapter 4 — 00:51:12
Chapter 5 — 01:10:13
Chapter 6 — 01:30:45
Chapter 7 — 01:42:17
Chapter 8 — 02:05:21
Chapter 9 — 02:23:44
Chapter 10 — 02:45:38
Chapter 11 — 02:58:12
Chapter 12 — 03:10:20
Chapter 13 — 03:33:27
Chapter 14 — 03:47:35
Chapter 15 — 04:06:07
Chapter 16 — 04:34:16
Chapter 17 — 04:53:27
Chapter 18 — 05:04:06
Chapter 19 — 05:25:33
Chapter 20 — 05:39:39
Chapter 21 — 05:43:52
Chapter 22 — 05:56:07

#philipkdick #bladerunner #audiobook

Not going to happen unless some “doomsdayers” decide to take man back to analog. Perish the thought!

Which brings us to Big Blue – not Big Brother – and its move to take artificial intelligence into the cloud minus all the hardware.

Yes, IBM (and let’s not leave out Red Hat, IBM’s core cloud player) has found another way to tout its cloud computing business by creating what it calls an artificial intelligence-focused supercomputer that exists in the cloud.

If life is common in our Universe, and we have every reason to suspect it is, why do we not see evidence of it everywhere? This is the essence of the Fermi Paradox, a question that has plagued astronomers and cosmologists almost since the birth of modern astronomy.

It is also the reasoning behind the Hart-Tipler Conjecture, one of the many (many!) proposed resolutions, which asserts that if advanced life had emerged in our galaxy sometime in the past, we would see signs of their activity everywhere we looked. Possible indications include self-replicating probes, megastructures, and other Type III-like activity.

On the other hand, several proposed resolutions challenge the notion that advanced life would operate on such massive scales. Others suggest that advanced extraterrestrial civilizations would be engaged in activities and locales that would make them less noticeable.

Architects urgently need to get to grips with the existential threat posed by AI or risk, in ChatGPT’s words, “sleepwalking into oblivion”, writes Neil Leach.

In the near future, architects may become a thing of the past. Artificial intelligence (AI) is quickly advancing to a point where it can generate the design of a building completely autonomously. With the potential to create designs faster and with more accuracy than ever before, AI has the potential to revolutionize the architecture industry, leaving traditional architects out of the equation. This could spell the end of the profession as we know it, raising questions of what the future holds for architects in a world of AI-generated buildings.

I did not write the paragraph above. It was generated by ChatGPT, a highly impressive AI text generator that recently launched. Make no mistake: despite its innocuous-sounding name, ChatGPT is no simple chat bot. It is based on GPT-3, a massive Generative Pre-trained Transformer (GPT) that uses deep learning to produce human-like text from user-supplied prompts.
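The underlying mechanism is autoregressive text generation: the model repeatedly predicts the next token given the prompt and everything it has generated so far. GPT-3 itself is only reachable through OpenAI's hosted API, but the same idea can be sketched with the smaller, openly available GPT-2 via the Hugging Face transformers library; the model choice, prompt, and sampling settings below are purely illustrative.

```python
# Minimal sketch of autoregressive text generation with a GPT-style model.
# Uses the openly available GPT-2 rather than GPT-3 (which is API-only);
# the prompt and generation settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In the near future, architects may become"
outputs = generator(
    prompt,
    max_new_tokens=60,        # how many tokens to append to the prompt
    do_sample=True,           # sample instead of greedy decoding
    temperature=0.8,          # higher = more varied continuations
    num_return_sequences=1,
)

print(outputs[0]["generated_text"])
```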

Dr. Craig Kaplan discusses Artificial Intelligence — the past, present, and future. He explains how the history of AI, in particular the evolution of machine learning, holds the key to understanding the future of AI. Dr. Kaplan believes we are on an inexorable path towards Artificial General Intelligence (AGI), which is both an existential threat to humanity AND an unprecedented opportunity to solve climate change, poverty, disease and other challenges. He explains the likely paths that will lead to AGI and what all of us can do NOW to increase the chances of a positive future.

Chapters.
0:00 Intro.
0:22 Overview & summary.
0:45 Antecedents of AI
1:15 1956: Birth of the field / Dartmouth conference.
1:33 1956: The Logic Theorist.
1:58 1986: Backpropagation algorithm.
2:26 2016: Superintelligent AI / AlphaGo.
2:51 Lessons from the past.
3:59 Today’s “Idiot Savant” AI
4:45 Narrow vs. General AI (AGI)
5:15 DeepMind’s AlphaZero.
6:19 Demis Hassabis on AlphaFold.
6:47 AlphaFold’s amazing performance.
8:03 OpenAI’s ChatGPT
9:16 OpenAI’s DALL-E 2
9:50 The future of AI
10:00 AGI is not a tool.
10:30 AGI: Intelligent entity.
10:48 Humans will not be in control.
11:16 The alignment problem.
11:45 Alignment problem is unsolved!
12:45 Likely paths to AGI
13:00 Augmented Reality path to AGI
13:26 Metaverse / Omniverse path to AGI
14:20 AGI: Threat AND Opportunity.
15:10 Get educated — books.
15:48 Get educated — videos.
16:20 Raise awareness.
16:44 How to influence values of AGI
17:52 No guarantees, we must do what we can.
18:47 AGI will learn our values.
19:30 Wrap up / contact info.

LINKS & REFERENCES
Contact:
@iqcompanies.
[email protected].

Websites.
iQStudios website (Free educational videos):
https://www.iqstudios.net/

IQ Company website (Consulting firm specializing in AI & AGI):
https://www.iqco.com/

OpenAI website (Creators of ChatGPT and DALL-E 2):
https://openai.com/

A more frightening implication of the Fermi paradox.


The berserker hypothesis, also known as the deadly probes scenario, is the idea that humans have not yet detected intelligent alien life in the universe because it has been systematically destroyed by a series of lethal Von Neumann probes.[1][2] The hypothesis is named after the Berserker series of novels (1963−2005) written by Fred Saberhagen.[1]

The hypothesis has no single known proposer, and instead is thought to have emerged over time in response to the Hart–Tipler conjecture,[3] or the idea that an absence of detectable Von Neumann probes is contrapositive evidence that no intelligent life exists outside the Solar System. According to the berserker hypothesis, an absence of such probes is not evidence of life’s absence, since interstellar probes could “go berserk” and destroy other civilizations before self-destructing.

In his 1983 paper “The Great Silence”, astronomer David Brin summarized the frightening implications of the berserker hypothesis: it is entirely compatible with all the facts and logic of the Fermi paradox, but would mean that there exists no intelligent life left to be discovered. In the worst-case scenario, humanity has already alerted others to its existence, and is next in line to be destroyed.[4]
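The weight of that reasoning comes from how quickly self-replicating probes could saturate a galaxy. The rough Python sketch below illustrates the scale; every parameter (hop distance, probe speed, construction time, copies per stop) is an assumption chosen only for illustration, not a figure from the sources above.

```python
# Toy estimate of how fast self-replicating (Von Neumann) probes could
# spread through the galaxy.  Every number here is an illustrative
# assumption, chosen only to show the Hart-Tipler style of reasoning.
import math

hop_distance_ly = 10        # typical distance between neighbouring stars
probe_speed_c = 0.10        # cruise speed as a fraction of light speed
build_time_yr = 100         # years to build copies at each stop
copies_per_stop = 2         # each probe builds this many successors

galaxy_diameter_ly = 100_000
star_systems = 1e11

# Time for one "generation": fly one hop, then replicate.
generation_time_yr = hop_distance_ly / probe_speed_c + build_time_yr

# Generations needed until probe count exceeds the number of star systems.
generations_needed = math.ceil(math.log(star_systems, copies_per_stop))

# The expansion wavefront crawls outward one hop per generation, so
# physically crossing the galaxy dominates the total time.
wavefront_speed_ly_per_yr = hop_distance_ly / generation_time_yr
crossing_time_yr = galaxy_diameter_ly / wavefront_speed_ly_per_yr

print(f"Generation time:        {generation_time_yr:.0f} years")
print(f"Generations to 10^11:   {generations_needed}")
print(f"Galaxy crossing time:   {crossing_time_yr:.1e} years")
# A few million years is an eyeblink against the galaxy's ~10-billion-year
# age, which is why the absence of probes is treated as a meaningful fact.
```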

Two enormous cracks in Earth’s crust opened near the Turkish-Syrian border after two powerful earthquakes shook the region on Monday (Feb. 6), killing over 20,000 people.

Researchers from the U.K. Centre for the Observation & Modelling of Earthquakes, Volcanoes & Tectonics (COMET) found the ruptures by comparing images of the area near the Mediterranean Sea coast taken by the European Earth-observing satellite Sentinel-1 before and after the devastating earthquakes.

The longer of the two ruptures stretches 190 miles (300 kilometers) to the northeast from the northeastern tip of the Mediterranean Sea. The crack was created by the first of the two major tremors that hit the region on Monday, the more powerful 7.8-magnitude earthquake that struck at 4:17 a.m. local time (8:17 p.m. EST on Feb. 5). The second crack, 80 miles (125 km) long, opened during the second, somewhat milder 7.5-magnitude temblor about nine hours later, COMET said in a tweet on Friday (Feb. 10).
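COMET's actual analysis relies on satellite radar techniques such as interferometry and pixel-offset tracking, but the underlying "compare before and after" idea can be sketched very crudely in Python. The arrays below are synthetic stand-ins, not real Sentinel-1 data:

```python
# Crude illustration of the "before vs. after" comparison idea, NOT the
# real COMET pipeline (which uses Sentinel-1 radar interferometry and
# pixel-offset tracking).  Both "scenes" here are synthetic numpy arrays.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "before" scene: 200 x 200 pixels of ground reflectance.
before = rng.normal(loc=1.0, scale=0.05, size=(200, 200))

# Synthetic "after" scene: the block north of row 100 (the "fault")
# has shifted 3 pixels east; everything south of it stays put.
after = before.copy()
after[:100, 3:] = before[:100, :-3]

# Difference map: pixels that changed significantly mark the displaced block.
change = np.abs(after - before)
moved = change > 0.01

print(f"Fraction of the scene that moved: {moved.mean():.1%}")
```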

Artificial Superintelligence, or ASI for short, also known as digital superintelligence, refers to a hypothetical agent that possesses intelligence far surpassing that of the smartest and most gifted human minds.

If we as a species manage not to destroy ourselves before the advent of true artificial general intelligence, then the next phase, securing our survival in a post-AGI world, will be even more critical.

The consensus among AI experts is that the time it takes to go from AGI (artificial general intelligence) to ASI (artificial superintelligence) will be far shorter than the time it takes to reach AGI from today's narrow AI systems.

So we really have only one try to make it right.

Philosopher Nick Bostrom has expressed concern about the advent of ASI. He thinks artificial superintelligence may pose the biggest existential threat to humanity.

Besides Nick Bostrom, other well-known public thinkers concerned about ASI include Elon Musk.