
Putin has previously threatened to resort to nuclear weapons if Russia’s goals in Ukraine continue to be thwarted. The annexation brings the use of a nuclear weapon a step closer by giving Putin a potential justification on the grounds that “the territorial integrity of our country is threatened,” as he put it in his speech last week.

He renewed the threat on Friday with an ominous comment that the U.S. atomic bombing of Hiroshima and Nagasaki created a “precedent” for the use of nuclear weapons, echoing references he has made in the past to the U.S. invasion of Iraq as setting a precedent for Russia’s invasion of Ukraine.

Last night at 23:14 UTC, NASA’s DART spacecraft successfully struck asteroid Dimorphos, the 160-metre moonlet orbiting the larger asteroid Didymos. About 38 seconds later, the time it took for the light to reach Earth, people all over the world saw the abrupt end of the live stream from the spacecraft, signalling that the impact had happened successfully – DART was no more.
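That 38-second figure is just the light travel time over the spacecraft's distance. A quick check, assuming an Earth–Dimorphos distance of roughly 11.3 million km at impact (a figure not stated above):

```python
# Sanity check of the ~38 second signal delay. The ~11.3 million km
# Earth-Dimorphos distance at impact is an assumed figure.
C_KM_S = 299_792.458      # speed of light, km/s
distance_km = 11.3e6      # approximate distance at time of impact

delay_s = distance_km / C_KM_S
print(f"light travel time: {delay_s:.0f} s")  # ~38 s
```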

Astronomers on a small slice of our planet’s surface, extending from southern and eastern Africa to the Indian Ocean and the Arabian Peninsula, could actually watch it live with their telescopes. Among those were a half dozen stations joined together for a dedicated observing campaign organised by ESA’s Planetary Defence Office and coordinated by the team of observers of the Agency’s Near-Earth Object Coordination Centre (NEOCC). As usual, when such a timely astronomical event happens, not all stations were successful in their observations: clouds, technical problems and other issues always affect real-life observations.

However, a few of ESA’s collaborating stations could immediately report a successful direct confirmation of DART’s impact. Among them was the team of the Les Makes observatory, on the French island of La Reunion in the Indian Ocean. The sequence of images they provided in real time was impressive: the asteroid immediately started brightening upon impact, and within a few seconds it was already noticeably brighter. Within less than a minute a cloud of ejected material became visible and could be followed while it drifted eastwards and slowly dissipated.

In a new paper published today in the journal Nature Ecology and Evolution, scientists have estimated the conservation status of nearly 1,900 palm species using artificial intelligence, and found more than 1,000 may be at risk of extinction.

The international team of researchers from the Royal Botanic Gardens, Kew, the University of Zurich, and the University of Amsterdam combined existing data from the International Union for Conservation of Nature (IUCN) Red List with novel machine learning techniques to paint a clearer picture of how palms may be threatened. Although popular and well represented on the Red List, the threat to some 70% of these plants has remained unclear until now.
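In broad strokes, the approach is to learn from species that already have a Red List assessment and then predict the status of unassessed ones. A toy, entirely synthetic nearest-neighbour sketch of that idea (the study's actual features, models and data are far more sophisticated):

```python
# Toy sketch (not the authors' actual pipeline): borrow the label of
# the most similar already-assessed species. All data here is synthetic.
import math

# (trait vector, label) for species with an existing assessment;
# real features might be range size, elevation, climate exposure, etc.
assessed = [
    ((0.1, 0.9), "threatened"),
    ((0.2, 0.8), "threatened"),
    ((0.9, 0.1), "not threatened"),
    ((0.8, 0.2), "not threatened"),
]

def predict(traits):
    """Nearest-neighbour prediction: copy the label of the closest
    assessed species in trait space."""
    _, label = min(assessed, key=lambda s: math.dist(s[0], traits))
    return label

# A hypothetical unassessed species whose traits sit near the
# 'threatened' cluster:
print(predict((0.15, 0.85)))  # -> threatened
```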

The IUCN Red List of Threatened Species is widely considered to be a gold standard for evaluating the conservation status of animal, plant, and fungal species. But there are gaps in the Red List that need to be addressed, as not all species have been listed and many of the assessments are in need of an update. Conservation efforts are further complicated by inadequate funding, the sheer amount of time needed to manually assess a species, and public perception favoring certain animals over plants and fungi.

“Tech billionaires are buying up luxurious bunkers to survive a societal collapse they helped create,” Rushkoff says.

The world is going to hell in a handbasket. And no, we’re not the ones saying that; science is. It seems that even billionaires cannot ignore all the signals pointing at a doomsday scenario, and some are now planning their way out of this world, or a way to stay in it.

According to an edited extract from Survival of the Richest by Douglas Rushkoff, published by The Guardian, tech billionaires are buying luxury bunkers and taking precautions to escape a possible apocalypse, which they call The Event.


Bet the dinosaurs wish they’d thought of this.

NASA on Monday will attempt a feat humanity has never before accomplished: deliberately smacking a spacecraft into an asteroid to slightly deflect its orbit, in a key test of our ability to stop cosmic objects from devastating life on Earth.

The Double Asteroid Redirection Test (DART) spaceship launched from California last November and is fast approaching its target, which it will strike at roughly 14,000 miles per hour (23,000 kph).

Neutronium was the material used in the hull of the doomsday machine in Star Trek.

Now I’m not terribly sure what the mechanical properties of neutronium would be like. It is certainly very dense (about a billion tons per cm3, roughly the volume of the tip of your little finger), but it interacts with ordinary matter only weakly. I would expect it to be pretty inefficient at stopping both electromagnetic radiation (neutrons have only a magnetic moment) and matter.

However, in reality there’s a somewhat bigger problem. When neutrons are outside an atomic nucleus, and outside the huge pressure that exists in neutron stars, they have a half-life of about 10 minutes. To make it even more awkward, when a neutron decays it releases about an MeV of energy. Put a few numbers into this: a mole of neutrons (6e23 neutrons) weighs about a gram, and a ton of TNT is 4e9 joules, so just the neutrons in a typical human (about half your body weight) would release roughly the energy of a megaton (one million tons of TNT). A little more scratching around with half-life calculations shows that a substance with a half-life of 10 minutes releases about 1 part in 1,000 of its total energy in the very first second.
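The arithmetic can be sanity-checked in a few lines. The 0.78 MeV per decay and the ~35 kg of neutrons in a 70 kg person are assumed round figures consistent with the text:

```python
# Rough check of the figures above; 0.78 MeV per decay and ~35 kg of
# neutrons in a 70 kg human are assumed round numbers.
N_PER_GRAM = 6.022e23            # neutrons per gram (molar mass ~1 g/mol)
E_DECAY_J = 0.78 * 1.602e-13     # ~1 MeV released per neutron decay, in joules
TON_TNT_J = 4e9                  # joules per ton of TNT (as in the text)
HALF_LIFE_S = 600.0              # free-neutron half-life, ~10 minutes

neutrons = 35_000 * N_PER_GRAM   # ~35 kg of neutrons per person
total_j = neutrons * E_DECAY_J
megatons = total_j / TON_TNT_J / 1e6
print(f"energy in a human's neutrons: ~{megatons:.1f} Mt")  # order of a megaton

# Fraction of the neutrons decaying in the first second: 1 - (1/2)**(t/T_half)
frac_first_second = 1 - 0.5 ** (1 / HALF_LIFE_S)
print(f"fraction released in first second: {frac_first_second:.1e}")  # ~1e-3
```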

This means that if you could extract merely a millilitre of neutronium and free it from that immense pressure, it would release the energy of some 15 million Tsar Bombas (the largest man-made bomb ever detonated) in the very first second.

For Physics & Chemistry experiments for kids delivered to your door head to https://melscience.com/sBIs/ and use promo code DRBECKY50 for 50% off the first month of any subscription (valid until 22nd October 2022).

To find out whether you can see the partial solar eclipse on 25th October 2022 put in your location here: https://www.timeanddate.com/eclipse/map/2022-october-25

To watch the next launch attempt for Artemis live at 6pm EST on Tuesday 27th September head to @NASA’s YouTube channel here: https://www.youtube.com/watch?v=CMLD0Lp0JBg.
To watch the DART mission impact live on Monday 26th September 2022 head to @NASA’s YouTube channel here: https://www.youtube.com/watch?v=4RA8Tfa6Sck.
My previous video on the DART mission: https://youtu.be/ZBhTtaTGhao.
My previous video on whether aliens exist (inc. Drake equation): https://www.youtube.com/watch?v=fihVzPl7Dys.
My previous Night Sky News debunking these JWST Big Bang Theory claims: https://www.youtube.com/watch?v=Fqfap3v0xxw.
My previous video chatting with Dr. Libby Jones about being in control of JWST: https://www.youtube.com/watch?v=UPO8pw8r7ak.
My previous video on the discovery of the star Earendel: https://www.youtube.com/watch?v=VChgsXbIgdw.
Welch et al. (2022; Earendel imaged with JWST — not peer reviewed) — https://arxiv.org/pdf/2208.09007.pdf.
Welch et al. (2022; Earendel discovered with HST — behind paywall) — https://www.nature.com/articles/s41586-022-04449-y.
Carter et al. (2022; JWST direct image exoplanet HIP 65426b — not peer reviewed) — https://arxiv.org/pdf/2208.14990.pdf.
El-Badry et al. (2022; a black hole orbiting a Sun-like star — not peer reviewed) — https://arxiv.org/pdf/2209.06833.pdf.

PDRs4ALL project (that imaged the Orion nebula with JWST) — https://pdrs4all.org/

What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford and Michel de Haan.
00:11 The concept of understanding under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (And, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — ah ha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn’t understand? Is generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive form of understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copy the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world — concepts which, when strong AI comes about, will dissolve into illusions and then show us how they actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intellect — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like high resolution carbon copy like models that accurately reflect true nature or a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in super-symmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course without the luxury of peering under the hood)
01:03:09 Does AlphaGo understand Go, or DeepBlue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers Zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral ‘Turing’ tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 The AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full automation while preserving human dignity?
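Several of the timestamps above (e.g. 52:35) touch on the Drake equation. A minimal sketch of it, where every parameter value is an illustrative assumption rather than a claim:

```python
# Drake equation: N = R* . fp . ne . fl . fi . fc . L
# All parameter values below are illustrative assumptions only.
R_star = 1.5   # rate of star formation in the galaxy, per year
f_p    = 1.0   # fraction of stars with planets
n_e    = 0.2   # potentially habitable planets per planetary system
f_l    = 0.5   # fraction of those where life arises
f_i    = 0.1   # fraction of those developing intelligence
f_c    = 0.1   # fraction of those releasing detectable signals
L      = 1e4   # years a civilisation remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"communicating civilisations: {N:.1f}")  # 15.0 with these inputs
```

Because the product is dominated by the least-known factors (f_l, f_i, f_c, L), plausible inputs can swing N from far below 1 to many thousands, which is part of why the equation feeds straight into Fermi Paradox discussions like the one above.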

Filmed in the Dandenong Ranges in Victoria, Australia.

Many thanks for watching!