
This show is sponsored by Numerai, please visit them here with our sponsor link (we would really appreciate it) http://numer.ai/mlst.

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50

Go to https://brilliant.org/IsaacArthur/ to get a 30-day free trial + the first 200 people will get 20% off their annual subscription.
If the end of the world is nigh, it may be too late to avert a catastrophe. So what can we do to mitigate the damage or recover after a cataclysm comes?

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (please RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE

▬ Cataclysm Index ▬▬▬▬▬▬▬▬▬▬
0:00 — Intro.
03:43 — Nuclear War.
11:24 — Asteroid.
15:34 — Supernova.
18:34 — Gamma Ray Burst.
21:51 — Massive Climate Shift.
23:15 — Snowball Earth.
24:34 — Super Volcano.
28:51 — BioWar.
30:46 — Zombie Apocalypse.
32:25 — Robots / AI.
35:10 — Alien Invasions.

Listen to or download the audio-only version of this episode on SoundCloud: https://soundcloud.com/isaac-arthur-148927746/journey-to-alpha-centauri.

LLM stands for Large Language Model. These are advanced machine learning models trained on massive volumes of text data, often billions of words, to develop a broad understanding of language and to generate natural language. Examples include GPT-3 (Generative Pre-trained Transformer 3) and BERT (Bidirectional Encoder Representations from Transformers). After pre-training, LLMs can be fine-tuned on tasks such as text classification, machine translation, or question answering, making them highly adaptable to a wide range of language-based applications.

LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect responses. Unlike many natural language understanding tasks, math problems usually have exactly one correct answer, making it difficult for LLMs to generate precise solutions. To our knowledge, current LLMs do not indicate a confidence level in their responses, which undermines trust in these models and limits their adoption.

To address this issue, researchers proposed ‘MathPrompter,’ which improves LLM performance on mathematical problems and increases confidence in their predictions. MathPrompter is an AI-powered tool that helps users solve math problems by generating step-by-step solutions. It uses deep learning and natural language processing techniques to understand and interpret a math problem, then generates a solution that explains each step of the process.
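A core idea behind this approach is to generate several independent solution paths for the same problem and only trust an answer when they agree on random instantiations of the problem's variables. The sketch below illustrates that consensus check; the two candidate "LLM outputs" are hardcoded stand-ins for illustration, not real model output.

```python
import random

# Hypothetical "LLM outputs" for the templated problem:
# "A store sells pens at $x each. How much do y pens cost?"

# Candidate 1: an algebraic expression, rendered here as a Python lambda.
candidate_algebraic = lambda x, y: x * y

# Candidate 2: a generated Python function solving the same template.
def candidate_python(x, y):
    total = 0
    for _ in range(y):
        total += x
    return total

def consensus(solvers, trials=5):
    """Evaluate all candidate solvers on random inputs; return True only
    if every solver agrees on every trial (the self-verification step)."""
    for _ in range(trials):
        x, y = random.randint(1, 20), random.randint(1, 20)
        answers = {solver(x, y) for solver in solvers}
        if len(answers) != 1:
            return False
    return True

agree = consensus([candidate_algebraic, candidate_python])
print(agree)  # → True, since both candidates compute x * y
```

If the candidates disagree, the answer is rejected and the model can be re-prompted, which is what makes the final prediction more trustworthy than a single sampled solution.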

A new mechanism that gives rise to superconductivity in a material where the speed of electrons is almost zero has been discovered by scientists at The University of Texas at Dallas and their partners at The Ohio State University. This breakthrough could pave the way for the development of novel superconductors.

The results of their study, which was recently published in the journal Nature, describe a novel approach to calculate electron speed. This study also represents the first instance where quantum geometry has been recognized as the primary contributing mechanism to superconductivity in any material.

The material the researchers studied is twisted bilayer graphene.

There are those who are alarmed at even this minute form of self-awareness in robots, fearing that it could pave the way for AI taking over humankind. Such fears are unfounded: robots are a long way from being self-aware enough to have will and volition of their own.

The jury is still out on the extent to which we humans can claim to be self-aware, apart from being aware of one’s body and its functions. Every one of us is at a different level of self-awareness, and most of us are at the very infancy of this evolutionary journey that could one day lead to us becoming aware of not only the individual self but also of the Supreme Self, Atman.

There’s an old magic trick known as the miser’s dream, where the magician appears to pull coins from thin air. Australian scientists say they can now generate electricity out of thin air with the help of some enzymes. The enzyme reacts to hydrogen in the atmosphere to generate a current.

They learned the trick from bacteria, which are known to use hydrogen for fuel in inhospitable environments such as Antarctica and volcanic craters. Scientists knew hydrogen was involved but didn’t know how it worked until now.

The enzyme is very efficient and works even on trace amounts of hydrogen. It can survive freezing and temperatures up to 80 °C (176 °F). The paper focuses mainly on the physical mechanisms involved, but it is clear that the current generated is minuscule. We don’t expect to see air-powered cell phones anytime soon. Then again, you have to start somewhere, and who knows where this could lead?

[Russ Maschmeyer] and Spatial Commerce Projects developed WonkaVision to demonstrate how 3D eye tracking from a single webcam can support rendering a graphical virtual reality (VR) display with realistic depth and space. Spatial Commerce Projects is a Shopify lab working to provide concepts, prototypes, and tools to explore the crossroads of spatial computing and commerce.

The graphical output provides a real sense of depth and three-dimensional space using an optical illusion that reacts to the viewer’s eye position, which is used to render view-dependent images. The screen comes to feel like a window into a realistic 3D virtual space: objects beyond the window appear to recede with depth, while objects in front of the window appear to project out into the space in front of the screen. The downside is that the illusion only works for one viewer at a time.
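A common way to implement this kind of window illusion (our sketch under general assumptions, not necessarily this project's exact method) is an off-axis projection: each frame, an asymmetric view frustum is recomputed from the tracked eye position relative to the physical screen rectangle, by similar triangles.

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Given the eye position (x, y, z) in screen-centered coordinates
    (z = distance in front of the screen) and the physical screen size,
    return the asymmetric frustum extents (left, right, bottom, top)
    at the near clipping plane."""
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: project screen edges to near plane
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Eye centered 600 mm in front of a 400 x 300 mm screen, near plane at 10 mm:
print(off_axis_frustum((0.0, 0.0, 600.0), 400.0, 300.0, 10.0))
```

When the eye moves off-center, the frustum becomes asymmetric, which is exactly what makes objects "beyond the window" shift with parallax while the screen edges stay fixed, like a real window frame.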

Eye tracking is performed using Google’s MediaPipe Iris library, which relies on the fact that the iris diameter of the human eye is almost exactly 11.7 mm for most humans. Computer vision algorithms in the library use this geometrical fact to efficiently locate and track human irises with high accuracy.
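That near-constant iris diameter is what lets a single webcam recover metric depth: with the pinhole-camera model, the camera-to-eye distance follows from the iris's apparent size in pixels. Here is a minimal sketch of that calculation (our illustration; the focal length and pixel measurement below are assumed example values, not MediaPipe output).

```python
IRIS_DIAMETER_MM = 11.7  # human iris diameter is ~11.7 mm for most people

def eye_distance_mm(focal_length_px: float, iris_diameter_px: float) -> float:
    """Estimate camera-to-eye distance via the pinhole model:
    distance = focal_length_px * real_size_mm / apparent_size_px."""
    if iris_diameter_px <= 0:
        raise ValueError("iris diameter in pixels must be positive")
    return focal_length_px * IRIS_DIAMETER_MM / iris_diameter_px

# Example: a webcam with a ~1000 px focal length observing a 23.4 px iris
# implies the eye is about 500 mm from the camera.
print(eye_distance_mm(1000.0, 23.4))  # → 500.0
```

The same relationship runs in reverse as the viewer moves: a larger apparent iris means the eye is closer, so the estimated 3D eye position can drive the view-dependent rendering described above.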