
Are you ready to discover the potential technological developments that could shape the world we live in and get a glimpse of what life in the year 2100 might be like? As we approach the turn of the century, the world is expected to undergo significant changes and challenges. In this video, we will show you how the merging of humans and artificial intelligence can help solve any problem that comes our way and even predict the future.

Imagine being able to access the thoughts, memories, and emotions of billions of people through the hive mind concept. This will provide a unique way of experiencing other people’s lives and gaining new perspectives. Hyper-personalized virtual realities customized to fulfill every individual’s desire will be the norm. Users will enter a world where their every wish and fantasy constantly comes to life, maximizing their happiness, joy, and pleasure.

Education as we know it will change forever with the ability to download skills and knowledge directly into a person’s brain. People will be able to learn new skills and gain knowledge at unprecedented speeds, becoming experts in any field within seconds. The discovery and use of room-temperature superconductors will revolutionize many industries and transform the world’s infrastructure, especially in transportation. By 2100, this technology will be a reality and used in numerous industries.

Join us until the end of the video, where the final development will really raise your eyebrows. The future is exciting, and it’s happening now. Don’t miss out on this incredible journey!

This show is sponsored by Numerai. Please visit them via our sponsor link (we would really appreciate it): http://numer.ai/mlst.

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.
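The core loop of active inference can be caricatured in a few lines: an agent holding a probabilistic belief over hidden states chooses the action whose expected observation most reduces its uncertainty (expected posterior entropy). The sketch below illustrates only that uncertainty-reduction idea, not Friston's full free-energy formalism; the states, actions, and probabilities are all invented for illustration:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def posterior(prior, likelihood_col):
    """Bayes update: prior over states, likelihood of one observation per state."""
    joint = [p * l for p, l in zip(prior, likelihood_col)]
    z = sum(joint)
    return [j / z for j in joint] if z > 0 else prior

def expected_uncertainty(prior, likelihood):
    """Expected posterior entropy after acting and observing.

    likelihood[s][o] = P(observation o | hidden state s) under this action.
    """
    n_obs = len(likelihood[0])
    total = 0.0
    for o in range(n_obs):
        col = [likelihood[s][o] for s in range(len(prior))]
        p_o = sum(p * c for p, c in zip(prior, col))
        if p_o > 0:
            total += p_o * entropy(posterior(prior, col))
    return total

# Two hidden states, two candidate actions (made-up observation channels).
prior = [0.5, 0.5]
actions = {
    "informative": [[0.9, 0.1], [0.1, 0.9]],    # observation tracks the state
    "uninformative": [[0.5, 0.5], [0.5, 0.5]],  # observation reveals nothing
}

# The active-inference caricature: pick the action that minimizes uncertainty.
best = min(actions, key=lambda a: expected_uncertainty(prior, actions[a]))
print(best)  # "informative"
```

An agent run in a loop like this preferentially samples observations that resolve its uncertainty, which is the sense in which acting on the environment and learning from observations become two sides of the same objective.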

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50

Go to https://brilliant.org/IsaacArthur/ to get a 30-day free trial + the first 200 people will get 20% off their annual subscription.
If the end of the world is nigh, it may be too late to avert a catastrophe. So what can we do to mitigate the damage or recover after a cataclysm comes?

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow us and RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE

▬ Cataclysm Index ▬▬▬▬▬▬▬▬▬▬
00:00 — Intro.
03:43 — Nuclear War.
11:24 — Asteroid.
15:34 — Supernova.
18:34 — Gamma Ray Burst.
21:51 — Massive Climate Shift.
23:15 — Snowball Earth.
24:34 — Super Volcano.
28:51 — BioWar.
30:46 — Zombie Apocalypse.
32:25 — Robots / AI.
35:10 — Alien Invasions.

Listen to or download the audio-only version of this episode on Soundcloud: https://soundcloud.com/isaac-arthur-148927746/journey-to-alpha-centauri.

LLM stands for Large Language Model. These are advanced machine learning models trained on massive volumes of text data to comprehend and generate natural language. Examples include GPT-3 (Generative Pre-trained Transformer 3) and BERT (Bidirectional Encoder Representations from Transformers). LLMs are trained on enormous corpora, often billions of words, to develop a broad understanding of language. They can then be fine-tuned for tasks such as text classification, machine translation, or question answering, making them highly adaptable to a wide range of language-based applications.
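The core idea, predicting likely continuations from patterns in training text, can be illustrated with the simplest possible "language model": a word-bigram counter. Real LLMs such as GPT-3 use transformer networks with billions of parameters; the corpus and everything else here is a toy stand-in:

```python
from collections import defaultdict

# A tiny "training corpus" (invented for illustration).
corpus = "the cat sat on the mat the cat ate the rat"

# "Training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for w1, w2 in zip(words, words[1:]):
    counts[w1][w2] += 1

def next_word(w):
    """Predict the most frequent continuation of w under the bigram counts."""
    followers = counts[w]
    return max(followers, key=followers.get) if followers else None

print(next_word("the"))  # "cat" — it follows "the" most often in this corpus
```

Scaling that same predict-the-next-token objective from bigram counts to a deep network over billions of words is, very loosely, what produces the broad language competence described above.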

LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect answers. Unlike open-ended natural language tasks, math problems usually have exactly one correct answer, making it harder for LLMs to generate precise solutions. Moreover, as far as is known, no current LLM indicates a confidence level alongside its responses, which undermines trust in these models and limits their adoption.

To address this issue, researchers proposed 'MathPrompter', which improves LLM performance on mathematical problems and increases confidence in the model's predictions. MathPrompter is an AI-powered technique that helps solve math problems by generating step-by-step solutions. It uses deep learning and natural language processing to understand and interpret a math problem, then generates a solution that explains each step of the process.
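The confidence gain comes from cross-checking: MathPrompter has the LLM produce several independent solutions to the same problem (for example, an algebraic expression and a Python function) and evaluates them on randomly drawn input values; agreement across candidates raises confidence in the final answer. A minimal sketch of that consensus check, with an invented word problem and invented candidate functions standing in for LLM output:

```python
import random

# Two hypothetical candidate solutions an LLM might propose for:
# "Pens cost $p each; what do q pens cost after a $d discount?"
def candidate_a(p, q, d):
    return p * q - d

def candidate_b(p, q, d):
    return q * p - d  # algebraically identical, phrased differently

def consensus(candidates, trials=5, seed=0):
    """Evaluate all candidates on random inputs; full agreement -> confident."""
    rng = random.Random(seed)
    for _ in range(trials):
        args = [rng.randint(1, 100) for _ in range(3)]
        results = {c(*args) for c in candidates}
        if len(results) > 1:
            return False  # candidates disagree: low confidence
    return True  # all candidates agree on every trial: higher confidence

print(consensus([candidate_a, candidate_b]))  # True
```

A disagreeing candidate (say, one that adds the discount instead of subtracting it) would fail the randomized check, flagging the answer as low-confidence rather than silently wrong.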

There are those who are alarmed at even this minute form of self-awareness in robots, fearing that it could pave the way for AI taking over humankind. Such fears are unfounded: robots are a long way off from becoming self-aware enough to have will and volition of their own.

The jury is still out on the extent to which we humans can claim to be self-aware, beyond awareness of our own bodies and their functions. Each of us is at a different level of self-awareness, and most of us are at the very infancy of an evolutionary journey that could one day lead us to become aware not only of the individual self but also of the Supreme Self, Atman.

A world of levitating trains, quantum computers and massive energy savings may have come a little closer, after scientists claimed to have attained a long hoped-for dream of physics: room temperature superconductivity.

However, the achievement, announced in the prestigious journal Nature, came with two caveats. The first is that at present it only works at 10,000 times atmospheric pressure. The second is that the last time members of the same team announced similar findings they had to retract them amid allegations of malpractice.

Jorge Hirsch, from the University of California, San Diego, said that on the face of it the achievement was stunning. “If this is real it’s extremely impressive, groundbreaking and worthy of the Nobel prize,” he said. But, he added, he did not believe that it was real.

Alloys that can return to their original structure after being deformed have a so-called shape memory. This phenomenon and the resulting forces are used in many mechanical actuating systems, for example in generators or hydraulic pumps. However, it has not been possible to use this shape-memory effect at a small nanoscale. Objects made of shape-memory alloy can only change back to their original shape if they are larger than around 50 nanometers.

Researchers led by Salvador Pané, Professor of Materials for Robotics at ETH Zurich, and Xiang-Zhong Chen, a senior scientist in his group, were able to circumvent this limitation. In a study published in the journal Nature Communications, they demonstrate the shape-memory effect in a layer about twenty nanometers thick, made of materials called ferroic oxides. This achievement now makes it possible to apply the shape-memory effect to tiny nanoscale machines.

At first glance, ferroic oxides do not appear well suited to the shape-memory effect: they are brittle in bulk, and producing very thin layers of them usually requires fixing them onto a substrate, which makes them inflexible. To induce the shape-memory effect nonetheless, the researchers used two different oxides, one of them cobalt ferrite, temporarily applying thin layers of both onto a magnesium substrate. The lattice parameters of the two oxides differ significantly from each other. Once the researchers had detached the two-layered strip from the supporting substrate, the tension between the two oxides twisted it into a spiral-shaped structure.
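The driving force is the mismatch strain between the two lattices: once released from the substrate, the bilayer bends (and, with anisotropy, twists) to relax that strain. A rough classical estimate of the resulting curvature uses the Timoshenko bimorph formula; the 1% strain and 20 nm thickness below are illustrative numbers, not values from the study:

```python
def bilayer_curvature(mismatch, h, m=1.0, n=1.0):
    """Timoshenko bimorph curvature (1/m) of a released two-layer strip.

    mismatch: differential strain between the layers (dimensionless)
    h: total strip thickness (m)
    m: thickness ratio t1/t2, n: elastic modulus ratio E1/E2
    For equal layers (m = n = 1) this reduces to kappa = 3*mismatch/(2*h).
    """
    num = 6.0 * mismatch * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

# Hypothetical numbers: 1% lattice-mismatch strain across a 20 nm bilayer.
kappa = bilayer_curvature(0.01, 20e-9)
radius_nm = 1e9 / kappa
print(f"bend radius ≈ {radius_nm:.0f} nm")
```

Even these made-up numbers show why the effect only matters at the nanoscale: a micron-scale bend radius from a 1% strain is enormous relative to a 20 nm film, but negligible for a bulk part.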

Retro Biosciences’ mysterious backer has finally been revealed!


In 2021 the longevity industry received one of its largest investments to date, with $180m being invested in the pharmaceutical start-up known as Retro Biosciences, or Retro Bio for short. Not only was this investment cause for celebration within the field of regenerative medicine, it also came with a tantalising mystery, as the backer, or indeed backers, did not make themselves publicly known. Given the secrecy involved, it was assumed that the investment had likely come from a small number of individuals, potentially just a single backer. The mystery, combined with the notable size of the investment, drew considerable media attention at the time and has since garnered significant interest in Retro Bio from both the general public and potential future financial backers. That was until last week, when the mystery backer finally decided that now was the right time to reveal their identity to the general public.

In an interview with MIT Technology Review, American entrepreneur Sam Altman revealed that he was the sole backer of the pharmaceutical start-up, single-handedly providing the entire $180m investment. Altman, who primarily made his fortune in the tech industry (through social companies such as Loopt), has become something of an angel investor for a slew of world-changing, innovative companies involved in everything from artificial intelligence to nuclear energy. It is hoped that this significant single investment marks the beginning of a longevity tech boom, similar to what was seen during the dot-com boom (but hopefully without the disastrous ending).