
Chapter 1: 0:00 — 25:49
Chapter 2: 26:00 — 43:44
Chapter 3: 43:55 — 1:25:36
Chapter 4: 1:25:48 — 1:49:50
Chapter 5: 1:50:00 — 2:17:16
Chapter 6: 2:17:27 — 2:49:22
Chapter 7: 2:49:29 — 3:19:09
Chapter 8: 3:19:32 — 3:52:27
Chapter 9: 3:52:38 — 4:01:57
Chapter 10: 4:02:04 — 4:13:39
Chapter 11: 4:13:48 — 4:47:54
Chapter 12: 4:48:03 — 5:12:22
Chapter 13: 5:12:32 — 5:32:24
Chapter 14: 5:32:33 — 5:50:33
Chapter 15: 5:50:42 — 6:05:56
Chapter 16: 6:06:06 — 6:30:30
Chapter 17: 6:30:40 — 6:50:40
Chapter 18: 6:50:49 — 7:25:54

Summary: Study reveals a skin-to-brain neural circuit that responds to rewarding forms of social touch. Researchers say the findings could provide an avenue for harnessing the power of touch to assist in treating social and emotional disorders.

Source: Columbia University.

A parent’s reassuring touch. A friend’s warm hug. A lover’s enticing embrace. These are among the tactile joys in our lives.

Our solar system will soon be getting a new explorer, as a mission to study the moons of Jupiter readies for launch.

The European Space Agency’s Jupiter Icy Moons Explorer mission, or JUICE, is scheduled for launch in just a few months’ time, so the spacecraft is now being packed up at its test site in Toulouse, France, for transport to the launch site in French Guiana.

The spacecraft recently went through its final round of testing, including a thermal vacuum test to ensure it can handle the cold temperatures of space, and a system validation test that simulates the steps immediately after launch, such as the deployment of the booms and arrays that will unfold in space.

Maintaining eye contact is crucial to establishing engagement and trust in a conversation. This can be challenging in a video conference because it requires participants to look at the camera instead of the screen. The NVIDIA Maxine Eye Contact feature creates an in-person experience for virtual meetings. Powered by AI, Maxine Eye Contact directs your eyes to a centered position to maintain eye contact with your audience. Eye Contact is available to developers through the Maxine Augmented Reality SDK at https://developer.nvidia.com/maxine#ar-sdk.

Learn more about Maxine at https://developer.nvidia.com/maxine and all of NVIDIA’s AI solutions at https://www.nvidia.com/en-us/deep-learning-ai/products/solutions/

Large language models (LLMs) are on fire, capturing public attention with their ability to provide seemingly impressive completions to user prompts (NYT coverage). They are a delicate combination of a radically simplistic algorithm with massive amounts of data and computing power. They are trained by playing a guess-the-next-word game with themselves over and over again. Each time, the model looks at a partial sentence and guesses the following word. If it guesses correctly, it updates its parameters to reinforce its confidence; otherwise, it learns from the error and makes a better guess next time.
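To make that training loop concrete, here is a minimal sketch in PyTorch. Everything in it (the toy corpus, the tiny one-word-of-context model, all the names) is invented for illustration and is nothing like a production LLM in scale or architecture, but the guess, reinforce, or learn-from-the-error cycle is the same:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy corpus, whitespace-tokenized for brevity (an assumption; real LLMs
# use subword tokenizers and vastly larger datasets).
text = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(text))
stoi = {w: i for i, w in enumerate(vocab)}
data = torch.tensor([stoi[w] for w in text])

class TinyNextWordModel(nn.Module):
    """Looks at the current word and scores every candidate next word."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        return self.out(self.embed(idx))

model = TinyNextWordModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    # Previous word -> next word (a real LLM conditions on the whole prefix).
    inputs, targets = data[:-1], data[1:]
    logits = model(inputs)
    # Cross-entropy rewards confident correct guesses and penalizes wrong
    # ones: the "reinforce or learn from the error" update described above.
    loss = F.cross_entropy(logits, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, ask the model what tends to follow "the".
with torch.no_grad():
    probs = F.softmax(model(torch.tensor([stoi["the"]])), dim=-1)
print(vocab[int(probs.argmax())])
```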

While the underlying training algorithm remains roughly the same, the recent increase in model and data size has brought about qualitatively new behaviors, such as writing basic code or solving logic puzzles.

How do these models achieve this kind of performance? Do they merely memorize training data and recite it back, or do they pick up the rules of English grammar and the syntax of the C language? Are they building something like an internal world model: an understandable model of the process producing the sequences?