
Join us on Patreon!
https://www.patreon.com/MichaelLustgartenPhD

Biomarker timestamps:
Glucose 1:37
HDL 2:43
Triglycerides 4:10
RBCs, Hemoglobin 5:29
Platelets 7:16
Uric Acid 8:37
AST, ALT 11:04
Total Cholesterol 13:55
WBCs 15:47
Total Protein, Albumin, Globulin 17:38
Creatinine 21:27
BUN 22:35

Papers referenced in the video:
Laboratory parameters in centenarians of Italian ancestry.
https://pubmed.ncbi.nlm.nih.gov/17681733/

Risk factors for hyperuricemia in Chinese centenarians and near-centenarians.

They may be the only cars to go more viral than Cybertruck.

Remember that Hyundai concept 4×4 with robotic legs?

The South Korean automotive giant actually intends to make it a reality, and it has announced a development and test facility in Montana for that very purpose.

Hyundai plans to invest $20 million over the next five years in its New Horizons Studio, which will employ 50 people.

Artificial intelligence and machine learning hardware research has concentrated on building photonic synapses and neurons and combining them to perform fundamental forms of neural-type processing. However, complex processing methods found in human brains, such as reinforcement learning and dendritic computation, are more challenging to replicate directly in hardware.

A new study contributes to closing this "hardware gap" by creating an "optomemristor" device that responds to multiple electronic and photonic inputs at the same time. In the mammalian brain, the diverse biophysical mechanisms that govern neurons and synapses allow for this kind of complex learning and processing.

The chalcogenide thin-film technology interacts with both light and electrical impulses to mimic multifactor biological computations in mammalian brains while consuming very little energy.
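
The study's actual device physics aren't reproduced here, but a toy numerical sketch can illustrate the basic idea of a synapse that responds jointly to optical and electrical inputs. Everything below, the class name, the multiplicative update rule, and the constants, is an illustrative assumption rather than the paper's model.

```python
# Toy sketch (not the paper's device model): a single "optomemristive" synapse
# whose conductance is driven jointly by optical and electrical pulses.

import numpy as np

class ToyOptomemristorSynapse:
    def __init__(self, g_min=0.1, g_max=1.0, g0=0.5):
        self.g_min, self.g_max = g_min, g_max
        self.g = g0  # conductance, acting as the synaptic weight

    def update(self, optical_pulse, electrical_pulse):
        # Multifactor rule: the conductance change depends on the product of the
        # two input modalities, so neither input alone modifies the weight.
        dg = 0.05 * optical_pulse * electrical_pulse
        self.g = float(np.clip(self.g + dg, self.g_min, self.g_max))
        return self.g

    def read(self, voltage):
        # Readout current is simply Ohmic in this toy model: I = g * V.
        return self.g * voltage

syn = ToyOptomemristorSynapse()
for opt, elec in [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0), (1.0, -1.0)]:
    print(f"optical={opt:+.1f} electrical={elec:+.1f} -> g={syn.update(opt, elec):.3f}")
print("read current at 0.2 V:", syn.read(0.2))
```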

Would start with scanning and reverse engineering the brains of rats, crows, pigs, and chimps, and end with the human brain. Aim for completion by 12/31/2025. Set up teams to run brain scans 24/7/365 if we need to, and partner with every major neuroscience lab in the world.


If artificial intelligence is intended to resemble a brain, with networks of artificial neurons substituting for real cells, then what would happen if you compared the activities in deep learning algorithms to those in a human brain? Last week, researchers from Meta AI announced that they would be partnering with neuroimaging center Neurospin (CEA) and INRIA to try to do just that.

Through this collaboration, they plan to compare human brain activity with the activity of deep learning algorithms trained on language or speech tasks, in response to the same written or spoken texts. In theory, this could help decode how both human brains and artificial networks find meaning in language.

By comparing scans of human brains while a person is actively reading, speaking, or listening with deep learning algorithms given the same set of words and sentences to decipher, researchers hope to find similarities as well as key structural and behavioral differences between brain biology and artificial networks. The research could help explain why humans process language much more efficiently than machines.
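
The teams' actual analysis pipeline isn't described in the announcement, but studies of this kind commonly fit a linear "encoding model" that predicts brain responses from a language model's activations for the same stimuli, then score it on held-out data. The sketch below uses synthetic arrays in place of real fMRI/MEG recordings and model activations; the ridge penalty, data shapes, and noise level are assumptions for illustration only.

```python
# Hedged sketch of the general brain-vs-model comparison pattern: predict
# (simulated) per-voxel brain responses from per-word model embeddings,
# then measure how well the prediction generalizes to held-out words.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_words, embed_dim, n_voxels = 500, 64, 20
embeddings = rng.normal(size=(n_words, embed_dim))       # stand-in for model activations
true_map = rng.normal(size=(embed_dim, n_voxels)) * 0.1  # unknown brain/model relationship
brain = embeddings @ true_map + rng.normal(scale=0.5, size=(n_words, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, brain, test_size=0.2, random_state=0)

encoder = Ridge(alpha=10.0).fit(X_train, y_train)
pred = encoder.predict(X_test)

# Per-voxel correlation between predicted and observed responses is a common
# similarity score in this kind of analysis.
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(scores):.3f}")
```

Voxels (or sensors) where the held-out correlation is high are the places where the model's representations and the measured brain activity line up; systematic gaps point at the structural and behavioral differences the researchers are looking for.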

AI researchers are creating novel “benchmarks” to help models avoid real-world stumbles.


Trained on billions of words from books, news articles, and Wikipedia, artificial intelligence (AI) language models can produce uncannily human prose. They can generate tweets, summarize emails, and translate dozens of languages. They can even write tolerable poetry. And like overachieving students, they quickly master the tests, called benchmarks, that computer scientists devise for them.

That was Sam Bowman’s sobering experience when he and his colleagues created a tough new benchmark for language models called GLUE (General Language Understanding Evaluation). GLUE gives AI models the chance to train on data sets containing thousands of sentences and confronts them with nine tasks, such as deciding whether a test sentence is grammatical, assessing its sentiment, or judging whether one sentence logically entails another. After completing the tasks, each model is given an average score.

At first, Bowman, a computer scientist at New York University, thought he had stumped the models. The best ones scored less than 70 out of 100 points (a D+). But in less than 1 year, new and better models were scoring close to 90, outperforming humans. “We were really surprised with the surge,” Bowman says. So in 2019 the researchers made the benchmark even harder, calling it SuperGLUE. Some of the tasks required the AI models to answer reading comprehension questions after digesting not just sentences, but paragraphs drawn from Wikipedia or news sites. Again, humans had an initial 20-point lead. “It wasn’t that shocking what happened next,” Bowman says. By early 2021, computers were again beating people.
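
As a concrete example of what "completing a task" means here, the sketch below scores an off-the-shelf sentiment classifier on SST-2, one of GLUE's nine tasks, by comparing its predictions with the gold labels on part of the validation split. A full GLUE score would repeat this for every task and average the results; the choice of libraries, the default model, and the 200-example slice are conveniences for illustration, not part of the benchmark itself.

```python
# Hedged sketch: accuracy of a ready-made sentiment model on a slice of SST-2.
# Assumes the `datasets` and `transformers` libraries are installed; data and
# model weights are downloaded on first run.

from datasets import load_dataset
from transformers import pipeline

sst2 = load_dataset("glue", "sst2", split="validation[:200]")
classifier = pipeline("sentiment-analysis")  # default English sentiment model

correct = 0
for example in sst2:
    pred = classifier(example["sentence"])[0]["label"]   # "POSITIVE" or "NEGATIVE"
    pred_label = 1 if pred == "POSITIVE" else 0
    correct += int(pred_label == example["label"])       # SST-2 labels: 1 positive, 0 negative

print(f"SST-2 validation accuracy on this slice: {correct / len(sst2):.3f}")
```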

In recent years, developers have created a wide range of sophisticated robots that can operate in specific environments in increasingly efficient ways. The body structure of many of these systems is inspired by nature, animals, and humans.

Although many existing robots have bodies that resemble those of humans or other animal species, programming them so that they also move like the animal they are inspired by is not always an easy task. Doing this typically entails the development of advanced locomotion controllers, which can require considerable resources and development efforts.

Researchers at DeepMind have recently created a new technique that can be used to efficiently train robots to replicate the movements of humans or animals. This new tool, introduced in a paper pre-published on arXiv, is inspired by previous work that leveraged data representing real-world human and animal movements, collected using motion-capture technology.
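
The paper's exact training setup isn't detailed here, but motion-imitation controllers of this kind typically reward a reinforcement-learning agent for tracking a reference motion clip frame by frame. The exponential tracking reward below, its weight, and the three-joint example are assumptions for illustration, not DeepMind's implementation.

```python
# Hedged sketch of a common motion-imitation reward: the closer the robot's
# joint angles are to the reference frame, the closer the reward is to 1.

import numpy as np

def imitation_reward(robot_pose, reference_pose, weight=2.0):
    """Reward in (0, 1]: equals 1 when the robot matches the reference pose exactly."""
    error = np.sum((np.asarray(robot_pose) - np.asarray(reference_pose)) ** 2)
    return float(np.exp(-weight * error))

# Toy example: a 3-joint robot tracking one frame of a reference clip.
reference = np.array([0.10, -0.40, 0.25])  # target joint angles (radians)
for robot in (reference, reference + 0.05, reference + 0.5):
    print(f"pose error {np.linalg.norm(robot - reference):.2f} "
          f"-> reward {imitation_reward(robot, reference):.3f}")
```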

Machines are learning fast and replacing humans at a faster rate than ever before. A fresh development in this direction is a robot that can taste food. And not only can it taste the food, it can do so while making the dish it is preparing! This gives the robot the ability to recognise the taste of the food at the various stages of chewing that occur when a human eats it.

The robot chef was made by Mark Oleynik, a Russian mathematician and computer scientist. Researchers at the University of Cambridge trained the robot to 'taste' the food as it cooks it.

The robot had already been trained to cook omelets. The Cambridge researchers added a sensor to the robot that can recognise different levels of saltiness.

Jack in the Box has become the latest American food chain to experiment with automation, as it seeks to handle staffing challenges and improve the efficiency of its service.

Jack in the Box is one of the largest quick service restaurant chains in America, with more than 2,200 branches. With continued staffing challenges impacting its operating hours and costs, Jack in the Box saw a need to revamp its technology and establish new systems – particularly in the back-of-house – that improve restaurant-level economics and alleviate the pain points of working in a high-volume commercial kitchen.

A virtual robot arm has learned to solve a wide range of different puzzles—stacking blocks, setting the table, arranging chess pieces—without having to be retrained for each task. It did this by playing against a second robot arm that was trained to give it harder and harder challenges.

Self play: Developed by researchers at OpenAI, the identical robot arms—Alice and Bob—learn by playing a game against each other in a simulation, without human input. The robots use reinforcement learning, a technique in which AIs are trained by trial and error to learn which actions to take in different situations to achieve certain goals. The game involves moving objects around on a virtual tabletop. By arranging objects in specific ways, Alice tries to set puzzles that are hard for Bob to solve. Bob tries to solve Alice's puzzles. As they learn, Alice sets more complex puzzles and Bob gets better at solving them.
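
To make the Alice-and-Bob loop concrete, here is a heavily simplified toy version of asymmetric self-play: Alice picks a goal, Bob tries to hit it, Bob is rewarded for success and Alice for Bob's failure. The one-dimensional "tabletop", the epsilon-greedy bandit learners, and all constants are assumptions for illustration; OpenAI's system trains neural-network policies in a physics simulator.

```python
# Toy sketch of asymmetric self-play (not OpenAI's training code): the "puzzle"
# is hitting a target position on a 1-D line, and both agents are simple
# epsilon-greedy learners over a small grid of options.

import random

GOALS = [round(0.1 * i, 1) for i in range(11)]   # goal positions Alice can set
MOVES = [round(0.1 * i, 1) for i in range(11)]   # positions Bob can move to

alice_value = {g: 0.0 for g in GOALS}            # Alice's estimate: how often Bob fails at g
bob_value = {(g, m): 0.0 for g in GOALS for m in MOVES}

def pick(options, value_fn, eps=0.2):
    # Epsilon-greedy: explore randomly sometimes, otherwise take the best-valued option.
    return random.choice(options) if random.random() < eps else max(options, key=value_fn)

for episode in range(5000):
    goal = pick(GOALS, lambda g: alice_value[g])         # Alice sets the puzzle
    move = pick(MOVES, lambda m: bob_value[(goal, m)])   # Bob attempts it
    bob_success = abs(move - goal) < 0.05

    # Bob learns which move solves each goal; Alice learns which goals Bob still fails.
    bob_value[(goal, move)] += 0.1 * (float(bob_success) - bob_value[(goal, move)])
    alice_value[goal] += 0.1 * (float(not bob_success) - alice_value[goal])

print("Bob's estimated success per goal:",
      {g: round(max(bob_value[(g, m)] for m in MOVES), 2) for g in GOALS})
```

The property this toy preserves is the automatic curriculum: as Bob's estimates improve on goals he can already solve, Alice's estimates shift toward the goals he still fails, so the difficulty rises without any human-designed task schedule.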