
Cells are not replaced; rather, the old cells that are still there are rejuvenated.


In this clip, Dr. David Sinclair explains the mechanism by which old cells can be reprogrammed to become young again. He also clarifies that the process is based on a cell-autonomous effect and does not involve or rely on stem cells.

David Sinclair is a professor in the Department of Genetics and co-director of the Paul F. Glenn Center for the Biology of Aging at Harvard Medical School, where he and his colleagues study sirtuins—protein-modifying enzymes that respond to changing NAD+ levels and to caloric restriction—as well as chromatin, energy metabolism, mitochondria, learning and memory, neurodegeneration, cancer, and cellular reprogramming.

Dr. David Sinclair has suggested that aging is a disease—and that we may soon have the tools to put it into remission—and he has called for greater international attention to the social, economic, and political benefits of a world in which billions of people can live much longer and much healthier lives.

Dr David Sinclair is the co-founder of several biotechnology companies (Life Biosciences, Sirtris, Genocea, Cohbar, MetroBiotech, ArcBio, Liberty Biosecurity) and is on the boards of several others.

A team of researchers at the Graduate School of Informatics, Nagoya University, have brought us one step closer to the development of a neural network with metamemory through a computer-based evolution experiment. This type of neural network could help experts understand the evolution of metamemory, which could help develop artificial intelligence (AI) with a human-like mind.

The research was published in the scientific journal Scientific Reports.

What is Metamemory?




ABOUT JOHN COOGAN:
I am the co-founder of http://soylent.com and http://lucy.co, both of which were funded by Y Combinator (Summer 2012 and Winter 2018).

I’ve been an entrepreneur for the last decade across multiple companies. I’ve done a lot of work in Silicon Valley, so that’s mostly what I talk about. I’ve raised over 10 rounds of venture capital totaling over $100m in funding.

I work mostly in tech-enabled consumer packaged goods, meaning I use software to make the best products possible and then deliver them to the widest possible audience. I’m a big fan of machine learning, python programming, and motion graphics.

The right tools will help you find new opportunities to use and further develop your machine learning skills.


Machine learning has proven to be a tool that performs well in a variety of application fields. From education and training companies to security systems like facial recognition and fraudulent-transaction prevention, it is used to improve the quality and accuracy of existing techniques.

Choosing the best machine learning tools, and navigating the space of options, isn’t as simple as Google searching “machine learning tools”.

There are many factors to consider when choosing a tool for your needs: types of data you’re working with, type of analysis you need to perform, integration with other software packages you’re using, and more.
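One simple way to reason about those factors is to weight each requirement and score candidate tools against them. The sketch below is a toy illustration only — the tool names, factor weights, and scores are made up, not a real benchmark:

```python
# Hypothetical requirement weights (must sum to 1 here) and per-tool
# factor scores on a 0-5 scale; all names and numbers are illustrative.
weights = {"data_types": 0.4, "analysis": 0.4, "integration": 0.2}

tools = {
    "tool_a": {"data_types": 5, "analysis": 3, "integration": 4},
    "tool_b": {"data_types": 3, "analysis": 5, "integration": 2},
}

def score(tool):
    """Weighted sum of how well a tool matches each requirement."""
    return sum(weights[f] * tools[tool][f] for f in weights)

best = max(tools, key=score)
print(best, round(score(best), 2))  # tool_a 4.0
```

The point of the exercise is less the arithmetic than forcing you to write down which factors actually matter for your project before comparing tools.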

From action heroes to villainous assassins, biohybrid robots made of both living and artificial materials have been at the center of many sci-fi fantasies, inspiring today’s robotic innovations. It’s still a long way until human-like robots walk among us in our daily lives, but scientists from Japan are bringing us one step closer by crafting living human skin on robots. The method developed, presented June 9 in the journal Matter, not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.

“The finger looks slightly ‘sweaty’ straight out of the culture medium,” says first author Shoji Takeuchi, a professor at the University of Tokyo, Japan. “Since the finger is driven by a motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”

Looking “real” like a human is one of the top priorities for humanoid robots that are often tasked to interact with humans in healthcare and service industries. A human-like appearance can improve communication efficiency and evoke likability. While current silicone skin made for robots can mimic human appearance, it falls short when it comes to delicate textures like wrinkles, and it lacks skin-specific functions. Attempts at fabricating living skin sheets to cover robots have also had limited success, since it’s challenging to conform them to dynamic objects with uneven surfaces.

Convolutional neural networks have long been the typical network architecture in modern computer vision systems. Transformers with attention mechanisms were recently introduced for visual tasks, and they have performed well. MLP-based (multi-layer perceptron) vision models, meanwhile, require neither convolutions nor self-attention to perform competitively. As a result of these advancements, vision models are reaching new heights.

The input image is handled differently by different networks. In Euclidean space, image data is commonly represented as a regular grid of pixels. CNNs apply a sliding window over the image, introducing shift-invariance and locality. The recently introduced vision transformer instead treats the image as a sequence of patches.
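The two representations can be made concrete with a short sketch (hypothetical sizes, numpy only): a CNN sees the image as a 2-D grid it slides a window over, while a ViT-style model first cuts the same grid into a flat sequence of patches.

```python
import numpy as np

# Hypothetical 32x32 grayscale image stored as a regular pixel grid,
# the representation a CNN would convolve over directly.
image = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)

def to_patch_sequence(img, patch=8):
    """Cut an HxW image into non-overlapping patch x patch tiles and
    flatten each one, yielding a (num_patches, patch*patch) sequence —
    the ViT-style view of the same data."""
    h, w = img.shape
    tiles = img.reshape(h // patch, patch, w // patch, patch)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return tiles

seq = to_patch_sequence(image)
print(seq.shape)  # (16, 64): a 4x4 grid of 8x8 patches, now a sequence
```

Nothing about the pixels changes between the two views; only the structure the downstream network is asked to exploit does.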

Recognizing the objects in an image is one of the most basic tasks of computer vision. The grid and sequence structures used in prior networks like ResNet and ViT are redundant and inflexible, because objects are usually irregular in shape rather than quadrate. An object can instead be thought of as a collection of parts—a human’s head, upper torso, arms, and legs—organically connected by joints, forming a graph structure. Furthermore, a graph is a generic data structure, with grids and sequences being special cases of graphs. Visual perception is more flexible and effective when an image is viewed as a graph.
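Under the image-as-a-graph view, each patch becomes a node and edges connect it to its most similar neighbors. Here is a minimal sketch of that construction, using made-up patch features and a plain k-nearest-neighbor rule rather than any particular paper’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature vectors for 16 image patches
# (e.g. 16 flattened 8x8 tiles of a 32x32 image).
features = rng.normal(size=(16, 64))

def knn_graph(x, k=3):
    """Connect every patch (node) to its k nearest neighbors in feature
    space and return a directed edge list. Grids and sequences fall out
    as special cases of this more general graph structure."""
    # Pairwise squared Euclidean distances between patch features.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # forbid self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k closest nodes per row
    return [(i, int(j)) for i in range(len(x)) for j in nbrs[i]]

edges = knn_graph(features)
print(len(edges))  # 16 nodes * 3 neighbors = 48 directed edges
```

A graph network then updates each node by aggregating features along these edges, so a patch can attend to similar patches anywhere in the image, not just those adjacent in the grid.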

Machine learning can get a boost from quantum physics.

On certain types of machine learning tasks, quantum computers have an exponential advantage over standard computation, scientists report in the June 10 Science. The researchers proved that, according to quantum math, the advantage applies when using machine learning to understand quantum systems. And the team showed that the advantage holds up in real-world tests.

“People are very excited about the potential of using quantum technology to improve our learning ability,” says theoretical physicist and computer scientist Hsin-Yuan Huang of Caltech. But it wasn’t entirely clear if machine learning could benefit from quantum physics in practice.