
In recent years, roboticists have been trying to improve how robots interact with different objects found in real-world settings. While some of their efforts have yielded promising results, the manipulation skills of most existing robotic systems still lag behind those of humans.

Fabrics are among the types of objects that have proved most challenging for robots to interact with. The main reason for this is that pieces of cloth and other fabrics can be stretched, moved and folded in different ways, which can result in complex material dynamics and self-occlusions.

Researchers at Carnegie Mellon University’s Robotics Institute have recently proposed a new computational technique that could allow robots to better understand and handle fabrics. This technique, introduced in a paper set to be presented at the International Conference on Intelligent Robots and Systems (IROS) and pre-published on arXiv, is based on the use of a simple machine-learning algorithm known as a classifier.
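As a rough sketch of how such a classifier could be set up, the toy example below trains an off-the-shelf classifier to predict how many layers of fabric a gripper is holding from grasp sensor readings. The feature layout, labels and synthetic data are hypothetical stand-ins, not the authors’ setup.

```python
# Toy sketch (not the authors' implementation): classify how many fabric layers
# a gripper is holding from flattened sensor readings taken during a grasp.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical dataset: each row is one grasp's sensor reading, each label is
# the number of cloth layers held (0, 1 or 2).
n_samples, n_features = 600, 30
labels = rng.integers(0, 3, size=n_samples)
readings = rng.normal(size=(n_samples, n_features)) + 0.8 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    readings, labels, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In a real pipeline the labels would come from annotated grasp trials rather than synthetic data.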

The early-stage development of many age-targeting compounds often involves studies of their effects on the lifespan of the transparent nematode (worm) model Caenorhabditis elegans. A highly manual process, this exercise is time-consuming and only produces data on one endpoint – lifespan.

Durham University associate professors David Weinkove and Chris Saunter invented a technology that automates measurements of movement in many large populations of worms simultaneously. Crucially, this technology goes beyond measuring lifespan, also capturing information about how worms’ health declines as they age – their healthspan.

Longevity.Technology: Together, Weinkove and Saunter have co-founded a spinout company called Magnitude Biosciences, leveraging their innovative platform to test drugs and other interventions for their capacity to prolong healthspan. We caught up with Weinkove to learn more about the background to the company and where it goes from here.
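To make the idea of automated movement measurement concrete, here is a minimal sketch (not Magnitude Biosciences’ actual system) that scores activity in a stack of time-lapse frames by frame differencing; the threshold and synthetic frames are arbitrary placeholders.

```python
# Toy activity scoring for time-lapse frames of a worm population:
# the fraction of pixels that change noticeably between consecutive frames.
import numpy as np

def activity_index(frames: np.ndarray, threshold: float = 10.0) -> np.ndarray:
    """frames: array of shape (time, height, width) of pixel intensities."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return (diffs > threshold).mean(axis=(1, 2))

# Synthetic example: a fading signal stands in for worms slowing down with age.
rng = np.random.default_rng(1)
t, h, w = 50, 64, 64
frames = rng.integers(0, 255, size=(t, h, w)).astype(float)
frames *= np.linspace(1.0, 0.1, t)[:, None, None]

print(activity_index(frames)[:5])   # activity per time step (first few values)
```

A declining activity index across repeated recordings is the kind of healthspan signal the platform aims to capture, at far larger scale and with proper imaging.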

Lex Fridman Podcast full episode: https://www.youtube.com/watch?v=I845O57ZSy4

GUEST BIO:
John Carmack is a legendary programmer, co-founder of id Software, and lead programmer of many revolutionary video games including Wolfenstein 3D, Doom, Quake, and the Commander Keen series. He is also the founder of Armadillo Aerospace, and for many years the CTO of Oculus VR.

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

A new AI-enabled optical fiber sensor device developed at Imperial College London can measure key biomarkers of traumatic brain injury simultaneously.

The “promising” results from tests on animal tissues suggest it could help clinicians monitor both the injury and patients’ response to treatment better than is currently possible, indicating its high potential for future diagnostic trials in humans.

People who experience a serious blow to the head, such as during road traffic accidents, can suffer traumatic brain injury (TBI)—a leading cause of death and disability worldwide that can result in long-term difficulties with memory, concentration and solving problems.
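As a hedged illustration of the sort of “AI-enabled” post-processing such a sensor could pair with, the sketch below fits a multi-output regression that maps optical readings to several biomarker levels at once. The channel count, number of biomarkers and synthetic data are hypothetical, not the Imperial device’s actual pipeline.

```python
# Toy multi-biomarker estimation: map optical sensor readings to several
# biomarker levels simultaneously with multi-output regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(7)

n_samples, n_channels, n_biomarkers = 200, 40, 3   # hypothetical sizes
spectra = rng.normal(size=(n_samples, n_channels))
true_weights = rng.normal(size=(n_channels, n_biomarkers))
biomarkers = spectra @ true_weights + 0.1 * rng.normal(size=(n_samples, n_biomarkers))

model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(spectra, biomarkers)
print("predicted levels for one reading:", model.predict(spectra[:1]).round(2))
```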

Constructing a tiny robot out of DNA

DNA, or deoxyribonucleic acid, is a molecule composed of two long strands of nucleotides that coil around each other to form a double helix. It is the hereditary material in humans and almost all other organisms that carries genetic instructions for development, functioning, growth, and reproduction. Nearly every cell in a person’s body has the same DNA. Most DNA is located in the cell nucleus (where it is called nuclear DNA), but a small amount of DNA can also be found in the mitochondria (where it is called mitochondrial DNA or mtDNA).
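Because the two strands are complementary (A pairs with T, C with G) and run in opposite directions, either strand determines the other, which a few lines of code can illustrate:

```python
# Base pairing behind the double helix: derive one strand from the other.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read in the opposite direction."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))   # -> ACGCAT
```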

I have been invited to participate in a fairly large event in which some experts and I (allow me not to consider myself one) will discuss Artificial Intelligence and, in particular, the concept of Super Intelligence.

It so happens that I recently came across this really interesting TED talk by Grady Booch, just in time to prepare my own talk.

Whether you agree or disagree with Mr. Booch’s point of view, it is clear that today we are still living in the era of weak or narrow AI, very far from general AI, and even further from a potential Super Intelligence. Still, Machine Learning offers us a great opportunity as of today: the opportunity to put algorithms to work together with humans on some of our biggest challenges, such as climate change, poverty, and health and well-being.

Near-term quantum computers, those developed today or in the near future, could help to tackle some problems more effectively than classical computers. One potential application is in physics, chemistry and materials science, where these computers could perform quantum simulations and determine the ground states of quantum systems.
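For very small systems, the ground state can still be computed classically by exact diagonalization, which is the reference point such quantum simulations are judged against. The sketch below does this for a hypothetical two-qubit Ising-type Hamiltonian.

```python
# Exact ground state of a small (made-up) two-qubit Hamiltonian via diagonalization.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

J, h = 1.0, 0.5                      # coupling and field strengths (arbitrary)
H = -J * np.kron(Z, Z) - h * (np.kron(X, I2) + np.kron(I2, X))

energies, states = np.linalg.eigh(H)
print("ground-state energy:", energies[0])
print("ground-state amplitudes:", states[:, 0])
```

For realistic molecules and materials the matrix grows exponentially with system size, which is exactly why quantum simulation is attractive.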

Some quantum computers developed over the past few years have proved to be fairly effective at running quantum simulations. However, near-term quantum computing approaches are still limited by existing hardware components and by the adverse effects of background noise.

Researchers at 1QB Information Technologies (1QBit), the University of Waterloo and the Perimeter Institute for Theoretical Physics have recently developed neural error mitigation, a new strategy that could improve ground-state estimates attained using quantum simulations. This strategy, introduced in a paper published in Nature Machine Intelligence, is based on machine-learning algorithms.
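The strategy itself involves training neural networks on data from the quantum device, which is beyond a short snippet; as a loose stand-in, the toy below only illustrates the variational idea that a parameterized state can be optimized classically to push a rough energy estimate toward the true ground-state energy. The Hamiltonian, noise model and optimizer settings are all made up.

```python
# Toy variational refinement (not neural error mitigation itself): optimize a
# single-qubit parameterized state to improve a noisy ground-state energy guess.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = Z + 0.5 * X                              # hypothetical single-qubit Hamiltonian

exact = np.linalg.eigvalsh(H)[0]             # true ground-state energy

def energy(theta: float) -> float:
    """<psi(theta)|H|psi(theta)> for psi(theta) = (cos(theta/2), sin(theta/2))."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# Pretend a noisy device returned this biased estimate.
noisy_estimate = exact + 0.3

# Classical post-processing: gradient descent on the variational energy, which
# can only approach (never go below) the true ground-state energy.
theta, lr, eps = 2.0, 0.1, 1e-4
for _ in range(500):
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(f"exact {exact:.4f}  noisy {noisy_estimate:.4f}  refined {energy(theta):.4f}")
```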

People have been dreaming of robot butlers for decades, but one of the biggest barriers has been getting machines to understand our instructions. Google has started to close the gap by marrying the latest language AI with state-of-the-art robots.

Human language is often ambiguous. How we talk about things is highly context-dependent, and it typically requires an innate understanding of how the world works to decipher what we’re talking about. So while robots can be trained to carry out actions on our behalf, conveying our intentions to them can be tricky.

If they have any ability to understand language at all, robots are typically designed to respond to short, specific instructions. More opaque directions like “I need something to wash these chips down” are likely to go over their heads, as are complicated multi-step requests like “Can you put this apple back in the fridge and fetch the chocolate?”
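To make the grounding problem concrete, the toy below matches an instruction against a small, hypothetical set of robot skills by simple keyword overlap. It happens to handle the “wash these chips down” example, but this kind of brittle matching is exactly what pairing robots with large language models, as described above, is meant to replace.

```python
# Toy instruction-to-skill matching by keyword overlap (hypothetical skill set).
SKILLS = {
    "bring a drink": {"drink", "water", "soda", "thirsty", "wash", "down"},
    "fetch a snack": {"snack", "chips", "apple", "chocolate", "hungry"},
    "put item in fridge": {"fridge", "put", "back", "store", "cold"},
}

def pick_skill(instruction: str) -> str:
    """Pick the skill whose keywords overlap most with the instruction."""
    words = set(instruction.lower().replace("?", "").split())
    scores = {skill: len(words & keywords) for skill, keywords in SKILLS.items()}
    return max(scores, key=scores.get)

print(pick_skill("I need something to wash these chips down"))   # -> bring a drink
```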