
For the next installment of the informal TechCrunch book club, we are reading the fourth story in Ted Chiang’s Exhalation. The goal of this book club is to expand our minds to new worlds, ideas, and vistas, and The Lifecycle of Software Objects doesn’t disappoint. Set in a future where virtual worlds and generalized AI have become commonplace, it’s a fantastic example of speculative fiction that forces us to confront all kinds of fundamental questions.

If you’ve missed the earlier parts in this book club series, be sure to check out:

Bill Gates thinks gene editing and artificial intelligence could save the world.


Microsoft co-founder Bill Gates has been working to improve the state of global health through his nonprofit foundation for 20 years, and today he told the nation’s premier scientific gathering that advances in artificial intelligence and gene editing could accelerate those improvements exponentially in the years ahead.

“We have an opportunity with the advance of tools like artificial intelligence and gene-based editing technologies to build this new generation of health solutions so that they are available to everyone on the planet. And I’m very excited about this,” Gates said in Seattle during a keynote address at the annual meeting of the American Association for the Advancement of Science.

Such tools promise to have a dramatic impact on several of the biggest challenges on the agenda for the Bill & Melinda Gates Foundation, created by the tech guru and his wife in 2000.

According to a new study from Oxford Economics, as many as 14 million robots could be put to work in China alone within the next 11 years.

Economists analyzed long-term trends around the uptake of automation in the workplace, noting that the number of robots in use worldwide increased threefold over the past two decades to 2.25 million.

While the researchers predicted that the rise of robots will bring benefits in productivity and economic growth, they also acknowledged the drawbacks expected to come with it.

Lifespan.io


A new study published in mSystems, a journal from the American Society for Microbiology, shows that the skin and mouth microbiomes are better predictors of age than the gut microbiome.

A very broad study

The authors used an unusually large study population for research of this kind. Previously, a team including some of the same researchers had conducted a gut microbiome study of over four thousand people from multiple countries [1]. This time, the researchers took skin, saliva, and fecal samples from roughly 2,000, 2,500, and 4,500 people, respectively, for a total of nearly 9,000 participants; the team stated that it was the most comprehensive microbiome study done to date. The team used a “random forest” machine learning approach to determine which microbiota were and were not predictive of age [2].
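For readers curious what such an analysis looks like in practice, here is a minimal, hypothetical sketch of a random-forest age predictor in Python with scikit-learn. The data, feature counts, and settings below are invented placeholders, not the authors’ actual pipeline; the point is only to show how a random forest can be scored and how feature importances surface which taxa are most predictive of age.

```python
# Hypothetical sketch only: random data stands in for real microbiome tables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_taxa = 500, 200          # e.g. relative abundances of 200 microbial taxa
X = rng.random((n_samples, n_taxa))   # stand-in for skin/saliva/gut abundance tables
y = rng.integers(18, 90, n_samples)   # stand-in for subject ages

# Cross-validated error gives a sense of how well age can be predicted.
model = RandomForestRegressor(n_estimators=500, random_state=0)
print(cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5))

# Feature importances indicate which taxa contribute most to the prediction.
model.fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:10]
print("Most age-predictive taxa (column indices):", top)
```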

BF16, the new number format optimized for deep learning, promises power and compute savings with a minimal reduction in prediction accuracy.

BF16, sometimes called BFloat16 or Brain Float 16, is a new number format optimized for AI/deep learning applications. Invented at Google Brain, it has gained wide adoption in AI accelerators from Google, Intel, Arm, and many others.

The idea behind BF16 is to reduce the compute power and energy consumption needed to multiply tensors together by reducing the precision of the numbers. A tensor is a multi-dimensional array of numbers; multiplying tensors is the key mathematical operation required for AI calculations.
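To make the tradeoff concrete, here is a minimal sketch in Python with NumPy (not any particular accelerator’s API) of how a float32 value maps to bfloat16: the sign bit and 8-bit exponent are kept, so the dynamic range matches float32, while the mantissa is cut to 7 bits. This sketch uses simple truncation rather than the round-to-nearest that hardware typically performs.

```python
import numpy as np

def to_bfloat16(x):
    """Round-trip float32 values through bfloat16 by truncation.

    bfloat16 keeps float32's sign bit and 8-bit exponent (same dynamic
    range) but only the top 7 mantissa bits (~2-3 decimal digits).
    """
    x = np.atleast_1d(np.asarray(x, dtype=np.float32))
    bits = x.view(np.uint32)
    truncated = bits & np.uint32(0xFFFF0000)  # drop the low 16 mantissa bits
    return truncated.view(np.float32)

print(to_bfloat16(3.14159265))     # ~3.140625: same exponent, coarser mantissa
print(to_bfloat16([1e-38, 1e38]))  # very small and very large values survive,
                                   # which is where bfloat16 beats float16
```

Because the exponent width is unchanged, converting between float32 and bfloat16 is cheap, which is one reason accelerators can mix the two formats with only a minimal hit to prediction accuracy.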

Meet Surena IV, an adult-size humanoid built by University of Tehran roboticists.


A little over a decade ago, researchers at the University of Tehran introduced a rudimentary humanoid robot called Surena. An improved model capable of walking, Surena II, was announced not long after, followed by the more capable Surena III in 2015.

Now the Iranian roboticists have unveiled Surena IV. The new robot is a major improvement over previous designs. A video highlighting its capabilities shows the robot mimicking a person’s pose, grasping a water bottle, and writing its name on a whiteboard.

Surena is also shown taking a group selfie with its human pals.