This fleshy, pink smiling face is made from living human skin cells, and was created as part of an experiment to let robots show emotion.

How would such a living tissue surface, whatever its advantages and disadvantages, attach to the mechanical foundation of a robot’s limb or “face”?

In humans and…


In hopes of developing more human-like cyborgs, a team of scientists unveiled a robot face covered with a delicate layer of living skin that heals itself and crinkles into a smile.

Numerous electrophysiological experiments have reported that the prefrontal cortex (PFC) is involved in working memory. In working memory tasks, PFC neurons keep firing throughout the delay period, maintaining stimulus information in the absence of external stimuli. Further findings indicate that while the activity of single neurons exhibits strong temporal and spatial dynamics (heterogeneity), the activity of neural populations can encode the spatiotemporal information of stimuli stably and reliably. From the perspective of neural networks, the computational mechanism underlying this phenomenon has not been well demonstrated. The main purpose of this paper is to adopt a new strategy to explore the neural computation mechanism of working memory: we used reinforcement learning to train a recurrent neural network model on a spatial working memory task.

Disruptive innovations in technology, such as humanoid robots and electric vehicles, will lead to significant changes in labor markets, the economy, and society, posing both opportunities and challenges for the future.

Questions to inspire discussion.

What are the predictions about the future of electric vehicles?
—The video discusses predictions made by Tony Seba and his team about the future of EVs that have proven accurate but have gone largely unreported by the media.

A new startup emerged from stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process transformers. The transformer is a deep learning architecture developed by Google and is now the powerhouse behind models like OpenAI’s GPT-4o in ChatGPT, Anthropic’s Claude, Google’s Gemini, and Meta’s Llama family. Etched set out to build an ASIC dedicated solely to transformer models, a chip called Sohu. The claim is that Sohu outperforms NVIDIA’s latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs serves Llama-3 70B at 25,000 tokens per second, and the latest eight-GPU B200 “Blackwell” configuration pushes 43,000 tokens/s, a cluster of eight Sohu chips is claimed to output 500,000 tokens per second.

It is this foundation that AI is now disrupting, providing the non-expert with expert-like qualities. But this progression is a fallacy. If we let a junior at a consulting firm, for example, use AI tools to create presentations better than what she could produce on her own, are we teaching her anything? Could she repeat the results with pen and paper? How will she gain the needed knowledge, critical thinking, and expertise if AI creates or assists with the work? It’s all very well that engineers can prompt for the code they need, but does this make them good engineers?

The trend of relying heavily on AI automation to complete tasks is the face of the future. It’s here to stay. But there is a challenge we must acknowledge: we need to bridge two extremes. On one extreme is the irresistible temptation to benefit as much as possible from the automation AI provides. On the other is the need to let our employees battle through their work themselves so they improve their skills and grow into the experts their industry needs. How can we have one without losing the other?

This article is not a rant aimed at stopping the progress of technology. There is no stopping it; we can only join it. The challenge is how to build experts and expertise in an AI-generated world. How can we benefit from the optimizations AI provides without forgetting how to build boats or aqueducts, or how to manufacture paper, if we want to learn from the experience of the Portuguese, the Romans, and the Chinese? The challenge is not this or that but this and that. We want to benefit from AI, and we need to build a generation of new experts. But how do we connect these two dots?

In an interview at the Aspen Ideas Festival on Tuesday, Mustafa Suleyman, CEO of Microsoft AI, made it very clear that he admires OpenAI CEO Sam Altman.

CNBC’s Andrew Ross Sorkin asked what the plan will be when Microsoft’s enormous AI future is no longer so closely dependent on OpenAI, framing it with a metaphor about winning a bicycle race. Suleyman sidestepped the question.

“I don’t buy the metaphor that there is a finish line. This is another false frame,” he said. “We have to stop framing everything as a ferocious race.”