
Researchers at Cambridge have shown that the Third Thumb, a robotic prosthetic, can be quickly mastered by members of the public, enhancing manual dexterity. The study stresses the importance of inclusive design to ensure such technologies benefit everyone, reporting notable findings on performance across different demographic groups.

Cambridge researchers demonstrated that people can rapidly learn to control a prosthetic extra thumb, known as a “third thumb,” and use it effectively to grasp and handle objects.

The team tested the robotic device on a diverse range of participants, which they say is essential for ensuring new technologies are inclusive and can work for everyone.

JULIEN CROCKETT: Let’s start with the tension at the heart of AI: we understand and talk about AI systems as if they are both mere tools and intelligent actors that might one day come alive. Alison, you’ve argued that the currently popular AI systems, LLMs, are neither intelligent nor dumb—that those are the wrong categories by which to understand them. Rather, we should think of them as cultural technologies, like the printing press or the internet. Why is a “cultural technology” a better framework for understanding LLMs?

One of the trade-offs of today’s technological progress is the large energy cost of processing digital information. To build AI models on silicon-based processors, we need to train them with huge amounts of data: the more data, the better the model. This is perfectly illustrated by the current success of large language models such as ChatGPT, whose impressive abilities stem largely from the enormous datasets used to train them.

The more data we use to train digital AI, the better it becomes, but the more computational power it requires.

This is why, to develop AI further, we need to consider alternatives to the current silicon-based status quo. Indeed, there have recently been many publications on this topic, including comments from Sam Altman, the CEO of OpenAI.

The next “Spider-Verse” film may have a new animation style: AI.

Sony Pictures Entertainment (SPE) CEO Tony Vinciquerra does not mince words when it comes to artificial intelligence. He likes the tech — or at the very least, he likes the economics.

“We are very focused on AI. The biggest problem with making films today is the expense,” Vinciquerra said at Sony’s Thursday (Friday in Japan) investor event. “We will be looking at ways to…produce both films for theaters and television in a more efficient way, using AI primarily.”

Digital twin models may enhance future autonomous systems.

Systems controlled by next-generation computing algorithms could give rise to better and more efficient machine learning products, a new study suggests.

Using machine learning tools to create a digital twin, or virtual copy, of an electronic circuit that exhibits chaotic behavior, the researchers found they could accurately predict how the circuit would behave and then use that information to control it.
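
The summary does not spell out the study's specific setup (the chaotic electronic circuit or the particular learning algorithm), so the sketch below is only a hedged illustration of the general idea: a small PyTorch network is trained as a "digital twin" of a chaotic system, with the logistic map standing in for the circuit, and the trained twin is then rolled forward to forecast behavior that a controller could act on.

```python
# Minimal sketch of a learned "digital twin" of a chaotic system.
# Assumption: the logistic map x_{t+1} = r * x_t * (1 - x_t) stands in
# for the chaotic electronic circuit used in the actual study.
import torch
import torch.nn as nn

# Generate a chaotic trajectory to serve as "measurements".
r = 3.9
xs = [0.5]
for _ in range(2000):
    xs.append(r * xs[-1] * (1.0 - xs[-1]))
data = torch.tensor(xs, dtype=torch.float32)

# One-step-ahead training pairs: predict x_{t+1} from x_t.
inputs = data[:-1].unsqueeze(1)
targets = data[1:].unsqueeze(1)

# A small neural network acts as the digital twin.
twin = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(twin.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(twin(inputs), targets)
    loss.backward()
    optimizer.step()

# Roll the twin forward on its own to forecast future behavior,
# the kind of prediction a controller could then exploit.
state = data[-1].reshape(1, 1)
forecast = []
with torch.no_grad():
    for _ in range(10):
        state = twin(state)
        forecast.append(state.item())
print(forecast)
```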

Summary: ChatGPT Edu, powered by GPT-4o, is designed for universities to responsibly integrate AI into academic and campus operations. This advanced AI tool supports text and vision reasoning and data analysis, and offers enterprise-level security.

Successful applications at institutions like Columbia University and Wharton School highlight its potential. ChatGPT Edu aims to make AI accessible and beneficial across educational settings.

Via Sebastian Raschka.

For weekend reading:

Chapter 6 (Finetuning LLMs for Classification) of the Build an LLM from Scratch book is now finally available on the Manning website:


  • Introducing different LLM finetuning approaches
  • Preparing a dataset for text classification
  • Modifying a pretrained LLM for finetuning
  • Finetuning an LLM to identify spam messages
  • Evaluating the accuracy of a finetuned LLM classifier
  • Using a finetuned LLM to classify new data

In previous chapters, we coded the LLM architecture, pretrained it, and learned how to import pretrained weights from an external source, such as OpenAI, into our model. In this chapter, we are reaping the fruits of our labor by finetuning the LLM on a specific target task, such as classifying text, as illustrated in figure 6.1. The concrete example we will examine is classifying text messages as spam or not spam.
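
The book finetunes its own from-scratch GPT implementation, so the snippet below is only a hedged sketch of the chapter's general recipe under swapped-in assumptions: it uses Hugging Face's GPT2Model as a stand-in backbone, freezes the pretrained weights, attaches a two-class head, and classifies each message from the hidden state of its final token.

```python
# Hedged sketch of classification finetuning, not the book's own code.
# Assumption: Hugging Face's GPT2Model stands in for the GPT model
# built and pretrained in the earlier chapters.
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
backbone = GPT2Model.from_pretrained("gpt2")

# Freeze the pretrained weights; only the new classification head trains here.
for param in backbone.parameters():
    param.requires_grad = False

# Binary classification head: spam vs. not spam.
num_classes = 2
head = nn.Linear(backbone.config.n_embd, num_classes)

def classify_logits(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    hidden = backbone(**batch).last_hidden_state  # (batch, seq_len, emb_dim)
    # Use the hidden state at the last position as the sequence summary
    # (a fuller implementation would pick the last non-padding token).
    return head(hidden[:, -1, :])

# One illustrative training step with made-up labels (1 = spam, 0 = not spam).
optimizer = torch.optim.AdamW(head.parameters(), lr=5e-5)
logits = classify_logits(["You won a free prize!", "See you at lunch?"])
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 0]))
loss.backward()
optimizer.step()
```

The accuracy of the finetuned classifier can then be estimated as the chapter outline suggests: run the same forward pass over a held-out set of labeled messages and compare the argmax of the logits against the true labels.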