


In the brain at rest, study indicates neurons rehearse future experience

Together with collaborators in the University of Michigan's Neural Circuits and Memory Lab, led by Kamran Diba, Rice neuroscientist Caleb Kemere has been studying the process by which specialized neurons produce a representation of the world after a new experience.


Some dreams may, in fact, predict the future: New research has found that during sleep, some neurons not only replay the recent past but also anticipate future experience.

OpenAI Introduces ChatGPT Edu, Revolutionizing Higher Education

Summary: ChatGPT Edu, powered by GPT-4o, is designed for universities to responsibly integrate AI into academic and campus operations. This advanced AI tool supports text and vision reasoning, data analysis, and offers enterprise-level security.

Successful applications at institutions like Columbia University and Wharton School highlight its potential. ChatGPT Edu aims to make AI accessible and beneficial across educational settings.

EV charging points to hit 64 million globally by 2029

Even though the growth in private sales of electric vehicles (EVs) has slowed in the last year, new research published this week suggests that the number of charging points around the globe will skyrocket to 64 million by 2029.

The figures headline new research from British market research firm Juniper Research, which forecasts EV charging points will rise from 21.8 million globally in 2024 to 64 million by 2029.

According to Juniper, the growth in private EV sales has slowed in the last year due to various factors, including range anxiety and reduced EV purchase subsidies for consumers.
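The growth rate implied by these figures is easy to check. The sketch below uses the 21.8 million (2024) and 64 million (2029) endpoints quoted from the Juniper forecast; treating this as five annual compounding periods is an assumption about how the forecast window is counted:

```python
# Implied compound annual growth rate (CAGR) of EV charging points,
# using the Juniper Research figures quoted above.
start = 21.8e6       # charging points in 2024
end = 64e6           # forecast for 2029
years = 2029 - 2024  # assumed: 5 annual compounding periods

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 24% per year
```

In other words, the forecast amounts to the global charging network roughly tripling in five years, or growing by about a quarter each year.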

A New Theory of Time — Lee Smolin

Is it possible that time is real, and that the laws of physics are not fixed? Lee Smolin, A C Grayling, Gillian Tett, and Bronwen Maddox explore the implications of such a profound re-think of the natural and social sciences, and consider how it might impact the way we think about surviving the future.

Listen to the podcast of the full event including audience Q&A: http://www.thersa.org/__data/assets/f

Follow the RSA on Twitter: @thersaorg
Like the RSA on Facebook: thersaorg

Our events are made possible with the support of our Fellowship. Support us by donating or applying to become a Fellow.

Donate: http://www.thersa.org/support-the-rsa.
Become a Fellow: http://www.thersa.org/fellowship/apply

6 Finetuning for Classification

By Sebastian Raschka.

For weekend reading:

Chapter 6 (Finetuning LLMs for Classification) of the Build an LLM from Scratch book is now finally available on the Manning website:


  • Introducing different LLM finetuning approaches
  • Preparing a dataset for text classification
  • Modifying a pretrained LLM for finetuning
  • Finetuning an LLM to identify spam messages
  • Evaluating the accuracy of a finetuned LLM classifier
  • Using a finetuned LLM to classify new data
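The dataset-preparation step in the list above boils down to turning labeled text messages into fixed-length sequences of token IDs. The chapter uses a GPT-style BPE tokenizer and a real SMS spam dataset; the sketch below substitutes a toy whitespace tokenizer and two inline examples, so the tiny vocabulary, the `max_len=8` setting, and the class names are all illustrative assumptions:

```python
import torch
from torch.utils.data import Dataset

class SpamDataset(Dataset):
    """Toy stand-in for a spam dataset: texts -> padded token-ID tensors."""

    def __init__(self, texts, labels, vocab, max_len=8, pad_id=0):
        self.labels = labels
        self.encoded = []
        for text in texts:
            ids = [vocab.get(tok, 1) for tok in text.lower().split()]  # 1 = <unk>
            ids = ids[:max_len] + [pad_id] * (max_len - len(ids))      # truncate or pad
            self.encoded.append(ids)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return torch.tensor(self.encoded[idx]), torch.tensor(self.labels[idx])

# Tiny illustrative vocabulary and data (0 = <pad>, 1 = <unk>)
vocab = {"win": 2, "a": 3, "free": 4, "prize": 5,
         "see": 6, "you": 7, "at": 8, "noon": 9}
texts = ["Win a FREE prize", "See you at noon"]
labels = [1, 0]  # 1 = spam, 0 = not spam

ds = SpamDataset(texts, labels, vocab)
x, y = ds[0]
print(x.shape, y.item())  # torch.Size([8]) 1
```

Padding every message to the same length is what lets the examples be batched into a single tensor for the model.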

In previous chapters, we coded the LLM architecture, pretrained it, and learned how to import pretrained weights from an external source, such as OpenAI, into our model. In this chapter, we are reaping the fruits of our labor by finetuning the LLM on a specific target task, such as classifying text, as illustrated in figure 6.1. The concrete example we will examine is classifying text messages as spam or not spam.
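The core architectural change when finetuning for classification is swapping the pretrained model's vocabulary-sized output layer for a small classification head. The sketch below illustrates that idea with a minimal stand-in model; the `TinyGPT` stub, its layer sizes, and the freeze-everything strategy are assumptions for the sake of a short, self-contained example, with `num_classes = 2` corresponding to spam vs. not spam:

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    """Minimal stand-in for a pretrained GPT-style model."""

    def __init__(self, vocab_size=50257, emb_dim=64, context_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(context_len, emb_dim)
        self.trf = nn.TransformerEncoderLayer(emb_dim, nhead=4, batch_first=True)
        self.out_head = nn.Linear(emb_dim, vocab_size)  # next-token prediction head

    def forward(self, idx):
        x = self.tok_emb(idx) + self.pos_emb(torch.arange(idx.size(1)))
        x = self.trf(x)
        return self.out_head(x)  # (batch, seq_len, out_features)

model = TinyGPT()  # in the real setting, pretrained weights are loaded here

# Freeze the pretrained weights, then replace the vocabulary-sized
# output head with a small, trainable 2-class classification head.
for p in model.parameters():
    p.requires_grad = False
model.out_head = nn.Linear(64, 2)  # new layers are trainable by default

logits = model(torch.randint(0, 50257, (1, 10)))
last_token_logits = logits[:, -1, :]  # classify from the last token's output
print(last_token_logits.shape)  # torch.Size([1, 2])
```

Reading the class logits off the last token's position is natural for a causal model, since that position is the only one whose representation has attended to the entire message.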