Researchers at the Tokyo-based startup Sakana AI have developed a new technique that enables language models to use memory more efficiently, helping enterprises cut the costs of building applications on top of large language models (LLMs) and other Transformer-based models.
The technique, called ‘universal transformer memory,’ uses small auxiliary neural networks to optimize LLMs so they keep the pieces of information that matter and discard redundant details from their context.
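The article does not spell out the mechanism, but the core idea of a learned module that scores context tokens and drops low-scoring ones can be sketched as below. The scorer, threshold, and tensor shapes are illustrative assumptions for this sketch, not Sakana AI's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_tokens(token_vectors, w):
    # Hypothetical scoring network: a single linear layer followed by
    # a sigmoid, giving each token a keep-probability in (0, 1).
    logits = token_vectors @ w
    return 1.0 / (1.0 + np.exp(-logits))

def prune_context(token_vectors, w, threshold=0.5):
    # Keep only tokens whose learned score clears the threshold;
    # the rest are treated as redundant and removed from context.
    scores = score_tokens(token_vectors, w)
    keep = scores >= threshold
    return token_vectors[keep], keep

# Toy context: 8 tokens with 4-dim embeddings, random scorer weights.
context = rng.normal(size=(8, 4))
w = rng.normal(size=4)
pruned, mask = prune_context(context, w)
print(f"kept {int(mask.sum())} of {len(mask)} tokens")
```

In a real system the scorer would be trained (Sakana AI reports using evolutionary optimization) and applied to the model's cached attention memory rather than raw embeddings; this toy version only shows the keep/discard decision.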
From VentureBeat.