Large language models (LLMs) such as GPT and Llama are driving remarkable advances in AI, but research aimed at improving their explainability and reliability is constrained by the massive resources required to examine and adjust their behavior.
To tackle this challenge, a Manchester research team led by Dr. Danilo S. Carvalho and Dr. André Freitas has developed two new software frameworks, LangVAE and LangSpace, that substantially reduce the hardware and energy needed to control and test LLMs in the service of explainable AI. Their paper is published on the arXiv preprint server.
Their technique builds compressed language representations from LLMs, making it possible to interpret and control these models with geometric methods: the model's internal language patterns are treated as points and shapes in a vector space that can be measured, compared and adjusted, all without altering the models themselves. Crucially, the approach cuts computational resource usage by more than 90% compared with previous techniques.
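The geometric idea can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical illustration using a generic off-the-shelf sentence encoder (the sentence-transformers library and the "all-MiniLM-L6-v2" model are stand-in assumptions, not the LangVAE or LangSpace API): each sentence becomes a fixed-size vector, and those vectors can then be measured, compared and blended with ordinary linear algebra.

```python
# Minimal sketch of "language patterns as points in space" -- a hypothetical
# illustration with a generic sentence encoder, NOT the LangVAE/LangSpace API.
import numpy as np
from sentence_transformers import SentenceTransformer

# Any off-the-shelf encoder would do; this particular model is an assumption.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The movie was a delight from start to finish.",
    "The film was thoroughly enjoyable.",
    "The experiment failed to reproduce the results.",
]
points = encoder.encode(sentences)  # each sentence -> one vector ("point")

def cosine(u, v):
    """Measure: angular similarity between two sentence 'points'."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compare: paraphrases sit close together; unrelated text sits far away.
print(cosine(points[0], points[1]))  # high -- similar meaning
print(cosine(points[0], points[2]))  # low  -- different meaning

# Adjust: walk a straight line between two representations. In a framework
# that pairs the encoder with a decoder (as in a language variational
# autoencoder), each interpolated point could be decoded back into a
# sentence that blends the two meanings.
alpha = 0.5
midpoint = (1 - alpha) * points[0] + alpha * points[2]
print(cosine(midpoint, points[0]), cosine(midpoint, points[2]))
```

In the team's setting, the compressed representation is learned from the LLM itself and coupled to a decoder, so operations like the interpolation above become a way of probing and steering what the model expresses, without retraining or modifying the underlying model.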