Lost in the middle: How LLM architecture and training data shape AI’s position bias

Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.

This “position bias” means that if a lawyer uses an LLM-powered virtual assistant to retrieve a particular phrase from a 30-page affidavit, the model is more likely to find the right text when it appears on the first or last pages.
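A common way to observe this effect is a “needle in a haystack” probe: plant a known fact at different relative depths in a long context and measure how often the model retrieves it. The Python sketch below illustrates the idea; it is not from the MIT study. The `query_model` function is a placeholder for whichever LLM API you use, and the filler text, needle, depths, and trial count are illustrative assumptions.

```python
# Minimal sketch of a "needle in a haystack" probe for position bias.
# All constants here are illustrative; swap in your own corpus and model.

FILLER = "The quick brown fox jumps over the lazy dog. "
NEEDLE = "The secret passphrase is 'bluebird-42'."
QUESTION = "What is the secret passphrase?"


def build_context(depth: float, n_sentences: int = 400) -> str:
    """Embed NEEDLE at a relative depth (0.0 = start, 1.0 = end)."""
    sentences = [FILLER] * n_sentences
    sentences.insert(int(depth * n_sentences), NEEDLE + " ")
    return "".join(sentences)


def query_model(prompt: str) -> str:
    """Placeholder: replace with a real call to your LLM of choice."""
    raise NotImplementedError("wire this to an actual model API")


def probe(depths=(0.0, 0.25, 0.5, 0.75, 1.0), trials: int = 10) -> dict:
    """Return retrieval accuracy for each needle depth."""
    results = {}
    for depth in depths:
        hits = 0
        for _ in range(trials):
            context = build_context(depth)
            answer = query_model(f"{context}\n\nQuestion: {QUESTION}")
            hits += "bluebird-42" in answer
        results[depth] = hits / trials
    return results


if __name__ == "__main__":
    # Under position bias, accuracy tends to be highest near depths
    # 0.0 and 1.0 and to dip in the middle (a U-shaped curve).
    for depth, acc in probe().items():
        print(f"needle at {depth:.0%} depth -> accuracy {acc:.0%}")
```

Averaging over multiple trials matters because sampled model outputs vary; the characteristic U-shaped accuracy curve only emerges in aggregate.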

MIT researchers have identified the mechanism behind this phenomenon, tracing it to choices in model architecture and training data.
