Nov 6, 2022

ROME: Locating and Editing Factual Associations in GPT (Paper Explained & Author Interview)

Posted in categories: computing, neuroscience

Large language models can store vast numbers of facts about the world, but little is known about how they actually do this. This paper aims to discover where and how factual associations are stored and recalled in GPT models, and then proposes a mechanism for the targeted editing of such facts, in the form of a simple rank-one update to a single MLP layer. This has wide implications both for how we understand these models’ inner workings and for our ability to gain greater control over such models in the future.
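
To make the editing operation concrete, here is a minimal PyTorch sketch of a rank-one update that rewires a single key-to-value mapping in an MLP projection matrix. It is a simplification, not the paper's exact formula: full ROME additionally weights the update by the inverse covariance of observed keys so that other stored associations are disturbed as little as possible. All names here are illustrative.

```python
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Return W' = W + (v - W k) k^T / (k^T k): a rank-one update that
    forces W' @ k == v while leaving directions orthogonal to k unchanged.
    W: (d_out, d_in) MLP projection weight; k: (d_in,) key vector for the
    subject; v: (d_out,) value vector encoding the new fact."""
    residual = v - W @ k                         # what the current weight gets wrong
    update = torch.outer(residual, k) / (k @ k)  # rank-one correction
    return W + update

# Toy check: after the edit, the key maps exactly to the new value.
d_in, d_out = 8, 4
W = torch.randn(d_out, d_in)
k = torch.randn(d_in)
v = torch.randn(d_out)
W_new = rank_one_edit(W, k, v)
assert torch.allclose(W_new @ k, v, atol=1e-5)
```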

OUTLINE:
0:00 — Introduction.
1:40 — What are the main questions in this subfield?
6:55 — How causal tracing reveals where facts are stored.
18:40 — Clever experiments show the importance of MLPs.
24:30 — How do MLPs store information?
29:10 — How to edit language model knowledge with precision?
36:45 — What does it mean to know something?
39:00 — Experimental Evaluation & the CounterFact benchmark.
45:40 — How to obtain the required latent representations?
51:15 — Where is the best location in the model to perform edits?
58:00 — What do these models understand about language?
1:02:00 — Questions for the community.

Paper: https://arxiv.org/abs/2202.05262
Follow-up paper on Mass-Editing Memory in a Transformer: https://arxiv.org/abs/2210.

Abstract:
We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model’s factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task, comparable to existing methods. To perform a more sensitive evaluation, we also evaluate ROME on a new dataset of counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available at this https URL
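
The causal intervention described in the abstract can also be sketched in a few lines. The following is a simplified illustration, not the authors' released code: it uses GPT-2 small via HuggingFace transformers, corrupts the subject embeddings with a crude fixed noise scale (the paper calibrates noise to embedding statistics), and patches whole block outputs rather than isolating MLP contributions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The Space Needle is located in the city of"
inputs = tok(prompt, return_tensors="pt")
n_subject = len(tok("The Space Needle")["input_ids"])  # length of the subject token span

# 1) Clean run: cache every layer's hidden states and the clean prediction.
with torch.no_grad():
    clean = model(**inputs, output_hidden_states=True)
clean_hidden = clean.hidden_states        # (n_layer + 1) tensors of shape (1, seq, dim)
target_id = clean.logits[0, -1].argmax()  # clean top prediction (ideally " Seattle")

# 2) Corrupted run: add noise to the subject's input embeddings.
embeds = model.transformer.wte(inputs["input_ids"]).detach().clone()
embeds[0, :n_subject] += 0.1 * torch.randn_like(embeds[0, :n_subject])  # noise scale is illustrative

def restored_prob(layer: int, pos: int) -> float:
    """Run on corrupted embeddings, but patch the clean hidden state back in
    at one (layer, token) site; return the recovered target probability."""
    def hook(module, inp, out):
        out[0][0, pos] = clean_hidden[layer + 1][0, pos]  # GPT-2 blocks output a tuple
        return out
    handle = model.transformer.h[layer].register_forward_hook(hook)
    with torch.no_grad():
        logits = model(inputs_embeds=embeds).logits
    handle.remove()
    return torch.softmax(logits[0, -1], dim=-1)[target_id].item()

# Sweep layers at the last subject token: recovery peaking in the middle layers
# is the paper's evidence that mid-layer modules mediate factual recall.
effects = [restored_prob(l, n_subject - 1) for l in range(model.config.n_layer)]
```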

Authors: Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov.

Links:
Homepage: https://ykilcher.com.
Merch: https://ykilcher.com/merch.
YouTube: https://www.youtube.com/c/yannickilcher.
Twitter: https://twitter.com/ykilcher.
Discord: https://ykilcher.com/discord.
LinkedIn: https://www.linkedin.com/in/ykilcher.
