An Information-Theoretic Approach for Detecting Edits in AI-Generated Text

Posted in robotics/AI

Abstract: We propose a method to determine whether a given article was written entirely by a generative language model or whether it contains edits by a different author, possibly a human. Our approach applies multiple tests for the origin of individual sentences or other pieces of text and combines these tests using a method sensitive to alternatives in which the non-null effects are few and scattered across the text in unknown locations. Notably, this method also identifies the specific pieces of text suspected to contain edits. We demonstrate the effectiveness of the method in detecting edits through extensive evaluations on real data and analyze the factors affecting its success. In particular, we discuss optimality properties under a theoretical framework for text editing in which sentences are generated mainly by the language model, except perhaps for a few that may have originated via a different mechanism. Our analysis raises several interesting research questions at the intersection of information theory and data science.
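The combining step lends itself to a compact illustration. The abstract does not name the combination statistic, but one standard choice sensitive to a few non-null effects scattered in unknown locations is Higher Criticism (Donoho and Jin, 2004). The sketch below is a minimal, hypothetical rendering of the pipeline, not the paper's implementation: it assumes we already have a per-sentence surprise score under the language model (e.g., log-perplexity), converts scores to empirical p-values against a null sample of model-generated sentences, and combines them with Higher Criticism; all function names and the calibration scheme are illustrative.

```python
import numpy as np

def higher_criticism(pvals, gamma=0.5):
    """Higher Criticism statistic (Donoho and Jin, 2004) over per-sentence
    p-values. Sensitive to 'rare/weak' alternatives: a few non-null
    sentences scattered among many null (model-generated) ones."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(gamma * n))          # search only the smallest gamma-fraction
    return float(np.max(hc[:k]))

def flag_suspects(pvals, gamma=0.5):
    """Indices of sentences whose p-values fall at or below the threshold
    maximizing the HC objective: the usual HC-based selection rule, here
    playing the role of 'identifying pieces of text suspected to contain
    edits'."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ps = p[order]
    n = len(ps)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - ps) / np.sqrt(ps * (1 - ps))
    k = max(1, int(gamma * n))
    istar = int(np.argmax(hc[:k]))
    return np.sort(order[: istar + 1])

def sentence_pvalues(scores, null_scores):
    """Hypothetical per-sentence origin test: empirical p-value of each
    sentence's surprise score (e.g., log-perplexity under the LM) against a
    null sample of scores from purely model-generated sentences."""
    null = np.sort(np.asarray(null_scores, dtype=float))
    ranks = np.searchsorted(null, np.asarray(scores, dtype=float), side="left")
    return 1.0 - ranks / (len(null) + 1.0)   # approx. P(null >= observed)

# Toy usage: mostly uniform p-values (pure LM text) with a few small ones
# planted at scattered locations to play the role of human edits.
rng = np.random.default_rng(0)
pvals = rng.uniform(size=200)
pvals[[10, 57, 141]] = [1e-4, 5e-4, 2e-3]
print(higher_criticism(pvals))   # large vs. the all-null case suggests edits
print(flag_suspects(pvals))      # should roughly recover the planted indices
```

Under the global null, where every sentence is model-generated and the p-values are uniform, the statistic stays moderate; planting even three tiny p-values among 200 drives it far into the tail, which is exactly the "few and scattered" regime the abstract targets.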
