Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation

Abstract: Scaling language models unlocks impressive capabilities, but the accompanying computational and memory demands make both training and deployment expensive. Existing efficiency efforts typically target either parameter sharing or adaptive computation, leaving open the question of how to attain both simultaneously. We introduce Mixture-of-Recursions (MoR), a unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens.
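The core idea — one shared block reused across recursion steps, with a lightweight router choosing how many steps each token gets — can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: `shared_block`, `route_depths`, and `mor_forward` are hypothetical names, the "block" is a simple residual MLP standing in for a shared transformer layer stack, and the router is an untrained sigmoid scorer mapped to an integer depth.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_block(x, W):
    # The single shared parameter block, reused at every recursion step
    # (here a toy residual update; in MoR this would be a transformer stack).
    return x + np.tanh(x @ W)

def route_depths(x, w_r, max_depth):
    # Lightweight router: a per-token linear score squashed to (0, 1),
    # then mapped to an integer recursion depth in {1, ..., max_depth}.
    scores = 1.0 / (1.0 + np.exp(-(x @ w_r)))
    return np.minimum((scores * max_depth).astype(int) + 1, max_depth)

def mor_forward(x, W, w_r, max_depth=4):
    # Apply the shared block recursively; at each step, only tokens whose
    # assigned depth has not yet been reached are updated (adaptive compute).
    depths = route_depths(x, w_r, max_depth)
    out = x.copy()
    for step in range(1, max_depth + 1):
        active = depths >= step  # tokens still being refined at this step
        out[active] = shared_block(out[active], W)
    return out, depths

d = 8
x = rng.standard_normal((16, d))        # 16 tokens, hidden dim 8
W = rng.standard_normal((d, d)) * 0.1   # shared block weights
w_r = rng.standard_normal(d)            # router weights
out, depths = mor_forward(x, W, w_r)
```

Each token's output is exactly the shared block applied `depths[i]` times to that token, so easy tokens exit early while hard tokens get more refinement, all with one set of weights.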
