Heavy-tailed update distributions arise from information-driven self-organization in nonequilibrium learning

Like human decision-making under real-world constraints, artificial neural networks may balance free exploration in parameter space with task-relevant adaptation. In this study, we identify consistent signatures of criticality during neural network training and provide theoretical evidence that such scaling behavior arises naturally from information-driven self-organization: a dynamic balance between the maximum entropy principle, which promotes unbiased exploration, and a mutual information constraint, which ties updates to the task objective. We numerically demonstrate that the power-law exponent of the update distribution remains stable throughout training, supporting the presence of self-organized criticality.
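
The abstract's central empirical claim, that the power-law exponent of the update distribution stays roughly constant across training, can be probed with a simple measurement loop. The sketch below is illustrative only: the toy model, the task, the hyperparameters, and the use of a Hill estimator for the tail exponent are all assumptions for demonstration, not the paper's actual method (the authors may well use a different fitting procedure).

```python
# Minimal sketch (all names and hyperparameters are illustrative, not from
# the paper): train a small MLP and check whether per-step parameter updates
# are heavy-tailed by estimating a power-law tail exponent with the Hill
# estimator at several points during training.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy regression task standing in for the paper's setups.
X = torch.randn(512, 16)
y = torch.sin(X.sum(dim=1, keepdim=True))

model = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

def hill_exponent(samples, tail_fraction=0.05):
    """Hill estimator of the tail index alpha, i.e. the exponent in
    P(|u| > x) ~ x^(-alpha), computed from the largest `tail_fraction`
    of the |update| magnitudes."""
    x = np.sort(np.abs(samples))[::-1]        # magnitudes, descending
    k = max(int(len(x) * tail_fraction), 10)  # number of tail samples
    return k / np.sum(np.log(x[:k] / x[k]))

for step in range(1, 2001):
    before = [p.detach().clone() for p in model.parameters()]
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()
    # Magnitude of every individual parameter update at this step.
    updates = np.concatenate(
        [(p.detach() - b).flatten().numpy()
         for p, b in zip(model.parameters(), before)]
    )
    if step % 500 == 0:
        alpha = hill_exponent(updates)
        print(f"step {step:5d}  tail exponent alpha approx {alpha:.2f}")
```

Under the paper's claim, the printed exponent should hover around a stable value rather than drift as training progresses; a rigorous test would additionally compare the power-law fit against alternatives such as a lognormal, which this sketch does not do.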
