{"id":222616,"date":"2025-09-28T08:03:43","date_gmt":"2025-09-28T13:03:43","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/09\/energy-based-transformers-ebts-use-gradient-descent-to-gradually-predict-the-next-token"},"modified":"2025-09-28T08:03:43","modified_gmt":"2025-09-28T13:03:43","slug":"energy-based-transformers-ebts-use-gradient-descent-to-gradually-predict-the-next-token","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/09\/energy-based-transformers-ebts-use-gradient-descent-to-gradually-predict-the-next-token","title":{"rendered":"Energy-Based Transformers (EBTs) Use Gradient Descent To Gradually Predict the Next Token"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/energy-based-transformers-ebts-use-gradient-descent-to-gradually-predict-the-next-token2.jpg\"><\/a><\/p>\n<p>A new type of transformer can check its work. Instead of guessing the next output token in one shot like a typical transformer, it starts with a rough version of the token and refines it step by step.<\/p>\n<p><strong>What\u2019s new: <\/strong>Alexi Gladstone and colleagues at University of Virginia, University of Illinois Urbana-Champaign, Amazon, Stanford, and Harvard proposed the <a href=\"https:\/\/arxiv.org\/abs\/2507.02092?utm_campaign=The%20Batch&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz--bIpWoAA0d8Ugha6WmwlzJEFeLwluYNZSx-7AAH9r5Kdq3UTcUJwY1X4RnbL0IOgx_32-d\" rel=\"noopener\">Energy-Based Transformer<\/a> (EBT). Early experiments show that it scales more efficiently than standard transformers at relatively small sizes.<\/p>\n<p><strong>Energy-based model basics:<\/strong> For a given input context paired with a candidate response (for example, a prompt and a potential next token), an energy-based model produces a number called \u201cenergy\u201d that represents how likely the candidate response is to follow the context. 
During training, the model learns to assign low energy to likely context\/candidate-response pairs and high energy to unlikely ones.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A new type of transformer can check its work. Instead of guessing the next output token in one shot like a typical transformer, it starts with a rough version of the token and improves it step by step. What\u2019s new: Alexi Gladstone and colleagues at University of Virginia, University of Illinois Urbana-Champaign, Amazon, Stanford, and [\u2026]<\/p>\n","protected":false},"author":709,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1497],"tags":[],"class_list":["post-222616","post","type-post","status-publish","format-standard","hentry","category-energy"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/222616","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/709"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=222616"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/222616\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=222616"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=222616"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=222616"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}