{"id":202375,"date":"2024-12-25T10:14:03","date_gmt":"2024-12-25T16:14:03","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/12\/new-llm-technique-slashes-memory-costs-up-to-75-percent"},"modified":"2024-12-25T10:14:03","modified_gmt":"2024-12-25T16:14:03","slug":"new-llm-technique-slashes-memory-costs-up-to-75-percent","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/12\/new-llm-technique-slashes-memory-costs-up-to-75-percent","title":{"rendered":"New LLM Technique Slashes Memory Costs up to 75 Percent"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/new-llm-technique-slashes-memory-costs-up-to-75-percent.jpg\"><\/a><\/p>\n<p>Researchers at the Tokyo-based startup Sakana AI have developed a new technique that enables language models to use memory more efficiently, helping enterprises cut the costs of building applications on top of large language models (LLMs) and other Transformer-based models.<\/p>\n<p>The technique, called \u2018universal transformer memory,\u2019 uses special neural networks to optimize LLMs to keep bits of information that matter and discard redundant details from their context.<\/p>\n<p>From <a href=\"https:\/\/venturebeat.com\/ai\/new-llm-optimization-technique-slashes-memory-costs-up-to-75\/\" target=\"_blank\" rel=\"noreferrer noopener\">VentureBeat<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at the Tokyo-based startup Sakana AI have developed a new technique that enables language models to use memory more efficiently, helping enterprises cut the costs of building applications on top of large language models (LLMs) and other Transformer-based models.\nThe technique, called \u2018universal transformer memory,\u2019 uses special neural networks to optimize LLMs to keep [\u2026]<\/p>\n","protected":false},"author":662,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-202375","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/202375","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/662"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=202375"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/202375\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=202375"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=202375"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=202375"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}