{"id":188388,"date":"2024-04-30T00:24:09","date_gmt":"2024-04-30T05:24:09","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/04\/meaningless-fillers-enable-complex-thinking-in-large-language-models"},"modified":"2024-04-30T00:24:09","modified_gmt":"2024-04-30T05:24:09","slug":"meaningless-fillers-enable-complex-thinking-in-large-language-models","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/04\/meaningless-fillers-enable-complex-thinking-in-large-language-models","title":{"rendered":"Meaningless fillers enable complex thinking in large language models"},"content":{"rendered":"<p>Researchers have found that AI models can solve complex tasks like \u201c3SUM\u201d by using simple dots like \u201c\u2026\u201d instead of sentences.<\/p>\n<hr>\n<p>\n<strong>Researchers have found that specifically trained LLMs can solve complex problems just as well using dots like \u201c\u2026\u201d instead of full sentences. This could make it harder to monitor what\u2019s happening inside these models.<\/strong><\/p>\n<p>The researchers trained Llama language models to solve a difficult computational problem called \u201c3SUM\u201d, in which the model has to find three numbers in a given set that sum to zero.<\/p>\n<p>Usually, AI models solve such tasks by explaining the steps in full sentences, <a href=\"https:\/\/the-decoder.com\/deeper-insights-for-ai-language-models-chain-of-thought-prompting-as-a-key-factor\/\">known as \u201cchain of thought\u201d prompting<\/a>. But the researchers replaced these natural language explanations with repeated dots, called filler tokens.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers have found that AI models can solve complex tasks like \u201c3SUM\u201d by using simple dots like \u201c\u2026\u201d instead of sentences. 
Researchers have found that specifically trained LLMs can solve complex problems just as well using dots like \u201c\u2026\u201d instead of full sentences. This could make it harder to control what\u2019s happening in these [\u2026]<\/p>\n","protected":false},"author":359,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2229,6],"tags":[],"class_list":["post-188388","post","type-post","status-publish","format-standard","hentry","category-mathematics","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/188388","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/359"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=188388"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/188388\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=188388"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=188388"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=188388"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}