{"id":211220,"date":"2025-04-12T02:14:38","date_gmt":"2025-04-12T07:14:38","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/04\/deepseek-r1-thoughtology-lets-think-about-llm-reasoning"},"modified":"2025-04-12T02:14:38","modified_gmt":"2025-04-12T07:14:38","slug":"deepseek-r1-thoughtology-lets-think-about-llm-reasoning","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/04\/deepseek-r1-thoughtology-lets-think-about-llm-reasoning","title":{"rendered":"DeepSeek-R1 Thoughtology: Let\u2019s think about LLM reasoning"},"content":{"rendered":"<p style=\"padding-right: 20px\"><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/deepseek-r1-thoughtology-lets-think-about-llm-reasoning.jpg\"><\/a><\/p>\n<p>Large Reasoning Models like DeepSeek-R1 mark a fundamental shift in how LLMs approach complex problems. Instead of directly producing an answer for a given input, DeepSeek-R1 creates detailed multi-step reasoning chains, seemingly \u2018thinking\u2019 about a problem before providing an answer. This reasoning process is publicly available to the user, creating endless opportunities for studying the reasoning behaviour of the model and opening up the field of Starting from a taxonomy of DeepSeek-R1\u2019s basic building blocks of reasoning, our analyses on DeepSeek-R1 investigate the impact and controllability of thought length, management of long or confusing contexts, cultural and safety concerns, and the status of DeepSeek-R1 vis-\u00e0-vis cognitive phenomena, such as human-like language processing and world modelling. Our findings paint a nuanced picture. Notably, we show DeepSeek-R1 has a \u2018sweet spot\u2019 of reasoning, where extra inference time can impair model performance. Furthermore, we find a tendency for DeepSeek-R1 to persistently ruminate on previously explored problem formulations, obstructing further exploration. 
We also note strong safety vulnerabilities of DeepSeek-R1 compared to its non-reasoning counterpart, vulnerabilities that can also compromise safety-aligned LLMs.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Large Reasoning Models like DeepSeek-R1 mark a fundamental shift in how LLMs approach complex problems. Instead of directly producing an answer for a given input, DeepSeek-R1 creates detailed multi-step reasoning chains, seemingly \u2018thinking\u2019 about a problem before providing an answer. This reasoning process is publicly available to the user, creating endless opportunities for studying the [\u2026]<\/p>\n","protected":false},"author":709,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-211220","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/211220","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/709"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=211220"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/211220\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=211220"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=211220"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=211220"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}