{"id":147274,"date":"2022-09-30T13:23:56","date_gmt":"2022-09-30T18:23:56","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/09\/posits-a-new-kind-of-number-improves-the-math-of-ai"},"modified":"2022-09-30T13:23:56","modified_gmt":"2022-09-30T18:23:56","slug":"posits-a-new-kind-of-number-improves-the-math-of-ai","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/09\/posits-a-new-kind-of-number-improves-the-math-of-ai","title":{"rendered":"Posits, a New Kind of Number, Improves the Math of AI"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/posits-a-new-kind-of-number-improves-the-math-of-ai.jpg\"><\/a><\/p>\n<p>Training the large neural networks behind many modern AI tools requires real computational might: For example, <a href=\"https:\/\/spectrum.ieee.org\/large-language-models-meta-openai\" target=\"_self\" class=\"\">OpenAI\u2019s most advanced language model, GPT-3<\/a>, required an astounding million billion billions of operations to <a href=\"https:\/\/spectrum.ieee.org\/mlperf-rankings-2022\" target=\"_self\" class=\"\">train<\/a>, and cost about US $5 million in compute time. Engineers think they have figured out a way to ease the burden by using a different way of representing numbers.<\/p>\n<p>Back in 2017, <a href=\"http:\/\/www.johngustafson.net\/\" rel=\"noopener noreferrer\" target=\"_blank\" class=\"\">John Gustafson<\/a>, then jointly appointed at <a href=\"https:\/\/www.a-star.edu.sg\/acrc\" rel=\"noopener noreferrer\" target=\"_blank\" class=\"\">A*STAR Computational Resources Centre<\/a> and the National University of Singapore, and <a href=\"https:\/\/scholar.google.com\/citations?user=BE2yVIYAAAAJ&hl=en\" rel=\"noopener noreferrer\" target=\"_blank\" class=\"\">Isaac Yonemoto<\/a>, then at Interplanetary Robot and Electric Brain Co., developed a <a href=\"http:\/\/www.johngustafson.net\/pdfs\/BeatingFloatingPoint.pdf\" rel=\"noopener noreferrer\" target=\"_blank\" class=\"\">new way of representing numbers<\/a>. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic processors used today.<\/p>\n<p>Now, a team of researchers at the <a href=\"https:\/\/www.ucm.es\/english\" target=\"_blank\" class=\"\">Complutense University of Madrid<\/a> have <a href=\"https:\/\/ieeexplore.ieee.org\/document\/9817027\/references#references\" rel=\"noopener noreferrer\" target=\"_blank\" class=\"\">developed the first processor core<\/a> implementing the posit standard in hardware and showed that, bit-for-bit, the accuracy of a basic computational task increased by up to four orders of magnitude, compared to computing using standard floating-point numbers. They presented their results at last week\u2019s <a href=\"https:\/\/arith2022.arithsymposium.org\/program.html\" rel=\"noopener noreferrer\" target=\"_blank\" class=\"\">IEEE Symposium on Computer Arithmetic<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Training the large neural networks behind many modern AI tools requires real computational might: For example, OpenAI\u2019s most advanced language model, GPT-3, required an astounding million billion billions of operations to train, and cost about US $5 million in compute time. 
Now, a team of researchers at the Complutense University of Madrid has developed the first processor core to implement the posit standard in hardware (https://ieeexplore.ieee.org/document/9817027/references#references) and showed that, bit for bit, posits improved the accuracy of a basic computational task by up to four orders of magnitude compared with standard floating-point numbers. The team presented its results at last week's IEEE Symposium on Computer Arithmetic (https://arith2022.arithsymposium.org/program.html).
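Comparisons like this are typically made by running the same computation in the format under test and measuring the gap against a high-precision reference. The numpy sketch below applies that recipe to a dot product using IEEE float16 and float32 against a float64 reference; it is an analogy for the methodology, not a reproduction of the Madrid team's benchmark, and a posit library would slot into the same harness in place of the numpy dtypes.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)
y = rng.standard_normal(100_000)
exact = np.dot(x, y)                        # float64 "ground truth"

for dtype in (np.float16, np.float32):
    # Round the inputs to the narrow format, then compute in float64,
    # so only representation error (not accumulation order) shows up.
    xq = x.astype(dtype).astype(np.float64)
    yq = y.astype(dtype).astype(np.float64)
    rel_err = abs(np.dot(xq, yq) - exact) / abs(exact)
    print(f"{np.dtype(dtype).name}: relative error {rel_err:.1e}")
```

A format that carries more effective precision around the values at hand, as posits do near 1, shrinks that error; the Madrid hardware makes the same kind of bit-for-bit comparison at the arithmetic-unit level.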