{"id":126830,"date":"2021-08-28T16:03:24","date_gmt":"2021-08-28T23:03:24","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2021\/08\/cerebras-upgrades-trillion-transistor-chip-to-train-brain-scale-ai"},"modified":"2021-08-28T16:03:24","modified_gmt":"2021-08-28T23:03:24","slug":"cerebras-upgrades-trillion-transistor-chip-to-train-brain-scale-ai","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2021\/08\/cerebras-upgrades-trillion-transistor-chip-to-train-brain-scale-ai","title":{"rendered":"Cerebras Upgrades Trillion-Transistor Chip to Train \u2018Brain-Scale\u2019 AI"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/cerebras-upgrades-trillion-transistor-chip-to-train-brain-scale-ai2.jpg\"><\/a><\/p>\n<p>Much of the recent progress in <a href=\"https:\/\/singularityhub.com\/tag\/artificial-intelligence\/\">AI<\/a> has come from building ever-larger neural networks. A new chip powerful enough to handle \u201cbrain-scale\u201d models could turbo-charge this approach.<\/p>\n<p>Chip startup <a href=\"https:\/\/cerebras.net\/\">Cerebras<\/a> leaped into the limelight in2019when it came out of stealth to reveal a 1.2-trillion-transistor chip. The size of a dinner plate, the chip is called the Wafer Scale Engine and was the <a href=\"https:\/\/singularityhub.com\/2019\/08\/26\/this-giant-ai-chip-is-the-size-of-an-ipad-and-holds-1-2-trillion-transistors\/\">world\u2019s largest computer chip<\/a>. Earlier this year Cerebras unveiled the Wafer Scale Engine 2 (<a href=\"https:\/\/f.hubspotusercontent30.net\/hubfs\/8968533\/WSE-2%20Datasheet.pdf\">WSE-2<\/a>), which more than doubled the number of transistors to 2.6 trillion.<\/p>\n<p>Now the company has outlined a series of innovations that mean its latest chip can train a neural network with up to 120 trillion parameters. For reference, OpenAI\u2019s revolutionary GPT-3 language model contains <a href=\"https:\/\/singularityhub.com\/2020\/06\/18\/openais-new-text-generator-writes-even-more-like-a-human\/\">175 billion parameters<\/a>. The <a href=\"https:\/\/venturebeat.com\/2021\/01\/12\/google-trained-a-trillion-parameter-ai-language-model\/\">largest neural network to date<\/a>, which was trained by Google, had 1.6 trillion.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Much of the recent progress in AI has come from building ever-larger neural networks. A new chip powerful enough to handle \u201cbrain-scale\u201d models could turbo-charge this approach. Chip startup Cerebras leaped into the limelight in2019when it came out of stealth to reveal a 1.2-trillion-transistor chip. 