{"id":110715,"date":"2020-08-01T08:22:22","date_gmt":"2020-08-01T15:22:22","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2020\/08\/amd-radeon-instinct-mi100-acturus-teased-nvidia-ampere-destroyer"},"modified":"2020-08-01T08:22:22","modified_gmt":"2020-08-01T15:22:22","slug":"amd-radeon-instinct-mi100-acturus-teased-nvidia-ampere-destroyer","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2020\/08\/amd-radeon-instinct-mi100-acturus-teased-nvidia-ampere-destroyer","title":{"rendered":"AMD Radeon Instinct MI100 Acturus teased, NVIDIA Ampere destroyer?!"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/amd-radeon-instinct-mi100-acturus-teased-nvidia-ampere-destroyer3.jpg\"><\/a><\/p>\n<p>Even in a dual-socket AMD EPYC Rome\/Milan server with 4 x MI100 PCIe-based accelerators, we\u2019re looking at 128GB of HBM memory on offer with 4.9TB\/sec of bandwidth. We see a drop down to 136 TFLOPs here as well.<\/p>\n<p>The purported AMD Radeon Instinct MI100 accelerator is said to be around 13% faster in FP32 compute performance than NVIDIA\u2019s new Ampere A100 accelerator. The performance-to-value ratio is also much better, with the MI100 offering 2.4x better value than a V100S setup and 50% better value than the Ampere A100.<\/p>\n<p><center><a aria-label=\"open enlarged image\" href=\"https:\/\/www.tweaktown.com\/image.php?image=https:\/\/lifeboat.com\/blog.images\/amd-radeon-instinct-mi100-acturus-teased-nvidia-ampere-destroyer4.jpg\" target=\"_blank\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/lifeboat.com\/blog.images\/amd-radeon-instinct-mi100-acturus-teased-nvidia-ampere-destroyer4.jpg\" width=\"620\" height=\"349\" alt=\"AMD Radeon Instinct MI100 Acturus teased, NVIDIA Ampere destroyer?! 03 | TweakTown.com\" title=\"AMD Radeon Instinct MI100 Acturus teased, NVIDIA Ampere destroyer?! 03 | TweakTown.com\"><\/a><\/center><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Even in a dual-socket AMD EPYC Rome\/Milan server with 4 x MI100 PCIe-based accelerators, we\u2019re looking at 128GB of HBM memory on offer with 4.9TB\/sec of bandwidth. We see a drop down to 136 TFLOPs here as well. The purported AMD Radeon Instinct MI100 accelerator is said to be around 13% faster in FP32 [\u2026]<\/p>\n","protected":false},"author":513,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1523],"tags":[],"class_list":["post-110715","post","type-post","status-publish","format-standard","hentry","category-computing"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/110715","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/513"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=110715"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/110715\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=110715"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=110715"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=110715"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}