{"id":150919,"date":"2022-11-23T17:23:13","date_gmt":"2022-11-23T23:23:13","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/11\/what-metas-galactica-missteps-mean-for-gpt-4"},"modified":"2022-11-23T17:23:13","modified_gmt":"2022-11-23T23:23:13","slug":"what-metas-galactica-missteps-mean-for-gpt-4","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/11\/what-metas-galactica-missteps-mean-for-gpt-4","title":{"rendered":"What Meta\u2019s Galactica missteps mean for GPT-4"},"content":{"rendered":"<p>Like Rodin\u2019s The Thinker, there was plenty of thinking and pondering about the <a href=\"https:\/\/venturebeat.com\/ai\/why-ai-leaders-need-a-backbone-of-large-language-models\/\">large language model<\/a> (LLM) landscape last week. There were Meta\u2019s missteps over its Galactica LLM public demo and Stanford CRFM\u2019s <a href=\"https:\/\/venturebeat.com\/ai\/stanford-debuts-first-ai-benchmark-to-help-understand-llms\/\">debut<\/a> of its HELM benchmark, which followed weeks of <a href=\"https:\/\/thealgorithmicbridge.substack.com\/p\/gpt-4-rumors-from-silicon-valley\">tantalizing rumors<\/a> about the possible release of OpenAI\u2019s GPT-4 sometime over the next few months.<\/p>\n<p>The online chatter ramped up last Tuesday. 
That\u2019s when Meta AI and Papers With Code <a href=\"https:\/\/wandb.ai\/telidavies\/ml-news\/reports\/Galactica-Open-Source-120B-Param-Scientific-Language-Model-By-Papers-with-Code-Meta-AI--VmlldzoyOTc3MDM2\" target=\"_blank\" rel=\"noreferrer noopener\">announced<\/a> a new open-source LLM called Galactica, which it described in a paper <a href=\"https:\/\/arxiv.org\/abs\/2211.09085\">published on arXiv<\/a> as \u201ca large language model for science\u201d meant to help scientists with \u201cinformation overload.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Check out the on-demand sessions from the Low-Code\/No-Code Summit to learn how to successfully innovate and achieve efficiency by upskilling and scaling citizen developers. Watch now. Like Rodin\u2019s The Thinker, there was plenty of thinking and pondering about the large language model (LLM) landscape last week. There were Meta\u2019s missteps over its Galactica LLM public [\u2026]<\/p>\n","protected":false},"author":359,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-150919","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/150919","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/359"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=150919"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/150919\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=150919"}],"wp:term":
[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=150919"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=150919"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}