{"id":185687,"date":"2024-03-21T16:32:40","date_gmt":"2024-03-21T21:32:40","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/03\/google-might-let-apple-use-gemini-but-apple-still-has-its-own-llm-coming"},"modified":"2024-03-21T16:32:40","modified_gmt":"2024-03-21T21:32:40","slug":"google-might-let-apple-use-gemini-but-apple-still-has-its-own-llm-coming","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/03\/google-might-let-apple-use-gemini-but-apple-still-has-its-own-llm-coming","title":{"rendered":"Google might let Apple use Gemini, but Apple still has its own LLM coming"},"content":{"rendered":"<p><a class=\"blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/logo.google-might-let-apple-use-gemini-but-apple-still-has-its-own-llm-coming2.jpg\"><\/a><\/p>\n<p>Apple quietly submitted a <a href=\"https:\/\/arxiv.org\/pdf\/2403.09611.pdf?utm_source=www.therundown.ai&utm_medium=newsletter&utm_campaign=apple-s-ai-model-revealed\" target=\"_blank\">research paper<\/a> last week on its work on a multimodal large language model (MLLM) called MM1. Apple doesn\u2019t explain the meaning behind the name, but it could stand for MultiModal 1.<\/p>\n<p>Being multimodal, MM1 can work with both text and images. Overall, its capabilities and design are similar to those of Google\u2019s Gemini or Meta\u2019s open-source LLM Llama 2.<\/p>\n<p>An earlier report from <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2024-03-18\/apple-in-talks-to-license-google-gemini-for-iphone-ios-18-generative-ai-tools?srnd=undefined&sref=9hGJlFio\" target=\"_blank\"><em>Bloomberg<\/em><\/a> said Apple was interested in incorporating Google\u2019s Gemini AI engine into the iPhone. 
The two companies are reportedly still in talks to let Apple license Gemini to power some of the generative AI features coming to iOS 18.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Apple quietly submitted a research paper last week on its work on a multimodal large language model (MLLM) called MM1. Apple doesn\u2019t explain the meaning behind the name, but it could stand for MultiModal 1. Being multimodal, MM1 can work with both text and images. Overall, its capabilities [\u2026]<\/p>\n","protected":false},"author":367,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1512,6],"tags":[],"class_list":["post-185687","post","type-post","status-publish","format-standard","hentry","category-mobile-phones","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/185687","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/367"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=185687"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/185687\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=185687"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=185687"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=185687"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}