{"id":184115,"date":"2024-03-01T15:24:53","date_gmt":"2024-03-01T21:24:53","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/03\/anything-in-anything-out-a-new-modular-ai-model"},"modified":"2024-03-01T15:24:53","modified_gmt":"2024-03-01T21:24:53","slug":"anything-in-anything-out-a-new-modular-ai-model","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/03\/anything-in-anything-out-a-new-modular-ai-model","title":{"rendered":"Anything-in anything-out: A new modular AI model"},"content":{"rendered":"<p>Researchers at EPFL have developed a new, uniquely modular machine learning model for flexible decision-making. It can take as input any combination of text, video, image, sound, and time-series data, and then output any number, or combination, of predictions.<\/p>\n<p>We\u2019ve all heard of <a href=\"https:\/\/techxplore.com\/tags\/large+language+models\/\" rel=\"tag\" class=\"\">large language models<\/a>, or LLMs\u2014massive-scale <a href=\"https:\/\/techxplore.com\/tags\/deep+learning+models\/\" rel=\"tag\" class=\"\">deep learning models<\/a> trained on huge amounts of text that form the basis for chatbots like OpenAI\u2019s ChatGPT. Next-generation multimodal models (MMs) can learn from inputs beyond text, including video, images, and sound.<\/p>\n<p>Creating MM models at a smaller scale poses significant challenges, including the problem of remaining robust to non-randomly missing information. This is information that a model lacks, often due to biased availability of resources. It is thus critical to ensure that the model does not learn patterns of biased missingness when making its predictions.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at EPFL have developed a new, uniquely modular machine learning model for flexible decision-making. 
It can take as input any combination of text, video, image, sound, and time-series data, and then output any number, or combination, of predictions. We\u2019ve all heard of large language models, or LLMs\u2014massive-scale deep learning models trained on huge amounts [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-184115","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/184115","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=184115"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/184115\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=184115"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=184115"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=184115"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}