{"id":174090,"date":"2023-10-13T08:22:26","date_gmt":"2023-10-13T13:22:26","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/10\/visual-question-answering-with-frozen-large-language-models"},"modified":"2023-10-13T08:22:26","modified_gmt":"2023-10-13T13:22:26","slug":"visual-question-answering-with-frozen-large-language-models","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/10\/visual-question-answering-with-frozen-large-language-models","title":{"rendered":"Visual Question Answering with Frozen Large Language Models"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/visual-question-answering-with-frozen-large-language-models.jpg\"><\/a><\/p>\n<p>In this article we\u2019ll use a Q-Former, a technique for bridging computer vision and natural language models, to create a visual question answering system. We\u2019ll go over the necessary theory, following the <a class=\"\" href=\"https:\/\/arxiv.org\/abs\/2301.12597\" rel=\"noopener ugc nofollow\" target=\"_blank\">BLIP-2 paper<\/a>, then implement a system which can be used to talk with a large language model about an image.<\/p>\n<p><strong class=\"\">Who is this useful for? <\/strong>Data scientists interested in computer vision, natural language processing, and multimodal modeling.<\/p>\n<p><strong class=\"\">How advanced is this post? <\/strong>Intermediate. You might struggle if you don\u2019t have some experience in both computer vision and natural language processing.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this article we\u2019ll use a Q-Former, a technique for bridging computer vision and natural language models, to create a visual question answering system. We\u2019ll go over the necessary theory, following the BLIP-2 paper, then implement a system which can be used to talk with a large language model about an image. Who is this [\u2026]<\/p>\n","protected":false},"author":556,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-174090","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/174090","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/556"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=174090"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/174090\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=174090"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=174090"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=174090"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}