{"id":187079,"date":"2024-04-09T22:25:39","date_gmt":"2024-04-10T03:25:39","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/04\/paper-page-internlm-xcomposer2-4khd-a-pioneering-large-vision-language-model-handling-resolutions-from-336-pixels-to-4k-hd"},"modified":"2024-04-09T22:25:39","modified_gmt":"2024-04-10T03:25:39","slug":"paper-page-internlm-xcomposer2-4khd-a-pioneering-large-vision-language-model-handling-resolutions-from-336-pixels-to-4k-hd","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/04\/paper-page-internlm-xcomposer2-4khd-a-pioneering-large-vision-language-model-handling-resolutions-from-336-pixels-to-4k-hd","title":{"rendered":"Paper page \u2014 InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/paper-page-internlm-xcomposer2-4khd-a-pioneering-large-vision-language-model-handling-resolutions-from-336-pixels-to-4k-hd3.jpg\"><\/a><\/p>\n<p>From sensetime, shanghai #AI lab, &amp; tsinghua U<\/p>\n<p>InternLM-XComposer2-4KHD<\/p>\n<p>A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD <a href=\"https:\/\/huggingface.co\/papers\/2404\">https:\/\/huggingface.co\/papers\/2404<\/a>.<\/p>\n<p>The Large Vision-Language Model (LVLM) field has seen significant advancements, yet its progression\u2026<\/p>\n<hr>\n<p>Join the discussion on this paper page.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>From sensetime, shanghai #AI lab, &amp; tsinghua U InternLM-XComposer2-4KHD A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD https:\/\/huggingface.co\/papers\/2404. The Large Vision-Language Model (LVLM) field has seen significant advancements, yet its progression\u2026 Join the discussion on this paper page.<\/p>\n","protected":false},"author":709,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-187079","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/187079","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/709"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=187079"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/187079\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=187079"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=187079"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=187079"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}