{"id":227677,"date":"2025-12-23T06:27:20","date_gmt":"2025-12-23T12:27:20","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/12\/new-computer-vision-method-links-photos-to-floor-plans-with-pixel-level-accuracy"},"modified":"2025-12-23T06:27:20","modified_gmt":"2025-12-23T12:27:20","slug":"new-computer-vision-method-links-photos-to-floor-plans-with-pixel-level-accuracy","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/12\/new-computer-vision-method-links-photos-to-floor-plans-with-pixel-level-accuracy","title":{"rendered":"New computer vision method links photos to floor plans with pixel-level accuracy"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/new-computer-vision-method-links-photos-to-floor-plans-with-pixel-level-accuracy2.jpg\"><\/a><\/p>\n<p>For people, matching what they see on the ground to a map is second nature. For computers, it has been a major challenge. A Cornell research team has introduced a new method that helps machines make these connections\u2014an advance that could improve robotics, navigation systems, and 3D modeling.<\/p>\n<p>The work, presented at the 2025 <a href=\"https:\/\/neurips.cc\/\" target=\"_blank\">Conference on Neural Information Processing Systems<\/a> and <a href=\"https:\/\/arxiv.org\/abs\/2511.18559\" target=\"_blank\">published<\/a> on the <i>arXiv<\/i> preprint server, tackles a major weakness in today\u2019s computer vision tools. Current systems perform well when comparing similar images, but they falter when the views differ dramatically, such as linking a street-level photo to a simple map or architectural drawing.<\/p>\n<p>The new approach teaches machines to find pixel-level matches between a photo and a floor plan, even when the two look completely different. 
Kuan Wei Huang, a doctoral student in computer science, is the first author; the co-authors are Noah Snavely, a professor at Cornell Tech; Bharath Hariharan, an associate professor at the Cornell Ann S. Bowers College of Computing and Information Science; and Brandon Li, an undergraduate studying computer science.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>For people, matching what they see on the ground to a map is second nature. For computers, it has been a major challenge. A Cornell research team has introduced a new method that helps machines make these connections\u2014an advance that could improve robotics, navigation systems, and 3D modeling. The work, presented at the 2025 Conference [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-227677","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/227677","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=227677"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/227677\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=227677"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=227677"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=227677"}],"curies":[{"nam
e":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}