{"id":142498,"date":"2022-07-19T14:42:53","date_gmt":"2022-07-19T19:42:53","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/07\/a-system-to-retrieve-images-using-sketches-on-smart-devices"},"modified":"2022-07-19T14:42:53","modified_gmt":"2022-07-19T19:42:53","slug":"a-system-to-retrieve-images-using-sketches-on-smart-devices","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/07\/a-system-to-retrieve-images-using-sketches-on-smart-devices","title":{"rendered":"A system to retrieve images using sketches on smart devices"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/a-system-to-retrieve-images-using-sketches-on-smart-devices3.jpg\"><\/a><\/p>\n<p>Researchers at SketchX, University of Surrey, have recently developed a meta-learning-based model that allows users to retrieve images of specific items simply by sketching them on a tablet, smartphone, or other smart device. This framework was outlined in a paper set to be presented at the European Conference on Computer Vision (ECCV), one of the top three flagship computer vision conferences along with CVPR and ICCV.<\/p>\n<p>\u201cThis is the latest along the line of work on \u2018fine-grained image retrieval,\u2019 a problem that my research lab (SketchX, which I direct and founded back in 2012) pioneered back in 2015, with a paper published in CVPR 2015 titled \u2018Sketch Me That Shoe,\u2019\u201d Yi-Zhe Song, one of the researchers who carried out the study, told TechXplore. \u201cThe idea behind our paper is that it is often hard or impossible to conduct image retrieval at a fine-grained level, (e.g., finding a particular type of shoe at Christmas, but not any shoe).\u201d<\/p>\n<p>In the past, some researchers tried to devise models that could retrieve images based on text or voice descriptions. 
Text might be easier for <a href=\"https:\/\/techxplore.com\/tags\/users\/\" rel=\"tag\" class=\"\">users<\/a> to produce, yet it was found to work only at a coarse level. In other words, text descriptions can become ambiguous and ineffective when trying to capture fine details.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at SketchX, University of Surrey, have recently developed a meta-learning-based model that allows users to retrieve images of specific items simply by sketching them on a tablet, smartphone, or other smart device. This framework was outlined in a paper set to be presented at the European Conference on Computer Vision (ECCV), [\u2026]<\/p>\n","protected":false},"author":599,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1523,1512],"tags":[],"class_list":["post-142498","post","type-post","status-publish","format-standard","hentry","category-computing","category-mobile-phones"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/142498","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/599"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=142498"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/142498\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=142498"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=142498"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=142498"}],"curies":[{"name":"wp","href":"ht
tps:\/\/api.w.org\/{rel}","templated":true}]}}