{"id":169789,"date":"2023-08-16T03:23:30","date_gmt":"2023-08-16T08:23:30","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/08\/drawing-stuff-ai-can-really-cook-how-far-can-it-go"},"modified":"2023-08-16T03:23:30","modified_gmt":"2023-08-16T08:23:30","slug":"drawing-stuff-ai-can-really-cook-how-far-can-it-go","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/08\/drawing-stuff-ai-can-really-cook-how-far-can-it-go","title":{"rendered":"Drawing Stuff: AI Can Really Cook! How Far Can It Go?"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/drawing-stuff-ai-can-really-cook-how-far-can-it-go.jpg\"><\/a><\/p>\n<p>We\u2019ve seen a lot about large learning models in general, and a lot of that has been elucidated at this conference, but many of the speakers have great personal takes on how this type of process works, and what it can do!<\/p>\n<p>For example, here we have Yoon Kim talking about statistical objects, and the use of neural networks (transformer-based neural networks in particular) to use next-word prediction in versatile ways. He uses the example of the location of MIT:<\/p>\n<p>\u201cYou might have a sentence like: \u2018the Massachusetts Institute of Technology is a private land grant research university\u2019 \u2026 and then you train this language model (around it),\u201d he says. \u201cAgain, (it takes) a large neural network to predict the next word, which, in this case, is \u2018Cambridge.\u2019 And in some sense, to be able to accurately predict the next word, it does require this language model to store knowledge of the world, for example, that must store factoid knowledge, like the fact that MIT is in Cambridge. And it must store \u2026 linguistic knowledge. For example, to be able to pick the word \u2018Cambridge,\u2019 it must know what the subject, the verb and the object of the preceding or the current sentence is. 
But these are, in some sense, fancy autocomplete systems.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We\u2019ve seen a lot about large language models in general, and much of that has been elucidated at this conference, but many of the speakers have great personal takes on how this type of process works, and what it can do! For example, here we have Yoon Kim talking about statistical objects, and the [\u2026]<\/p>\n","protected":false},"author":578,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,1491],"tags":[],"class_list":["post-169789","post","type-post","status-publish","format-standard","hentry","category-robotics-ai","category-transportation"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/169789","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/578"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=169789"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/169789\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=169789"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=169789"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=169789"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}