{"id":229808,"date":"2026-01-26T02:06:16","date_gmt":"2026-01-26T08:06:16","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2026\/01\/ai-house-davos"},"modified":"2026-01-26T02:06:16","modified_gmt":"2026-01-26T08:06:16","slug":"ai-house-davos","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2026\/01\/ai-house-davos","title":{"rendered":"AI House Davos"},"content":{"rendered":"<p><\/p>\n<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/pJyoqapCRZE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope;\n   picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>Embodied AI refers to AI integrated into physical systems that can perceive, reason, and act in the real world through sensors and actuators, like robots and autonomous vehicles. This fireside conversation explores how advances in AI like vision\u2013language\u2013action models are redefining what machines can understand and do, especially as we move from navigation to mobile manipulation. The speakers discuss how quickly today\u2019s rapid progress in AI might transfer to robotics and embodied systems, and how soon we can expect to see these technologies making a tangible impact on our daily lives.<\/p>\n<p>Speakers.<br \/> Yann LeCun (Advanced Machine Intelligence, Founder and Executive Chairman)<br \/> Marc Pollefeys (ETH Z\u00fcrich and Faculty, ETH AI Center, Professor) <\/p>\n<p>\u00a9 AI House Davos 2026<br \/> Founders &amp; Strategic Partners:<br \/> ETH AI Center, Merantix, G42, Hewlett Packard Enterprise, EPFL AI Center, The University of Tokyo.<\/p>\n<p>Presenting Partners:<br \/> KPMG.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Embodied AI refers to AI integrated into physical systems that can perceive, reason, and act in the real world through sensors and actuators, like robots and autonomous vehicles. 