{"id":146908,"date":"2022-09-24T01:24:29","date_gmt":"2022-09-24T06:24:29","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/09\/musing-on-understanding-ai-hugo-de-garis-adam-ford-michel-de-haan"},"modified":"2022-09-24T01:24:29","modified_gmt":"2022-09-24T06:24:29","slug":"musing-on-understanding-ai-hugo-de-garis-adam-ford-michel-de-haan","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/09\/musing-on-understanding-ai-hugo-de-garis-adam-ford-michel-de-haan","title":{"rendered":"Musing on Understanding &amp; AI \u2014 Hugo de Garis, Adam Ford, Michel de Haan"},"content":{"rendered":"<p><iframe loading=\"lazy\" title=\"Musing on Understanding &amp; AI - Hugo de Garis, Adam Ford, Michel de Haan\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/tyrp9WDAdng?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>Started out as an interview, but ended up being a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan.<br \/> 00:11 The concept of understanding under-recognised as an important aspect of developing AI<br \/> 00:44 Re-framing perspectives on AI \u2014 the Chinese Room argument \u2014 and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)<br \/> 04:23 Is there a difference between generality in intelligence and understanding? (And, by extension, between AGI and artificial understanding?)<br \/> 05:08 Aha! moments \u2014 where the penny drops \u2014 what\u2019s going on when this happens?<br \/> 07:48 Is there an ideal form of understanding?
Coherence &amp; debugging \u2014 aha moments.<br \/> 10:18 Webs of knowledge \u2014 contextual understanding.<br \/> 12:16 Early childhood development \u2014 concept formation and navigation.<br \/> 13:11 The intuitive ability for concept navigation isn\u2019t complete.<br \/> Is the concept of understanding a catch-all?<br \/> 14:29 Is it possible to develop AGI that doesn\u2019t understand? Are generality and understanding the same thing?<br \/> 17:32 Why is understanding (the nature of) understanding important?<br \/> Is understanding reductive? Can it be broken down?<br \/> 19:52 What would the most basic primitive understanding be?<br \/> 22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?<br \/> Approaches \u2014 engineering, and copying the brain.<br \/> 24:34 Is common sense the same thing as understanding? How are they different?<br \/> 26:24 What concepts do we take for granted around the world \u2014 which, when strong AI comes about, will dissolve into illusions and then tell us how they actually work under the hood?<br \/> 27:40 Compression and understanding.<br \/> 29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding, and if so, how?<br \/> 31:07 A hierarchy of intelligence \u2014 data, information, knowledge, understanding, wisdom.<br \/> 33:37 What is wisdom? Experience can help situate knowledge in a web of understanding \u2014 is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp rehashings of existing wisdom in the form of trashy self-help literature.<br \/> 35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate \/ novel predictions?<br \/> 36:00 Is understanding like high-resolution, carbon-copy-like models that accurately reflect true nature, or a mechanical process?<br \/> 37:04 Does understanding come in gradients of topologies?
Are there degrees, or is it just on or off?<br \/> 38:37 What comes first \u2014 understanding or generality?<br \/> 40:47 Minsky\u2019s \u2018Society of Mind\u2019<br \/> 42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?<br \/> 48:15 Anthropomorphism in AI literature.<br \/> 50:48 Deism \u2014 James Gates and error correction in supersymmetry.<br \/> 52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?<br \/> 52:35 The Drake equation, and the concept of the Artilect \u2014 does this make Deism plausible? What about the Fermi Paradox?<br \/> 55:06 Hyperintelligence is tiny \u2014 the transcension hypothesis \u2014 therefore civs go tiny \u2014 an explanation for the Fermi Paradox.<br \/> 56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?<br \/> 01:01:52 The Great Filter and the Fermi Paradox.<br \/> 01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics\/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)<br \/> 01:03:09 Does AlphaGo understand Go, or Deep Blue understand chess? Revisiting the Chinese Room argument.<br \/> 01:04:23 More on behavioral tests for AI understanding.<br \/> 01:06:00 Zombie machines \u2014 David Chalmers\u2019 zombie argument.<br \/> 01:07:26 Complex enough algorithms \u2014 is there a critical point of complexity beyond which general intelligence likely emerges?
Or understanding emerges?<br \/> 01:08:11 Revisiting behavioral \u2018Turing\u2019 tests for understanding.<br \/> 01:13:05 Shape sorters and reverse shape sorters.<br \/> 01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity \u2014 understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries\u2026<br \/> 01:15:11 Neural nets and adaptivity.<br \/> 01:16:41 AlphaGo documentary \u2014 worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?<\/p>\n<p>Filmed in the Dandenong Ranges in Victoria, Australia.<\/p>\n<p>Many thanks for watching!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Started out as an interview, but ended up being a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan.
00:11 The concept of understanding under-recognised as an important aspect of developing AI 00:44 Re-framing perspectives on AI \u2014 the Chinese Room argument \u2014 and how can consciousness or understanding arise from [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[32,12,41,1965,2229,219,6],"tags":[],"class_list":["post-146908","post","type-post","status-publish","format-standard","hentry","category-education","category-existential-risks","category-information-science","category-mapping","category-mathematics","category-physics","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/146908","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=146908"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/146908\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=146908"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=146908"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=146908"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}