{"id":35405,"date":"2017-03-17T03:42:21","date_gmt":"2017-03-17T10:42:21","guid":{"rendered":"http:\/\/lifeboat.com\/blog\/2017\/03\/scattered-thoughts-on-self-awareness-and-ai"},"modified":"2017-04-24T18:44:46","modified_gmt":"2017-04-25T01:44:46","slug":"scattered-thoughts-on-self-awareness-and-ai","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2017\/03\/scattered-thoughts-on-self-awareness-and-ai","title":{"rendered":"Scattered thoughts on self-awareness and AI"},"content":{"rendered":"<p><a class=\"blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/scattered-thoughts-on-self-awareness-and-ai.jpg\"><\/a><\/p>\n<p>A few ideas on self-awareness and self-aware AIs.<\/p>\n<hr>\n<p>I\u2019ve always been a fan of androids as portrayed in <a href=\"https:\/\/en.wikipedia.org\/wiki\/Data_(Star_Trek)\" target=\"_blank\">Star Trek<\/a>. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it is just a little bit weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same to a machine thrilling, but that\u2019s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need to have its diaper changed every few hours might well be a factor at play here.) <\/p>\n<p>Improvements in the field of AI are pretty much commonplace these days, though we\u2019re not yet at the point where we could talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids who are self-aware and have emotions, exactly like people, with all the advantages of being a machine\u2014such as mental multitasking, large computational power, and more efficient memory. 
While I still like the idea, nowadays I wonder whether it is actually feasible or sensible.<\/p>\n<p>Don\u2019t worry\u2014I\u2019m not going to give you a sermon on the \u2018dangers\u2019 of AI or anything like that. That\u2019s the opposite of my stance on the matter. I\u2019m not making a moral argument either: assuming you can build an android that has the entire spectrum of human emotions, doing so is, morally speaking, no different from having a child. You don\u2019t (and can\u2019t) ask the child beforehand whether it wants to be born, or whether it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you\u2019re \u2018giving life to someone else\u2019. Before you make them, there\u2019s no one to give anything to. You\u2019re not doing anyone a favour, certainly not your yet-to-be-conceived potential baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A few ideas on self-awareness and self-aware AIs. I\u2019ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. 
I admit it is just a little bit weird that [\u2026]<\/p>\n","protected":false},"author":418,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[30,6],"tags":[],"class_list":["post-35405","post","type-post","status-publish","format-standard","hentry","category-ethics","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/35405","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/418"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=35405"}],"version-history":[{"count":2,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/35405\/revisions"}],"predecessor-version":[{"id":48493,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/35405\/revisions\/48493"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=35405"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=35405"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=35405"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}