{"id":6652,"date":"2013-02-08T04:28:53","date_gmt":"2013-02-08T12:28:53","guid":{"rendered":"http:\/\/lifeboat.com\/blog\/?p=6652"},"modified":"2017-04-29T15:46:17","modified_gmt":"2017-04-29T22:46:17","slug":"machine-morality-a-survey-of-thought-and-a-hint-of-harbinger","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2013\/02\/machine-morality-a-survey-of-thought-and-a-hint-of-harbinger","title":{"rendered":"Machine Morality: a Survey of Thought and a Hint of Harbinger"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog\/wp-content\/uploads\/2013\/02\/KILL.THE_.ROBOTS-e1360325314348.jpg\"><\/a><strong><br \/>\n<span style=\"\">The Golden Rule is Not for Toasters<\/span><\/strong><br \/> Simplistically nutshelled, talking about machine morality is picking apart whether or not we\u2019ll someday have to be nice to machines or demand that they be nice to us.<\/p>\n<p>Well, it\u2019s always a good time to address human &amp; machine morality vis-\u00e0-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and\/or consciousness that, if manifested, would wholly justify consideration thereof.<\/p>\n<p>Uhh\u2026 yep!<\/p>\n<p>But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:<br \/>\n<span style=\"\"><em>\u201cWhy bother, it\u2019s never going to happen.\u201c<\/em><\/span><br \/> That\u2019s tired and lame.<\/p>\n<p>One voice, one study, or one robot fetishist with a digital bullhorn \u2014 one ain\u2019t enough. So, presented and recommended here is a broad-based overview, a selection of the past year\u2019s standout pieces on machine morality.<img decoding=\"async\" title=\"More...\" alt=\"\" src=\"\" \/>The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.<br \/> Let\u2019s then have perspective:<\/p>\n<p><span style=\"\"><strong>Building a Brain \u2014 Being Humane \u2014 Feeling our Pain \u2014 Dude from the NYT<\/strong><\/span><br \/>\n<strong>\u2022 <\/strong>February 3, 2013 \u2014 Human Brain Project: <em><a href=\"http:\/\/www.humanbrainproject.eu\/\" target=\"_blank\">Simulate One<br \/>\n<\/a><\/em>Serious Euro-Science to simulate a human brain. Will it behave? Will we?<\/p>\n<p><strong>\u2022 <\/strong>January 28, 2013 \u2014 NPR: <em><a href=\"http:\/\/www.npr.org\/blogs\/health\/2013\/01\/28\/170272582\/do-we-treat-our-gadgets-like-they-re-human\" target=\"_blank\">No Mercy for Robots<br \/>\n<\/a><\/em>A study of reciprocity and punitive reaction to non-human actors. Bad robot.<\/p>\n<p><strong>\u2022 <\/strong>April 25, 2012 \u2014 IEEE Spectrum: <em><a href=\"http:\/\/spectrum.ieee.org\/automaton\/robotics\/artificial-intelligence\/study-shows-that-humans-attribute-morals-and-emotions-to-robots\" target=\"_blank\">Attributing Moral Accountability to Robots<br \/>\n<\/a><\/em>On the human expectation of machine morality. 
They should be nice to me.

• December 25, 2011 — NYT: [The Future of Moral Machines](http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/)
Engineering (at least functional) machine morality. Broad strokes, NYT-style.

**Expectations More Human than Human?**

Now, of course you're going to check out those pieces you just skimmed over, after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do — you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant; it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time, we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent — even an only vaguely human robot.

Well, what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: [Leaping Across Mori's Uncanny Valley: Androids Probably Won't Creep Us Out](http://anthrobotic.com/2011/12/06/leaping-across-moris-uncanny-valley-androids-probably-wont-creep-us-out/#.URO0jFpAR20))

Even now, should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one *morally* abide harm done to one's marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine *itself* abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating "do no harm," could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?
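A quick detour for the nerds: here's a minimal, purely hypothetical sketch of that last scenario. Everything in it (the action names, the harm numbers, the `least_harmful` helper) is invented for illustration; no real robot reasons this way. The point is just how a naive harm-minimizing rule flips from passivity to retaliation the instant the machine counts itself among the morally relevant parties.

```python
# Toy sketch, all values invented: a naive "minimize harm" moral rule.
# Expected harm (made-up scalars in [0, 1]) each action inflicts per party.
ACTIONS = {
    "do_nothing":     {"attacker": 0.0, "machine": 1.0},  # machine gets smashed
    "block_attacker": {"attacker": 0.3, "machine": 0.1},  # scuffle, both dinged
}

def least_harmful(actions, moral_patients):
    """Return the action with the lowest total expected harm,
    counting only the parties deemed morally relevant."""
    def total_harm(action):
        return sum(harm for party, harm in actions[action].items()
                   if party in moral_patients)
    return min(actions, key=total_harm)

# Machine excluded from moral consideration: passivity wins.
print(least_harmful(ACTIONS, {"attacker"}))             # -> do_nothing

# Machine included: "do no harm" now mandates stopping the attacker.
print(least_harmful(ACTIONS, {"attacker", "machine"}))  # -> block_attacker
```

Same rule, same numbers; the only thing that changed is who counts as a moral patient. Which is, of course, exactly the question nobody has answered.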
Yeah, these hypotheticals can go on forever, but it's clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in... immorality.

**Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.**

There's an argument that actually *needing* to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before ([and it was awesome](http://anthrobotic.com/2011/08/25/can-a-computer-be-as-intelligent-as-a-human-or-asking-the-wrong-dumb-question-get-it/)). See, outright dismissal isn't a dumb argument because a self-aware machine, or something close enough for us to regard as such, is without doubt going to happen; it's dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we're getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it'll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, initially a 15-year international project, was completed 5 years ahead of schedule due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff, like, you know, gets better a lot faster these days. Just sayin'.

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with...

*"Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it."*
- or -
*"Hey, meatsack, don't touch me or I'll reformat your squishy face!"*

Choose your own adventure!

[HUMAN BRAIN PROJECT](http://www.humanbrainproject.eu/)
[NO MERCY FOR ROBOTS — NPR](http://www.npr.org/blogs/health/2013/01/28/170272582/do-we-treat-our-gadgets-like-they-re-human)
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE](http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/study-shows-that-humans-attribute-morals-and-emotions-to-robots)
[THE FUTURE OF MORAL MACHINES — NYT](http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/)

*This piece originally appeared at [Anthrobotic.com](http://anthrobotic.com/?p=7760) on February 7, 2013.*