Comments on: Machine Morality: a Survey of Thought and a Hint of Harbinger
https://lifeboat.com/blog/2013/02/machine-morality-a-survey-of-thought-and-a-hint-of-harbinger

By: Richard Alexander Green (Fri, 29 Mar 2013 18:39:30 +0000)

These worries about how we should treat an AI, and how it might treat us, seem to confuse intelligence with autonomy.
[1] Allow me to propose that intelligence is the ability to solve problems of interest, that is, to suggest satisfactory solutions to them.
[2] That is not the same as being allowed to select the problem of interest.
[3] That is also not the same as being allowed to implement the solution artifacts.
[4] That is also not the same as being allowed to operate the solution artifacts.
– Suppose I turn off (suspend execution of) a chat-bot. Have I killed it or put it to sleep? The answer is neither. I have simply turned off a program. By suspending its execution, I may impair its function; but that is an engineering problem, not a moral one.
– Suppose we program a device with the intent to do harm and also make it autonomous. We already do that with land mines. The point is: such actions are both irresponsible and avoidable.
