Comments on: Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction
https://lifeboat.com/blog/2011/02/security-and-complexity-issues-implicated-in-strong-artificial-intelligence-an-introduction

By: John N Phillips (Wed, 30 Mar 2011)
I don't want to be impolite to you technologically advanced innovators and writers, but you do scare me by making the future of humanity seem terribly precarious because of your achievements. Maybe your goal should be to use human ingenuity to back us out of the dangerous predicament we have entered, until we can think up safer ways. To say that's not possible would be to confess to a rather grim reality.

By: Daniel (Sun, 27 Feb 2011)
@Matt, I didn't read anywhere that creating general AI is simple; it obviously is not. The same holds for the robot/agent/software's laws. As for computing power, I don't see why that is an issue. Sure, today's computers are somewhat limited, but that changes nothing about the core operation of the (let's call it) agent.

The subject itself, in my opinion, is odd, as is the division between strong and weak AI. But we do have to provide some rules for agents (like Asimov's, but more specific). Agents themselves need a replication mechanism, for instance through genetic programming, though some functions are too important to ever change, such as the agent's 'guidelines' (i.e. its laws). A second essential function is death: each agent needs to 'listen' on a certain frequency (or call, or whatever) for whether it has to die, i.e. a 'kill switch', so that when something goes wrong we can undo it, and undo its children too, because they would carry the same 'wrong' programming. A minimal sketch of this scheme follows below.
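
A minimal sketch of this scheme in Python (hypothetical; the Agent class, the guidelines tuple, and the mutate hook are illustrative inventions, not any real framework):

    import copy

    # Immutable 'guidelines' (laws): copied verbatim to every descendant,
    # never touched by the genetic-programming step.
    GUIDELINES = ("obey the laws", "respond to the kill signal")

    class Agent:
        def __init__(self, genome, parent=None):
            self.guidelines = GUIDELINES  # fixed across all generations
            self.genome = genome          # the only part allowed to vary
            self.children = []
            self.alive = True
            if parent is not None:
                parent.children.append(self)

        def replicate(self, mutate):
            # Genetic programming varies the genome only; the guidelines
            # and the kill switch are inherited unchanged.
            return Agent(mutate(copy.deepcopy(self.genome)), parent=self)

        def kill(self):
            # The 'kill switch': terminate this agent and, recursively,
            # all of its children, since they share the faulty programming.
            self.alive = False
            for child in self.children:
                child.kill()

    root = Agent(genome=[0, 1, 2])
    child = root.replicate(mutate=lambda g: g + [3])
    root.kill()                 # cascades to every descendant
    assert not root.alive and not child.alive

The design point is simply that the heritable, evolvable part (the genome) is kept separate from the parts that must stay identical in every generation (the guidelines and the kill behavior).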

What I am particularly afraid of is bringing unfinished technology to market too fast, like chatbots or automated telephone systems for 'helpdesks'. Where these are (in my opinion) merely annoying, letting loose an incomplete agent, with no way of tracking it, could be a disaster.

Nonetheless, good article! Always interesting to think about.

By: Matt Mahoney (Sat, 26 Feb 2011)
It is dangerous to think that we could define a simple, stable goal system like http://asimovlaws.com/
AI is friendly if it does what everyone wants, with conflicts somehow resolved. Our global legal system (every law, regulation, and judicial or administrative decision at the national, state, and local levels) is a tiny subset of the definition of "friendly". For AI, you would need to extend this definition down to the level of bit operations. It won't happen by magic; we have to design it.

Fortunately, the capabilities of a seed AI are limited. Intelligence comes from knowledge and computing power, and a self-modifying program gains neither. Improvement has to come from acquiring hardware (robots building robots) and from learning (evolution). The only stable goal system is reproductive fitness.

Computation (and therefore intelligence) is ultimately limited by available energy, according to the laws of thermodynamics. Erasing a bit costs kT ln 2 (Landauer's limit), or about 3 x 10^-21 J at room temperature. Molecular-level computing costs about 10^-17 J per operation according to http://www.foresight.org/nano/Ecophagy.html, which is 100 times more efficient than neurons and a million times more efficient than silicon. But this still limits the rate of self-improvement to not much faster than biology.
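
A quick check of these figures (a sketch in Python; 300 K is an assumed room temperature, and the neuron and silicon costs are back-derived from the ratios stated above, for illustration only):

    import math

    k = 1.380649e-23                  # Boltzmann constant, J/K
    T = 300.0                         # assumed room temperature, K

    landauer = k * T * math.log(2)    # kT ln 2: energy to erase one bit
    print(f"Landauer limit: {landauer:.2e} J")   # ~2.87e-21 J, i.e. ~3e-21

    molecular = 1e-17                 # J/operation, the Ecophagy estimate
    neuron = molecular * 100          # 100x less efficient: ~1e-15 J/op
    silicon = molecular * 1e6         # a million times less: ~1e-11 J/op
    print(f"neuron: {neuron:.0e} J/op, silicon: {silicon:.0e} J/op")
    print(f"molecular computing sits about {molecular / landauer:,.0f}x "
          f"above the thermodynamic floor")

Even at the molecular estimate, each operation costs thousands of times the Landauer minimum, which is the sense in which energy, not software cleverness, bounds the rate of self-improvement.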
