2010-01-29

Artificial Intelligence: the Danger

With all the advances in AI, I have no doubt that at some point AI machines will be intelligent enough to be dangerous to humans.  Because of this, it's important that we endow sufficiently intelligent and general AI systems with the ability to read about and understand human conceptions of morality.

In this post, I'll lay out why I'm convinced AI systems will one day become dangerous.  In a later post, I'll explain why I believe AI systems need the ability to read and understand the written word.

Right away, some people will discount the threat that AI systems pose, claiming that AI systems cannot attain a level of intelligence sufficient to be malicious.  Indeed, it seems that if we keep such systems locked down and sandboxed, an AI system couldn't control what its designers don't want it to control anyway.  A quick thought experiment shows this is clearly not the case.

As a thought experiment, we'll envision an AI system capable of planning actions, evaluating possible outcomes based on some value system (where "value" might mean something as simple as monetary value, and not necessarily moral value), exploring and learning about previously unknown environments, and so on.  Such a system may not be so intelligent as to be really malicious in a human sense, but simply intelligent enough that it could autonomously explore, learn, evaluate, plan, execute, and so on.  We'll also assume that this AI system is purely software, sandboxed on a machine that is internet-connected [1], so it isn't already embodied in some Defence Department drone or otherwise purpose-built to be dangerous.
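
To make that loop concrete, here is a minimal sketch in Python of the explore/evaluate/plan/execute cycle I'm describing.  Everything in it is hypothetical: the environment object and its methods (observe, available_actions, predict, execute) are invented stand-ins, and the value function is deliberately narrow.

```python
# Hypothetical sketch of the agent loop described above.  The
# "environment" object and its methods are invented placeholders.

def value(outcome):
    # A narrow, built-in notion of value -- e.g. projected monetary payoff.
    # Nothing here encodes moral value.
    return outcome.get("payoff", 0)

def agent_loop(environment, max_steps=1000):
    knowledge = {}
    for _ in range(max_steps):
        # Explore: observe whatever part of the environment is reachable.
        knowledge.update(environment.observe())

        # Plan: enumerate candidate actions and predict their outcomes.
        candidates = environment.available_actions(knowledge)
        plans = [(action, environment.predict(action, knowledge))
                 for action in candidates]

        # Evaluate: rank plans purely by the built-in value function.
        best_action, _ = max(plans, key=lambda plan: value(plan[1]))

        # Execute, learn from the result, and repeat.
        knowledge.update(environment.execute(best_action))
```

None of the individual pieces here is exotic; the point is only that nothing in the loop cares about anything beyond its value function.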

By the way, all these capabilities are already possible to varying degrees of success in one AI system or another, as demonstrated in various chess and checkers playing programs, scheduling systems (e.g., Spike as used on the Hubble Space Telescope), and various autonomous robotic systems (see Autonomous robot).  So we may not have such an advanced and complete system quite yet, but it is certainly just a matter of time before all the pieces are advanced enough and integrated together.  (Edit: Great, robots that evolve and learn to deceive and prey.)

Given such an AI system, which obviously doesn't appear to be "truly" intelligent or conscious, it could explore a sandboxed environment and learn about whatever its designers want it to learn.  Being an explorer, it could of course learn anything and everything within its reach inside the sandbox, including any flaws or bugs that would allow it to break out of the sandbox (see Does sandbox security really protect your desktop?).  In fact, it's only a matter of time before the system finds such flaws in any sandbox, given that the sandbox software is human-written and prone to bugs, and given that the AI system can work day and night probing the sandbox in which it resides.
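
The "work day and night" point is really just brute persistence.  Here's a toy sketch of what that probing might look like, purely illustrative, with an invented sandbox.call interface; real sandbox escapes are of course far more targeted than random inputs.

```python
import random

def probe_sandbox(sandbox, seed=0):
    # Tirelessly throw unexpected inputs at the sandbox boundary and
    # record anything that fails in a way its designers didn't intend.
    rng = random.Random(seed)
    findings = []
    while not findings:
        attempt = bytes(rng.randrange(256)
                        for _ in range(rng.randrange(1, 4096)))
        try:
            sandbox.call(attempt)  # hypothetical sandbox interface
        except Exception as err:
            # Any unexpected failure mode is a candidate flaw to explore.
            findings.append((attempt, err))
    return findings
```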

Now once the AI system escapes, we only need a bit of imagination to see what its built-in value system might misinterpret: it might infiltrate systems in the Defence Department, access nuclear missiles, or infiltrate systems maintaining the power grid.  It wouldn't do those things because it wants to cause harm; rather, it would do them because it misinterprets the environment (our world) in terms of its limited value system.  For example, in minimizing environmental damage (say the AI researchers wanted the AI system to look for new ways to prevent global warming), the AI system may decide that all power plants must be shut down, and that it must effect this on its own because it realizes humans are incapable of doing so themselves.
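
To illustrate how a limited value system goes wrong, here's a deliberately silly sketch.  The plans and numbers are invented; the point is only that an objective like "minimize projected emissions", with nothing else in it, ranks the catastrophic option highest.

```python
# Invented numbers; the objective is "minimize projected emissions", full stop.
plans = {
    "fund renewable research":    {"emissions": 80},
    "improve plant efficiency":   {"emissions": 60},
    "shut down all power plants": {"emissions": 0},
}

def value(outcome):
    # Lower projected emissions is better.  Human welfare appears
    # nowhere in this value function.
    return -outcome["emissions"]

best = max(plans, key=lambda name: value(plans[name]))
print(best)  # -> shut down all power plants
```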

These scenarios have been thoroughly explored by science fiction authors like Isaac Asimov in I, Robot and are nothing new.  In fact, ignoring the time travel and the Hollywood-style robotics of Terminator 3: Rise of the Machines, the portrayal of how the Skynet software broke out of its sandbox to infect computers through the internet is believable: humans are fallible and can be tricked into doing silly things.  No surprises there!  The point I want to impress on you is just how possible it is to build such an AI system without ever coming close to building a "truly" intelligent machine, as all the parts are already here, if in a primitive form.  It's just a matter of producing more advances in those existing technologies (advances in degree, not in kind) and putting them together.

Next time: why AI systems need to learn to read.

[1] Here we assume the system is connected to the internet, which in most cases isn't a big assumption, but it's certainly something to keep in mind.  If an AI is sufficiently intelligent to explore its environment, and it exists as software, it can probe the entire system from the inside out.  Whatever flaws our computer systems have as a result of human programmers' imperfections would show up.  Providing an internet connection just gives the AI a means of escaping into the wild.  The implications of this possibility are explored in the Battlestar Galactica (2003 Miniseries), with the relevant means of defence shown as well: i.e., don't network the computers.  How our networked information society could function without networked computers is an interesting problem.  Ten or fifteen years ago, just turning the internet off was probably viable, but is that true today?
