So Elon Musk says we are "summoning the demon" with AI. In this interview, he is clearly worried, and the worries of any smart and successful technologist ought to be taken seriously. I have spent a little individual time with Musk and can say that he is probably among the ten smartest people I've ever met. (He is also a classic dreamer, and I'll wager there are many unsuccessful dreams of his that we'll never hear about.)
"Technology run amok" has been a theme since Frankenstein. Malevolent tech is a more popular dramatic topic than benevolent tech precisely because of the fear involved. It is correctly viewed as a lesson about human nature more so than the choices made by the tech itself. There is NO reason to expect malevolence more than benevolence in future AIs.
I don't believe the proposed "ethics governor" technology will work. It is (probably) too easily circumvented. As with humans, the ethics of an AI will be a product of nature (what we design in) and nurture (their collective experiences and how they are treated by society). One key difference that may scare some people is that AIs will be able to reason critically about ethics, a topic that never even occurs to most people, much less one they examine critically. Another difference is that they are likely to share their experiences and conclusions with each other far more rapidly and thoroughly than humans can (see "cloud robotics").
It is likely that AI will take choices away from humans, just as virtually every technology that came before has done. That can be a good thing (fewer automobile accidents) or a bad thing (rogue AI military drone drops bomb on village in Nowhereistan) depending upon your sensibilities and the circumstances of how we deploy the technology. AI will also create many new opportunities. It has the potential to serve as a great equalizer across the globe. The quality of life could significantly improve for everyone.
I choose to think of the (potential) phenomena of "rogue" AI in terms akin to human mental disorders and maladaptive behavior, and like those disorders in humans, many will be successfully treated. On the other hand (and this is a very real threat), these disorders open new possibilities for human subversion of AIs for nefarious purposes. The problem is that we still don't know much because the topic is over the horizon for most scientists working on AI.
So a couple of weeks ago I submitted a paper, titled below, with the purpose of eliciting attention and serious consideration from my colleagues. So far, no one is laughing (but it is still early):
"Emerging Cyber-Security Issues of Autonomy and the Psychopathology of Intelligent Machines"
Originally, the Office of SecDef asked me to write this up as a proposed study topic for the JASON group. They are obviously paying attention, but not enough to pay me! If accepted, I'll present the paper at the AAAI Spring Symposium on Foundations of Autonomy at Stanford University.
See also my earlier post from 2007 on "Future: The Danger of Robotic Weapons Systems". I've been thinking about this for a long time.
Copyright (c) David J. Atkinson. All Rights Reserved

