From Isaac Asimov’s Three Laws to Elon Musk’s warning about “summoning the demon”, the ethics of AI is a popular conversation starter. However, the general public now feels a sudden urgency about it. Here we look at the economics of ethics and see why the topic has reached a crisis pitch.
Elon Musk: With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.
Isaac Asimov: You know that it is impossible for a robot to harm a human being; that long before enough can go wrong to alter that First Law, a robot would be completely inoperable. It’s a mathematical impossibility.
Somewhere between these two quotes, programming and artificial intelligence fundamentally changed from something rigorous and safe into something squishy and dangerous. For those keeping up with the field, the change happened in the years immediately preceding the defeats of Go champions Lee Sedol and Ke Jie. Computers developed a capacity for visual pattern matching and fuzzy logic that is somewhat hardwired into humans. In us it is all subconscious, and in computers it is similarly untouchable. Now we just train the models, but we don’t understand them. They are called convolutional networks, but “convoluted” would fit just as well.
The fundamental difference between AI systems then and now is how predictable their behaviour is. Previously it was unimaginable for a system to behave in an entirely unexpected way: there might be surprises lurking in your data set, but certainly not in your algorithms. Now convolutional neural networks are so complicated that special tools are needed to visualize even a fraction of the model. Here lies the crux of ethics in our new AI-driven age: we need to understand these systems in order to control them, and so far we are not very good at understanding them.
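To get a feel for the scale of the problem, here is a back-of-the-envelope sketch of how many weights even a small, entirely hypothetical convolutional network carries. The layer shapes below are illustrative assumptions, not any real architecture, but the arithmetic is standard: each convolutional filter has one weight per kernel position per input channel, plus a bias.

```python
# Rough parameter count for a small, hypothetical CNN.
# Layer shapes are illustrative assumptions only.

def conv_params(in_ch, out_ch, k):
    # Each of out_ch filters: k*k*in_ch weights plus one bias.
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_units, out_units):
    # Fully connected layer: one weight per input, plus one bias per output.
    return (in_units + 1) * out_units

layers = [
    conv_params(3, 64, 3),            # 3x3 conv, RGB input, 64 filters
    conv_params(64, 128, 3),          # 3x3 conv, 128 filters
    conv_params(128, 256, 3),         # 3x3 conv, 256 filters
    dense_params(256 * 8 * 8, 1024),  # flattened 8x8 feature map into dense layer
    dense_params(1024, 10),           # 10-way output
]

total = sum(layers)
print(total)  # -> 17159306, roughly 17 million weights
```

Seventeen million numbers, none of which means anything on its own: that is what “reading the model” would entail, and why visualization tooling only ever shows a fraction of it.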
Reducing the surprise factor is paramount to any discussion of diversity or inclusiveness with respect to AI. Yes, we can identify the resulting fallout when a device is unable to recognize Black faces. From a developer’s standpoint, however, it may be much harder to diagnose why that particular algorithm is so racist. Talk of including women and minorities on design teams is a start, but for inclusion to have a measurable effect on outcomes, those people would also need explicit roles in inclusive design and in the training of models. Models are like a meat grinder: what comes out is just a mashed-up version of what went in.
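The meat-grinder point can be made concrete with a deliberately toy sketch. Everything here is an illustrative assumption, not a real face-detection system: a “detector” that learns its acceptance range purely from training data will faithfully reproduce whatever skew that data had, without a single biased line of code.

```python
# Toy illustration of biased-data-in, biased-model-out.
# Names and numbers are hypothetical; this is not a real detector.

def train_detector(samples):
    """Learn an acceptance range from the training samples alone."""
    return min(samples), max(samples)

def detect(model, value):
    lo, hi = model
    return lo <= value <= hi

# Skewed training set: only high brightness values were ever collected.
training_brightness = [0.62, 0.70, 0.75, 0.81, 0.88]
model = train_detector(training_brightness)

print(detect(model, 0.72))  # in-distribution input: True
print(detect(model, 0.35))  # out-of-distribution input: False, silently
```

The algorithm is trivially “correct”, yet the deployed behaviour is discriminatory, and nothing in the code itself reveals that. The failure lives entirely in what went into the grinder.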
With all of this in mind, we may be able to address these issues as they come up, but that will take a committed effort from all parties involved. So far that has not happened. The “demon summoning” is underway, and for the most part it is a lonely ritual.