
February 04, 2019



Let's stop talking about "Artificial Intelligence." To me, you should not be allowed to use that term until computers reach the phase where they can use deep learning technology (still in its infancy) to associate meaning with the various icons and tokens in the world, and to integrate the other "smart" technologies you mention: machine learning, computer vision, natural language text and speech processing, and robotic process automation. I would add "comparison technology" to the list. Also, and I agree with Peter Gruelich on this one, once you get into deep learning technologies, computers act as if they have "values," which are basically the choices we make in our lives as humans based on the meanings we associate with various words and images.

Computers already do a poor job of interpreting language. Before the concerns about Amazon Echo recording our conversations, we used Alexa to build our shopping list; it was probably our best use of it. Sadly, we had to add each item to the list one by one. "Peanut butter, jelly, and bread" in a single request was not understood. When we started getting random laughs from our Alexa, it was time to disconnect it.

Who is the arbiter of the meaning of an image of an "old person," for example? If driven primarily by business interests, a computer could decide that this is an inefficient, worthless human who provides no value after their working years and should be eliminated. An AI system built by Americans could revere an image of the American flag, while an AI system built by Iran could despise it. The act of burning that flag could lead each country's system to conclude that the opposing country harbors terrorists. We already have enough humans making those decisions.

I think the complexities are too great to trust "deep learning" AI computers with the job of assigning value and making decisions that impact people's lives. Given that most of the funding comes from the military, it concerns me even more: a robot might decide, for example, that killing innocents is acceptable in the acquisition and elimination of a target.

As humans, we can't even agree on a common set of values for treating each other humanely, so once we add computers, we just accelerate the process. And eventually, the AI decides it doesn't need us.


Humans take at least 15-20 years to "learn and assimilate context"; then, based on our personality, we either try to dominate, manipulate, or adapt to our environment. While I'm happy to imagine computers that would "adapt" to us, I suspect the goal of military and commercial interests will be to dominate and/or manipulate humans and their environment.

Over those 15-20 years, humans learn literally millions of contextual cues for living in the world and in society, from putting our hand over our heart for the Pledge of Allegiance, to standing when a bride walks down the aisle at a wedding, to pulling over to the side of the road when police or emergency vehicles approach. How many of these are our AI systems missing? Like the Tesla that could not distinguish a gray truck from the sky, how much risk are we assuming with these systems? When they fail, will they simply shut down safely, or continue to attempt their function out of control?

I prefer to speak in terms of "building experts" in domains like self-driving cars, or electronic commerce, or wedding planning, or law enforcement.


One final comment. The AI that most impressed me recently in speech recognition was Fidelity Investments'. I called their 1-800 number, and the system prompted me to enter the 5-digit extension of the person I needed to talk to. It was already responding to my "yes" and "no" answers, so I said "call Edmund in the Frisco office." The AI system found the correct office (which was actually in Plano, an adjacent suburb, not Frisco) and rang the number of a colleague of Edmund's, since Edmund was on the phone. The system was able to identify the office I was looking for FROM GEOGRAPHICAL CONTEXT, recognize that it could not connect me to the person, and route me to an appropriate substitute IN THE SAME OFFICE! THAT is the kind of smart technology I want.
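The routing behavior described above, resolving an office from the caller's geographic context and then falling back to an available colleague in the same office, can be sketched roughly as follows. This is purely an illustration; the office names, extensions, and helper function are all invented, and Fidelity's actual system is of course far more sophisticated.

```python
# Hypothetical sketch of the routing logic described above. All names,
# offices, and extensions are invented for illustration.

OFFICES = {
    "Plano": {
        "Edmund": {"ext": "10412", "available": False},  # on the phone
        "Grace": {"ext": "10417", "available": True},
    },
    "Boston": {
        "Avery": {"ext": "20110", "available": True},
    },
}

# Adjacent suburbs that should resolve to the same office
# (the "geographical context" step).
GEO_ALIASES = {"Frisco": "Plano", "Richardson": "Plano"}

def route_call(person: str, spoken_location: str) -> str:
    """Return the extension to ring, preferring the named person."""
    office = GEO_ALIASES.get(spoken_location, spoken_location)
    staff = OFFICES.get(office, {})
    target = staff.get(person)
    if target and target["available"]:
        return target["ext"]
    # Fall back to an available colleague in the same office.
    for name, info in staff.items():
        if name != person and info["available"]:
            return info["ext"]
    raise LookupError(f"No one available in the {office} office")

# "Call Edmund in the Frisco office" -> Plano office, Edmund busy,
# so we get a colleague's extension instead.
print(route_call("Edmund", "Frisco"))  # prints 10417
```

The two steps that impressed me, disambiguating the location and substituting a reachable person in the same office, correspond to the alias lookup and the fallback loop.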

