AI or not AI

In advertisements on TV, in magazines, and in newspapers, it is common to see the word AI used. Far too common, in my opinion. There was a time when AI referred to artificial intelligence: the ability to learn and to make distinctions in ways indistinguishable from a human. There was an accepted test (the Turing test) and an accepted way to apply that test to see whether the barrier had been breached and a machine had actually achieved artificial intelligence.

It seems all of that has now been forgotten, and AI is used in place of what were once called “expert systems”. I have looked at some of what is being advertised, hopeful that AI was real and that we could expect some truly exciting innovations; so far, I have been disappointed. Well, disappointed and heartened, because one of my fears about AI is that as an AI learns, it will develop its own distinctions and criteria for making decisions, and left unsupervised, who knows what could happen.

So far, all of the AI systems I have looked at lack the ability to learn on their own in any meaningful sense. Yes, they can use the rules they were programmed with to categorize, identify, and act in a preprogrammed fashion. All of that is fine and convenient; add to it the ability to consider blocks of data (history) far larger than any human consciously considers, and the results can be quite impressive. I feel that this is the good side of AI: a small portion of what it will take to establish a true AI system, and the safe part.
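
To make the distinction concrete, here is a minimal sketch, in Python, of the kind of rule-based categorization described above. The function name, the rules, and the thresholds are hypothetical and chosen purely for illustration; the point is that every rule was written by a programmer, and nothing here learns or revises its own criteria.

    # A minimal sketch of a preprogrammed, rule-based "expert system".
    # The rules and thresholds below are fixed by the programmer;
    # nothing in this code learns or revises its own criteria.
    # (Function name, rules, and thresholds are hypothetical.)

    def categorize_transaction(amount, country, prior_flags):
        """Apply fixed, preprogrammed rules to label a transaction."""
        if amount > 10_000:
            return "review"   # rule 1: large amounts always get reviewed
        if country not in {"US", "CA", "GB"} and prior_flags > 0:
            return "review"   # rule 2: unfamiliar country plus past flags
        if prior_flags > 3:
            return "block"    # rule 3: repeated flags
        return "approve"      # default: no rule fired

    if __name__ == "__main__":
        # Each decision is traceable to the rule that produced it.
        print(categorize_transaction(12_500, "US", 0))  # review (rule 1)
        print(categorize_transaction(200, "FR", 1))     # review (rule 2)
        print(categorize_transaction(50, "US", 0))      # approve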

Too many times in my working career I have been called upon to replace, repair, or simply audit systems where the humans relying on them no longer understand the criteria those systems use to perform their analyses. Often such a system has become an integral part of the functioning of some business or industry; but without knowing how it makes its choices, how valuable are the data and the choices it recommends?

I don’t know about you, but I want to know how the choices I rely on are decided. An expert system may consider far more data than a human, and if accumulating and weighing that data offers a better chance of reaching a “best choice”, I am all for it, so long as I, or some other reasonable human, is clear on the validity of the criteria and the source data used in the decision-making process. The other elephant in the room, for me, is the old computing adage “garbage in, garbage out”: if faulty information (data) is fed into the decision-making process, the decisions that come out of it cannot be trusted.
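
To illustrate the adage, here is another small Python sketch under the same caveat: the names, fields, and limits are hypothetical. The idea is simply that a decision is only as good as the data it is fed, so the input is checked before any rule is applied, and data that fails the check is referred back to a human rather than silently acted upon.

    # A small "garbage in, garbage out" illustration: validate the input
    # before making a decision, and refuse to decide on data that fails
    # the checks. (Field names and limits are hypothetical.)

    def reading_is_plausible(temperature_c):
        """Reject readings that are clearly faulty."""
        if temperature_c is None:
            return False
        if not (-40.0 <= temperature_c <= 60.0):  # outside a plausible range
            return False
        return True

    def recommend_action(temperature_c):
        """Decide only on validated data; otherwise flag for human review."""
        if not reading_is_plausible(temperature_c):
            return "invalid input: refer to a human"
        if temperature_c > 30.0:
            return "turn cooling on"
        return "no action"

    if __name__ == "__main__":
        print(recommend_action(35.2))   # turn cooling on
        print(recommend_action(999.0))  # invalid input: refer to a human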