Can we accept the inhuman side of AI?

Peter Kullmann
4 min read · Jun 24, 2021


Why some AI applications will never become reality

Picture by Gerd Altmann on Pixabay

Artificial Intelligence (AI) is finding many interesting and helpful applications of all kinds. However, there seems to be a barrier for those applications that could be real game changers. Cancer diagnosis based on AI? Not quite good enough. Automatic text generation? Prone to prejudice. Autonomous driving? Almost there, and it has been for ten years now.

Errare artificialis est

An intrinsic characteristic of AI methods based on machine learning is that they will never be perfect: they have a certain error rate. At first sight, this is nothing to worry about, as humans also have a certain error rate. So is all good if the error rate of an AI system is smaller than that of real persons? Not quite; there are a number of problems with this approach.

The reference class problem

A common approach to rating the performance of an AI system is to compare it to the accuracy of humans. This approach is derived from the idea of the so-called Turing Test. The problem with it is that every person is different, so one has to select an adequate and acceptable reference performance. This could be the median of a representative reference group, the world champion in this matter, or an arbitrary person from the street. In any case, the selection seems arbitrary. In publications, of course, authors tend to make their new method look good by choosing a reference that is sufficiently worse.
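To make this concrete, here is a minimal sketch in Python. All accuracy numbers are made up purely for illustration; the point is only that the verdict "better than human" flips depending on which reference is chosen.

```python
# Illustrative only: all accuracy numbers are invented to show how the
# verdict "better than human" depends entirely on the chosen reference.
ai_accuracy = 0.92

human_references = {
    "median of a representative group": 0.90,
    "world champion in this matter":    0.97,
    "arbitrary person from the street": 0.78,
}

for reference, human_accuracy in human_references.items():
    verdict = "beats" if ai_accuracy > human_accuracy else "loses to"
    print(f"AI ({ai_accuracy:.0%}) {verdict} the {reference} ({human_accuracy:.0%})")
```

The same system "beats humans" or "loses to humans" depending on nothing but the baseline the authors happened to pick.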

Even if the system's performance is compared to that of other systems, one essential question remains:

How good is good enough?

This leads us directly to the next problem.

The perfect machine problem

For many centuries, mankind has been constructing machines to relieve itself of hard work, to get things done faster, or to get things done better. A machine is expected to yield a benefit in the form of saved time, better quality, or ideally both. Since a machine normally uses well-defined mechanisms, it is expected to produce reproducible results of consistently high quality.

With this mindset, we naturally expect computer systems to exhibit the same properties. We expect reproducible, high-quality (if not perfect) results. While a certain error rate may be acceptable for many applications, this is not true in general. For some applications it is hardly acceptable when an automated system produces wrong results. In the case of self-driving cars, for example, it is hardly acceptable if the system misinterprets a traffic scene and a deadly accident results.

When we use a computer system, shouldn’t we expect it to produce reproducible, high-quality results?

A problem in itself is that many of the errors AI systems make could easily have been avoided by humans. This brings us to the third problem:

The unacceptable error problem

Machine learning algorithms are designed to automatically extract the relevant information from the training material to fulfill their task, e.g. classification. The features that these algorithms decide to use will most probably not be the same as those a natural person would pick. This makes it really hard for users to understand and accept erroneous behavior of AI systems.

While it may be acceptable that an AI system has a certain error rate, it is hard to accept errors that a real person wouldn’t have made. For certain applications this may be a no-go, even if the system in general has a better error rate than humans would have. Coming back to autonomous driving: current field tests show that self-driving cars already have far better accident statistics than humans, but the few accidents that did happen might not have happened with human drivers.
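The following small Python sketch illustrates why a lower overall error rate does not settle the question. The outcomes for the ten hypothetical traffic scenes are invented for illustration only; what matters is the set of cases the AI gets wrong that a human would have handled correctly.

```python
# Illustrative only: invented outcomes for ten hypothetical traffic scenes.
# 1 = handled correctly, 0 = error.
human = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # 3 errors out of 10
ai    = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]   # 2 errors out of 10

human_error_rate = human.count(0) / len(human)
ai_error_rate = ai.count(0) / len(ai)

# Errors the AI makes on cases a human would have handled correctly.
inhuman_errors = sum(1 for h, a in zip(human, ai) if h == 1 and a == 0)

print(f"Human error rate: {human_error_rate:.0%}")                  # 30%
print(f"AI error rate:    {ai_error_rate:.0%}")                     # 20%
print(f"AI errors a human would not have made: {inhuman_errors}")   # 2
```

Here the AI is better on average, yet every one of its errors occurs on a case the human got right.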

Can we accept errors of AI systems that a human wouldn’t have made?

What adds to the problem is that for many AI methods it is extremely hard to comprehend their decision process. The systems lack explainability. This also means that, in general, a specific problem can’t just be fixed: we have to live with it or discard the whole system.

The future of AI

I am not an AI opponent. Quite the opposite: I am convinced that there is huge potential for the application of AI methods. However, I believe there are certain limits we will have to accept when we want AI systems to make decisions for us that can have severe consequences.

And I think this could lead to the insight that AI just shouldn’t be used for some applications.
