This article was originally adapted from a podcast, which you can check out here.
Research indicates that, in many domains, algorithms trained on high-quality historical data predict the future better than human forecasters can.
Despite this, people are susceptible to an unfortunate cognitive bias called algorithm aversion: a costly preference for a forecast from a human over a more accurate forecast from a statistical or machine-learning model.
People are especially averse to relying on an algorithm's forecasts after they've seen it perform, even when they've watched the algorithm outperform a human forecaster.
In research published in 2015, Berkeley Dietvorst and his colleagues at the University of Pennsylvania observed that this erroneous aversion arises because people lose confidence in an algorithmic forecaster more quickly than in a human forecaster, even when both make the same mistake.
Now that we’re aware of this unfair cognitive bias against machines, my take-home message for you today is to check yourself when you notice you’re being wary of an algorithm. If you can demonstrate on validation data that the algorithm exceeds human accuracy, and you’re deploying it in a scenario where the training data are representative of the production data, then you should feel comfortable trusting the model’s predictions. If you’re working with other professionals, perhaps clients, who are sceptical that your model can be trusted, you can gently let them know that they may be experiencing a commonplace, but nevertheless unfounded, aversion to algorithms.
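To make that validation check concrete, here is a minimal sketch in Python. The forecasts and outcomes are made-up illustrative numbers (not data from any study), and mean absolute error stands in for whatever accuracy metric suits your problem:

```python
# A minimal sketch of the validation check described above.
# All numbers below are hypothetical, purely for illustration.

def mean_absolute_error(forecasts, outcomes):
    """Average absolute gap between forecasts and realized outcomes."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Held-out validation data (hypothetical).
actual_outcomes = [12.0, 15.0, 11.0, 18.0, 14.0]
model_forecasts = [12.5, 14.0, 11.5, 17.0, 14.5]
human_forecasts = [10.0, 17.5, 13.0, 15.0, 16.0]

model_mae = mean_absolute_error(model_forecasts, actual_outcomes)
human_mae = mean_absolute_error(human_forecasts, actual_outcomes)

if model_mae < human_mae:
    print(f"Model MAE {model_mae:.2f} beats human MAE {human_mae:.2f}: "
          "the model's forecasts are the defensible choice here.")
else:
    print("The human baseline wins on this validation set.")
```

The point is not the metric itself but the habit: compare the model against the human baseline on data it was not trained on, and let that comparison, rather than a gut reaction to a single visible error, decide whom to trust.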
If you’d like to learn more about this phenomenon, check out the paper Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err from the Journal of Experimental Psychology: General. There is a freely available version of the paper from the Penn Libraries Scholarly Commons.