Man-machine interaction will be one of the biggest problems we have to solve in the future. Algorithms already sit behind many industry-standard processes and are steadily becoming synonymous with the rise of digitization. They calculate the weather, optimize logistics, develop new products, and create virtual gaming worlds behind the scenes. And as we continue down this path toward trillions of IoT devices and sensors, with data growing exponentially to around 35 ZB by 2025, we will see ever more data-crunching algorithms and automated systems. In other words, we will increase complexity – and with it the risk of errors running through our systems and piling up into chaotic, potentially damaging effects.
We know from the past that these worries are not without merit. Despite tight controls, a poorly executed automated sell order by a big U.S. asset manager triggered the 2010 “flash crash.” And in 2012, Knight Capital lost about $440 million in just 30 to 45 minutes of trading.
Are our algorithms the ghosts in our machines, as the Financial Times phrased it? Will it happen again – and will they eventually reach into our supply chains too?
Humans do not think in terms of algorithms
Humans are a big part of the problem. We create the algorithms, their inner workings, and their underlying assumptions; yet we do not really understand them. We design them to do exactly what we specify, of course, but our brains find it hard to cope with programs that do only what they are told.
In our human world, following rules is a somewhat ambitious undertaking. Our social rules are rather vague and leave much room for interpretation, and we neither follow a rule ourselves nor expect a system to follow it when it is apparent that doing so will create bigger problems. Yet we usually do not think of all the exceptions to a given rule while designing an algorithm.
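The gap between human and machine rule-following can be sketched in a few lines. The trading rule and threshold below are entirely hypothetical; the point is only that a coded rule applies literally in situations its designers never anticipated, while a human quietly makes an exception:

```python
# Toy illustration (hypothetical rule, not a real trading system):
# both follow "sell when the price drops 10% below average",
# but only the human recognizes the unanticipated exception.

def human_would_sell(price: float, avg_price: float, market_panic: bool) -> bool:
    # A trader applies the same rule but breaks it during a panic,
    # sensing that a forced sale would deepen the crash.
    if market_panic:
        return False
    return price < 0.9 * avg_price

def algorithm_sells(price: float, avg_price: float, market_panic: bool) -> bool:
    # The coded rule has no exception for panic -- nobody thought of one.
    return price < 0.9 * avg_price

# During a panic, with prices 20% below average:
print(human_would_sell(80, 100, True))   # False
print(algorithm_sells(80, 100, True))    # True
```

The two functions differ by a single guard clause, which is exactly the kind of exception we "usually do not think of" until the rule has already fired.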
We expect algorithms to behave the same as we do, only smarter. As Daniel Kahneman made clear in his book “Thinking, Fast and Slow,” humans run on biases and prejudices that require significant training – and probably more – to overcome.
Is it possible that we are replacing human stupidity with robotic stupidity, as the Financial Times suggested recently?
Algorithms have driven the cost of trading in the financial industry down to nearly zero. For businesses, this means a huge cost advantage in letting algorithms – instead of numerous human professionals – do the job. And of course, they are faster as well.
The problem starts when several algorithms impact their markets in ways that human analysts can no longer understand or assess. As the Financial Times reports, a London-based hedge fund closed down earlier this year, in part because the investor found the “algorithmically-driven market … incompatible with our fundamental, research-oriented” way of doing business.
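How two individually reasonable algorithms can combine into behavior no analyst intended is easy to simulate. The sketch below uses hypothetical repricing multipliers loosely echoing the well-known 2011 Amazon textbook incident, in which two sellers' bots priced against each other until a biology book was listed for millions of dollars:

```python
# Minimal sketch of two interacting repricing rules (hypothetical
# multipliers): each bot sets its price relative to the other's,
# and together they spiral upward -- although neither rule,
# read on its own, looks wrong.

def simulate(days: int = 20) -> tuple[float, float]:
    a, b = 100.0, 100.0          # starting prices of two sellers
    for _ in range(days):
        a = 0.9983 * b           # seller A slightly undercuts seller B
        b = 1.2707 * a           # seller B prices above A for margin
    return a, b

a, b = simulate()
print(round(a, 2), round(b, 2))  # both prices have exploded far past $100
```

Each daily cycle multiplies both prices by roughly 1.27, so after a few weeks the market state is something no human would ever have quoted – emergent behavior of the interaction, not of either rule alone.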
Losing the human edge
Relying too much on automation poses an additional risk: humans may lose their skills and become unable to take over when the system fails. As Nicholas Carr pointed out in a blog post prompted by the fatal crash of a Tesla driving under computer control, there is a trade-off between computer automation and human skill and attentiveness. He draws a parallel to increasing automation in the aviation industry, which creates problems when it fails and pilots have to fly the plane manually: in such sudden handovers, pilots make more mistakes than usual. The balance between automation and human control is complex, and full automation is clearly not the right answer in all circumstances.
Automated supply chains
Running algorithms in the supply chain reduces costs and boosts efficiency. But will rising automation make these increasingly sophisticated networks more vulnerable to algorithmic surprises? The answer is a clear yes – if we go on using algorithms the way we have in the past. The experience of the financial and airline industries shows that running algorithms without human oversight carries real risks – at least for now.
There is little doubt that self-learning systems, such as AlphaGo, will become a bigger part of the solution. Although the software still has to prove that it can repeat such outcomes on other creative problems, it is already an incredible success. The algorithm behind it not only captures the essence of human creativity but also adds new aspects to it, in the form of new moves in the game.
However far we get with such algorithms, as Michael Luca argued in his Harvard Business Review article, we may still need close interaction with humans. First, algorithms use the data at hand, not necessarily the best data available – something experienced humans can certainly help with. Second, the human capability to train other humans can be extended to algorithms as well.
AlphaGo trained with the European Go champion before finally beating the world champion, and in return the European champion improved his own rating. So in the world of trade, commerce, and retail, the very human feel for buying behavior, fashion, and trends may still be needed to improve algorithms. Algorithms are very powerful at identifying patterns that even humans would miss, but we still need to watch their limitations closely and manage their impact on our business decisions accordingly.
The success of the self-learning algorithm AlphaGo is paving the way for a future of artificial intelligence. However, Tesla’s fatal car accident shows that we still have a long way to go. One thing is certain: we have to rethink how we build algorithms and how we let them interact with each other and with humans.