As I've pointed out elsewhere (Arguments from Nature, December 2010), the non-survival of the unfit (as implied by his phrase) is not logically equivalent to the survival of the fittest, and Darwinian analogies always need to be taken with a pinch of salt. However, Mark raises an important point about the limitations of algorithms, and the need for constant review and adaptation, to maintain what he calls algorithmic efficacy.
His examples fall into three types. Firstly, there are algorithms designed to anticipate and outwit human and social processes, from financial trading to fraud. Clearly these need to be constantly modified, otherwise the humans will learn to outwit the algorithms. Secondly, there are algorithms designed to compete with other algorithms. In both cases, the algorithms need to keep ahead of the competition and to avoid becoming predictable themselves. Following an evolutionary analogy, the mutual adaptation of fraud and anti-fraud tactics resembles the co-evolution of predator and prey.
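To make this concrete, here is a deliberately crude simulation of the first two types, entirely my own construction rather than anything from Mark's article. A fraud detector with a fixed decision threshold gradually loses its grip as the fraudsters adapt, while a detector whose threshold is re-fitted every round tracks the drift. All the numbers (amounts, thresholds, step sizes) are arbitrary.

```python
# Toy co-evolution of fraud and anti-fraud tactics (illustrative only).
# Fraudulent amounts drift towards legitimate ones to evade detection;
# a static threshold decays, while a re-fitted one tracks the drift.

import random

random.seed(42)

LEGIT_MEAN = 100.0            # typical legitimate amount (arbitrary units)
N = 1000                      # fraudulent transactions per round

fraud_mean = 300.0
static_threshold = 200.0      # fitted once, never updated
adaptive_threshold = 200.0    # re-fitted after every round

for r in range(10):
    frauds = [random.gauss(fraud_mean, 20.0) for _ in range(N)]
    static_rate = sum(a > static_threshold for a in frauds) / N
    adaptive_rate = sum(a > adaptive_threshold for a in frauds) / N
    print(f"round {r}: static detects {static_rate:.0%}, "
          f"adaptive detects {adaptive_rate:.0%}")

    # The adversary adapts: fraudulent amounts shift towards legitimate ones.
    fraud_mean = max(LEGIT_MEAN, fraud_mean - 25.0)
    # The anti-fraud team re-fits, placing its threshold midway between
    # the legitimate population and the drifted fraudulent one.
    adaptive_threshold = (LEGIT_MEAN + fraud_mean) / 2
```

By the later rounds the static detector is catching almost nothing, while the re-fitted one holds up until the fraudulent amounts converge on the legitimate ones and detection becomes a coin toss. The arms race never ends; it just moves on to new features.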
Mark also mentions a third type of algorithm, where the element of competition and the need for constant change is less obvious. His main example of this type is in the area of predictive maintenance, where the algorithm is trying to predict the behaviour of devices and networks that may fail in surprising and often inconvenient ways. It is a common human tendency to imagine that these devices are inhabited by demons -- as if a printer or photocopier deliberately jams or runs out of toner because it somehow knows when one is in a real hurry -- but most of us don't take this idea too seriously.
Where does surprise come from? Bateson suggests that it comes from an interaction between two contrary variables, probability and stability:

"There would be no surprises in a universe governed either by probability alone or by stability alone."

He points out that because adaptations in Nature are always based on a finite range of circumstances (data points), Nature can always present new circumstances (data) which undermine these adaptations. He calls this the caprice of Nature.
"This is, in a sense, most unfair. ... But in another sense, or looked at in a wider perspective, this unfairness is the recurrent condition for evolutionary creativity."
The problem with adaptation being based solely on past experience also arises with machine learning, which generally uses a large but finite dataset to perform inductive reasoning, in a way that is not transparent to humans. This probably works okay for predictive maintenance on relatively simple and isolated devices, but as devices and their interconnections get more complex, we shouldn't be too surprised if algorithms, whether based on human mathematics or machine learning, sometimes get caught out by the caprice of Nature. Or by so-called Black Swans.
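Bateson's point translates directly into the machine learning setting. Here is a minimal sketch, with entirely hypothetical numbers and failure behaviour, of a maintenance model fitted to a finite range of operating conditions. Within that range it looks adequate; outside it, Nature presents new circumstances and the predictions become nonsense.

```python
# Inductive reasoning from a finite dataset (illustrative only).
# A linear model is fitted to loads between 0 and 1; loads of 2 and 3
# are the "new circumstances" that undermine the adaptation.

import numpy as np

rng = np.random.default_rng(0)

def time_to_failure(load):
    # Hypothetical ground truth, unknown to the modeller: hours of life
    # left fall off exponentially with load.
    return 100.0 * np.exp(-1.0 * load)

observed_loads = rng.uniform(0.0, 1.0, 200)    # the finite historical data
observed_ttf = time_to_failure(observed_loads) + rng.normal(0.0, 2.0, 200)

# The inductive step: fit a straight line to everything seen so far.
slope, intercept = np.polyfit(observed_loads, observed_ttf, 1)

for load in (0.5, 1.0, 2.0, 3.0):
    predicted = slope * load + intercept
    actual = time_to_failure(load)
    print(f"load={load:.1f}  predicted={predicted:7.1f}h  actual={actual:5.1f}h")

# Within the training range the line is adequate; beyond it the model
# predicts a negative time-to-failure for a device that still has hours
# of life left.
```

No amount of extra data from the old operating range would fix this; the model is only ever adapted to the circumstances it has seen.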
This potential unreliability is particularly problematic in two cases. Firstly, when the algorithms are used to make critical decisions affecting human lives - as in justice or recruitment systems. (See, for example, Zeynep Tufekci's recent TED talk.) And secondly, when predictive maintenance has safety implications - from aircraft engines to medical implants.
One way of mitigating this risk might be to maintain multiple algorithms, developed by different teams using different datasets, in order to detect additional weak signals and generate "second opinions". Human experts could then look at the cases where the algorithms strongly disagree.
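A minimal sketch of how that might work, with all the model choices and thresholds invented for illustration: three "teams" each fit a simple model to their own slice of the historical data, and any case where the fitted models disagree by more than some limit is escalated to a human rather than trusted.

```python
# "Second opinions" from independently developed models (illustrative only).
# Each team fits its own quadratic to data from a different operating range;
# strong disagreement between the models is escalated for human review.

import numpy as np

def time_to_failure(load):
    # Hypothetical ground truth, unknown to all three teams.
    return 100.0 * np.exp(-1.0 * load)

# Three teams, three different (partially overlapping) datasets.
models = []
for seed, (lo, hi) in enumerate([(0.0, 0.6), (0.2, 0.8), (0.4, 1.0)]):
    rng = np.random.default_rng(seed)
    loads = rng.uniform(lo, hi, 150)
    ttf = time_to_failure(loads) + rng.normal(0.0, 2.0, 150)
    models.append(np.polyfit(loads, ttf, 2))   # each team's own model

DISAGREEMENT_LIMIT = 10.0    # hours; an arbitrary escalation threshold

for load in (0.5, 0.9, 1.5, 2.5):
    opinions = [float(np.polyval(m, load)) for m in models]
    spread = max(opinions) - min(opinions)
    verdict = "ESCALATE to human expert" if spread > DISAGREEMENT_LIMIT else "ok"
    print(f"load={load:.1f}  spread={spread:5.1f}h  {verdict}")
```

Where the teams' datasets cover the operating point, their opinions should broadly agree; well outside everyone's data, the spread should blow up, and that divergence is exactly the weak signal a human expert ought to see.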
This suggests that perhaps we shouldn't be too hasty to kill off algorithms with poor efficacy, but should sometimes keep them in the interests of algorithmic biodiversity. (There - now I'm using the evolutionary metaphor.)
Gregory Bateson, "The New Conceptual Frames for Behavioural Research". Proceedings of the Sixth Annual Psychiatric Institute (Princeton NJ: New Jersey Neuro-Psychiatric Institute, September 17, 1958). Reprinted in G. Bateson, A Sacred Unity: Further Steps to an Ecology of Mind (ed. R.E. Donaldson, New York: HarperCollins, 1991) pp. 93-110.
Mark Palmer, The emerging Darwinian approach to analytics and augmented intelligence (TechCrunch, 4 September 2016)
Zeynep Tufekci, Machine intelligence makes human morals more important (TED Talk, filmed June 2016)
Related Posts
The Transparency of Algorithms (October 2016)