"Many of us routinely - and even blindly - rely on the advice of algorithms in all aspects of our lives, from choosing the fastest route to the airport to deciding how to invest our retirement savings. But should we trust them as much as we do?"
Dr Packin's main point concerns the fallibility of algorithms and the excessive confidence people place in them. @AnnCavoukian reinforces this point.
"Always look under the hood, especially with AI: we need to, not only examine the algorithms used, but the training datasets that shape the algorithms. We have to find ways to avoid the tyranny of algorithms!" — Ann Cavoukian, Ph.D. (@AnnCavoukian), June 15, 2019
But there is another reason to be wary of the advice of the algorithm, summed up by the question: Whom does the algorithm serve?
Because the algorithm is not working for you alone. There are many people trying to get to the airport, and if they all use the same route they may all miss their flights. If the algorithm is any good, it will be advising different people to use different routes. (Most well-planned cities have more than one route to the airport, to avoid a single point of failure.) So how can you trust the algorithm to give you the fastest route? However much you may be paying for the navigation service, someone else may be paying a lot more. For the road less travelled.
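The point can be made concrete with a toy model. Below is a minimal sketch (all names, numbers, and the congestion formula are my own illustrative assumptions, not any real navigation service's method) of why a shared routing algorithm cannot honestly tell everyone the same "fastest" route: each added car slows a road down, so a sensible service ends up spreading its users across routes, and no individual user can verify they got the best one.

```python
# Hypothetical congestion model: travel time grows as a road fills up.
def travel_time(base_minutes, cars, capacity):
    """Estimated minutes on a route carrying `cars` vehicles."""
    return base_minutes * (1 + cars / capacity)

def assign_routes(drivers, routes):
    """Greedily send each driver down whichever route is currently fastest.

    `routes` maps a route name to (free-flow minutes, capacity in cars).
    Returns how many drivers ended up on each route.
    """
    load = {name: 0 for name in routes}
    for _ in range(drivers):
        # Pick the route with the lowest *current* estimated time.
        best = min(
            routes,
            key=lambda r: travel_time(routes[r][0], load[r], routes[r][1]),
        )
        load[best] += 1
    return load

# Two routes to the airport (illustrative numbers).
routes = {"highway": (20, 100), "back_roads": (30, 100)}
print(assign_routes(1000, routes))
```

With 1000 drivers, the greedy assignment pushes both routes toward roughly equal travel times, so a large minority of users are deliberately sent down the nominally slower road. From each of those users' point of view, the algorithm did not give them "the fastest route" in any simple sense; it gave them a share of a managed compromise.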
The same is obviously true of investment advice. The best time to buy a stock is just before everyone else buys, and the best time to sell is just after everyone else has bought. This creates massive opportunities for unethical behaviour when advising people where and when to invest their retirement savings, and there is no reason to suppose that the people programming the algorithms are immune from this temptation, or that regulators are able to protect investors properly.
So remember the Weasley Doctrine: "Never trust anything that can think for itself if you can't see where it keeps its brain."
Nizan Geslevich Packin, Why Investors Should Be Wary of Automated Advice (Wall Street Journal, 14 June 2019)
Related posts: Towards Chatbot Ethics (May 2019), Whom does the technology serve? (May 2019)