Hans writes:
"if people had not deluded themselves into ignoring obvious risks, they would have put more realistic calculations in their spreadsheets, resulting in better judgment that would, AFAIK, have prevented the credit crisis"
Hans is not alone in making this kind of assertion, which appears to be largely based on an optimistic belief in the power of technology to solve problems. But my understanding of economics is that it is a pessimistic (or "dismal") science, in which periodic crises are pretty much unavoidable. Like forest fires or domestic arguments, they can be delayed or suppressed for a while, but that simply makes them bigger when they eventually break out. Certain politicians claimed that "we will never return to the old boom and bust" but, as we now discover, presided over a much larger boom and bust [Channel Four News]. The idea that there is any technocratic solution to fundamental economic problems seems like hubris.
I am also worried by Hans' appeal to "better judgement". Of course we'd all like to think that SOA produces better judgements, so I tried looking for evidence of this, but an internet search for "SOA" and "better judgement" mostly found pages in which the words "better judgement" were preceded by the words "against" or "despite". Oh dear.
But even if SOA does produce better judgements, there is still a problem: better for whom? The service-oriented world is essentially a distributed one, with no central judge, no supreme court. So there is no fixed standard of value.
There was an interesting example just yesterday on a BBC Radio programme called More or Less. Paul Wilmott (described by the BBC as a "financial mathematics guru") explained how a typical bonus system motivates financial traders to copy one another in order to maximize their expected bonus, even though this inhibits portfolio diversification and therefore reduces the overall stability of the bank. So what counts as a better judgement for the trader does not necessarily count as better for the bank and its customers. If you provide better information to the trader, this may well help the trader maximize his bonus, but doesn't necessarily produce better results for anyone else. (The argument can also be found on Paul's blog, in a post called Science in Finance V: Diversification.)
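To make the incentive concrete, here is a small Monte Carlo sketch of my own (a toy illustration, not Wilmott's actual model). Assume a trader is paid a fixed share of any positive P&L with no clawback, and suffers a career penalty only when he loses money while the crowd makes money. Under those assumptions, copying the crowded trade maximizes his expected bonus, yet leaves the bank's combined book far more volatile than if its traders held independent positions. All the parameters (bonus rate, penalty, return distribution) are invented purely for illustration.

```python
# Toy sketch (my own illustration, not Wilmott's model): a trader with a convex
# bonus who is penalised only for being "wrong alone" maximises expected bonus
# by copying the herd, even though this raises the variance of the bank's book.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000            # simulated trading years
mu, sigma = 0.02, 0.15   # same expected return for every trade (assumed)
bonus_rate = 0.10        # bonus as a share of positive P&L (assumed)
career_penalty = 0.05    # cost of losing money while the herd makes money (assumed)

herd = rng.normal(mu, sigma, N)   # the crowded trade everyone else holds
solo = rng.normal(mu, sigma, N)   # an independent, diversifying trade

def expected_bonus(pnl, herd_pnl):
    bonus = bonus_rate * np.maximum(pnl, 0.0)   # convex payoff, no clawback on losses
    wrong_alone = (pnl < 0) & (herd_pnl > 0)    # underperforming while the crowd wins
    return np.mean(bonus - career_penalty * wrong_alone)

print("copy the herd:", expected_bonus(herd, herd))
print("diversify    :", expected_bonus(solo, herd))

# The bank's book: ten traders all copying the same trade versus ten traders
# holding independent trades with the same expected return.
copied = 10 * herd
diversified = sum(rng.normal(mu, sigma, N) for _ in range(10))
print("bank variance, all copying:", copied.var())
print("bank variance, diversified:", diversified.var())
```

On these made-up numbers the copying trader collects a clearly higher expected bonus, while the bank's variance is roughly ten times what it would be with independent positions; which is the trader/bank misalignment in miniature.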
Of course it's easy enough to see what the problem is here: the bonus system is wrong. But if you design computer system improvements without considering the inevitable imperfections in human systems, you are very likely to get a nasty shock.
In any case, even if you had a bonus system that perfectly aligned the motivation of the trader with the interests of the bank, and even if you could produce a computer system that perfectly calculated the True Value of every asset in the world, traders wouldn't use these calculations because they wouldn't generate enough money. What traders really want is a recursive system that calculates what every other trader thinks the value is, in order to spot market movements a few moments before everyone else. But why would anyone expect the widespread possession of such systems to make the global economy any less unstable?
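As a toy illustration of that recursion (my own sketch, not anything from the programme or the post), suppose each trader starts from a noisy private estimate of fundamental value, then at each round quotes what he believes the average trader will quote, marked up slightly so as to be in a few moments early. The numbers are invented; the point is only that once quotes are anchored on beliefs about other people's beliefs, they drift away from fundamentals rather than towards them.

```python
# Toy sketch (my illustration): recursive "what does everyone else think it is
# worth?" pricing detaches quotes from fundamental value.
import numpy as np

rng = np.random.default_rng(1)
fundamental = 100.0
n_traders = 50
front_run_premium = 1.01   # assumed mark-up for getting in "a few moments" early

# Round 0: quotes anchored on noisy private estimates of fundamental value.
quotes = fundamental + rng.normal(0.0, 5.0, n_traders)
print(f"round 0: mean quote = {quotes.mean():6.2f}  (fundamental = {fundamental})")

# Round k: each trader quotes what he thinks the average trader will quote,
# observed with a little noise, marked up by the front-running premium.
for level in range(1, 6):
    believed_average = quotes.mean() + rng.normal(0.0, 1.0, n_traders)
    quotes = front_run_premium * believed_average
    print(f"round {level}: mean quote = {quotes.mean():6.2f}")
```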
There are many possible positive contributions that SOA might make to the operation and supervision of complex trading systems. But if we want these systems to produce good results for the right people, we have to do some really hard systems thinking, and not just hope that speeding things up is going to solve all the problems.