The system in question includes the OnStar operator as well as the device in the stolen car. Gary experienced the OnStar operator as unhelpful and obstructive, and some members of the police force apparently shared Gary's expectations and frustrations. In particular, the OnStar operator was not willing to release information over the telephone to a person who claimed to be the owner, nor even to a person who claimed to be a police officer, but instead initiated a time-consuming verification procedure; by the time this was complete, the opportunity to catch the thief and recover the vehicle had disappeared. (The thief is thought to have tampered with the OnStar device, because the vehicle was not found at the location reported by the OnStar system.)
"Their first response should be to aid their verified customers instead of fearing a law suit." (How OnStar turned me off)
A comment to Gary's blog offers an alternative perspective.
"Seems like OnStar was right on the money to me. Otherwise, people could use OnStar to track their wife or kids - or use it to track somebody down and kill them. It is all about privacy. Today, most consumers would prefer a little more privacy and verification/security."
While I sympathize with Gary's position, I think the comment makes a fair point. There is a complex set of conflicting requirements here. OnStar has a set of procedures that reflect some policies about privacy and security, and it may well have been these policies that inhibited the kind of response that Gary expected.
So I took a look at OnStar's own description of its system. The "Safe and Sound" plan includes a service called Stolen Vehicle Location Assistance, and quotes a satisfied customer whose car was found within an hour of its theft being reported. While OnStar does not promise to locate every stolen car within any given time period, the emphasis seems to be on finding vehicles that have been abandoned by the thief, rather than vehicles that are still being driven away.
If Gary had read the small print on the OnStar contract, would he have been able to work out OnStar's likely behaviour in advance? Not necessarily - policies like these are often not declared as part of the contract, but may only become visible when a particular situation arises.
This example prompts me to make four general points for SOA. There are some common patterns in the way systems and services are governed by policies, and it doesn't seem to make much difference whether the services are software-based or clerical/bureaucratic.
Firstly, during a crisis your freedom of action may be constrained by policies in unexpected ways. And it is often impossible to get such policies changed or overridden in the heat of the moment.
Secondly, policies (even badly implemented ones) are usually there for a reason. Even if you have the authority to make hasty and piecemeal changes to policies, it's not always a good idea. (Indeed, it is the badly implemented policies that are most likely to cause worse problems when you try to change them quickly.)
Thirdly, you may not be told in advance what the policies are; you may have to infer them from the behaviour of the counterparty, often only when a crisis arises. (There is a popular belief that we learn more about other people during a crisis.) For example: when there is a major crisis, does your service provider allocate scarce resources equitably between its customers, or do some categories of customer get special treatment?
And fourthly, the aggregate behaviour of complex networks of interacting services depends on the composition of the policies governing these services. So you have to find non-crisis time to think about policies and their (emergent, composite) effects.
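To make the fourth point concrete, here is a minimal sketch in Python. The policies, names and numbers are all hypothetical (they are not OnStar's actual rules); the sketch simply shows how policies that each look reasonable in isolation can compose into an end-to-end service with a surprisingly narrow window of usefulness - roughly the situation Gary ran into.

```python
# A minimal sketch (all names and numbers hypothetical) of how individually
# reasonable policies can compose into surprising end-to-end behaviour.

from dataclasses import dataclass


@dataclass
class Request:
    requester: str            # who is asking for the vehicle's location
    verified: bool = False    # has identity verification completed?
    elapsed_minutes: int = 0  # time since the theft was reported


# Each policy is a simple predicate over the request: allow or deny.

def privacy_policy(req: Request) -> bool:
    # Never release location data to an unverified caller.
    return req.verified


def verification_policy(req: Request) -> bool:
    # Verification itself takes time; model it as a 30-minute procedure,
    # so the request cannot be serviced before that.
    return req.elapsed_minutes >= 30


def usefulness_policy(req: Request) -> bool:
    # Assume a moving vehicle is only realistically recoverable
    # within the first hour.
    return req.elapsed_minutes <= 60


def compose(policies):
    # A composite service allows a request only if every policy allows it.
    return lambda req: all(policy(req) for policy in policies)


stolen_vehicle_service = compose(
    [privacy_policy, verification_policy, usefulness_policy]
)

# Each policy is defensible on its own, but their conjunction leaves only
# a narrow window (verified, between 30 and 60 minutes) in which the
# composite service can actually help.
req = Request(requester="owner", verified=True, elapsed_minutes=75)
print(stolen_vehicle_service(req))  # False: verified, but too late
```

The point of the sketch is not the particular numbers but the conjunction: the composite service only helps when every policy is satisfied at once, and nobody designing any single policy chose that narrow window. That emergent effect is exactly what needs to be examined in non-crisis time.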
Technorati Tags: OnStar policy SOA service-oriented