Sunday, October 25, 2009

Towards an Architecture of Privacy

@futureidentity (Robin Wilton) posted some interesting ideas about Identity versus attributes on his blog.

"For an awful lot of service access decisions, it's not actually important to know who the service requester is - it's usually just important to know some particular thing about them. Here are a couple of examples:

  • If someone wants to buy a drink in a bar, it's not important who they are, what's important is whether they are of legal age;
  • If someone needs a blood transfusion, it's more important to know their blood type than their identity."
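Robin's first example can be sketched as a pure attribute check: the verifier sees a single assertion from a trusted issuer and never learns the requester's identity. This is a minimal illustration only; the class and issuer names are hypothetical, not taken from any real identity framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttributeAssertion:
    """A hypothetical credential asserting one fact about its holder,
    without naming the holder."""
    attribute: str   # e.g. "over_18"
    value: bool
    issuer: str      # the party vouching for the attribute

# Issuers the barman's verifier is willing to believe (illustrative names)
TRUSTED_ISSUERS = {"DVLA", "PassportOffice"}

def may_buy_drink(assertion: AttributeAssertion) -> bool:
    """Authorize on the attribute alone: no name, no date of birth."""
    return (assertion.issuer in TRUSTED_ISSUERS
            and assertion.attribute == "over_18"
            and assertion.value)

print(may_buy_drink(AttributeAssertion("over_18", True, "DVLA")))      # True
print(may_buy_drink(AttributeAssertion("over_18", True, "RandomBar"))) # False
```

The point of the sketch is what the verifier does *not* receive: the decision function has no identity parameter at all, which is exactly the property the rest of this post puts under pressure.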

However, there is an important difference between Robin's two examples. Blood transfusion is a transaction with longer-lasting consequences. If a batch of blood is contaminated, there seems to be a legitimate regulatory requirement to trace forwards (who received this blood) and backwards (who donated this blood), in order to limit the consequences of this contamination event and to prevent further occurrences.

There is a strong demand for increasing traceability. In manufacturing, we want to trace every manufactured item to a specific batch, and associate each batch with specific raw materials and employees. In food production, we want to trace every portion back to the farm, so that salmonella outbreaks can be blamed on the farmer. See Information Sharing and Joined-Up Services 1, 2.

Transactions that were previously regarded as isolated ones are now increasingly joined-up. The eggs that go into the custard tart you buy in the works canteen used to be anonymous, but in future they won't be. See Labelling as Service 1, 2.

There is also a strong demand for increased auditability. So it is not enough for the barman to check the drinker's age, the barman must keep a permanent record of having diligently carried out the check. It is apparently not enough for the hotel or bank clerk to look at my passport, they must retain a photocopy of my passport in order to remove any suspicion of collusion. (The bank not only mistrusts its customers, it also mistrusts its employees.)

There is a large (and growing) class of situations where so-called joined-up-thinking seems to require the negation of privacy. I am certainly not saying that this reasoning should always trump the needs of privacy. But privacy campaigners need to understand that all transactions belong within some system of systems, and that this provides the context for the forces they are battling against, rather than pretending that transactions can be regarded as purely isolated events. The point is that authorization is not an isolated event, but is embedded in a larger system, and it is this larger system that apparently requires greater disclosure and retention.

@j4ngis asks how long the chains of traceability should be. What "length" of traceability is sound and meaningful? How do we connect all these traces, both backward and forward in the "chain"? And for how long should records be kept?
  • Should we also know the batch number for the food that was given to the chicken that laid the egg you included in the cake?
  • Do we have to know the identity of the blood donor after six months? 10 years? 100 years?
The trouble is that there is no rational basis for drawing the line. It is always possible that some contamination in the chicken feed might affect the eggs and thereby the custard tart. It is always possible that the hyperactivity of certain schoolchildren, or the testosterone levels of certain adults, might be traced back to some contamination in the food chain. It is always possible that some obscure data correlation might one day save lives or protect children. And given the vanishing costs of data management, even a faint possibility of future benefit appears to provide sufficient reason for collecting and storing the data.
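The open-endedness of these questions shows up clearly if we model the supply chain as a graph and trace it mechanically: the same walk that answers "what went into this tart?" keeps going, hop after hop, with no natural stopping point. The batch names and edges below are invented for illustration.

```python
from collections import defaultdict, deque

# Hypothetical supply-chain records: each product batch lists the
# input batches it was made from (feed -> egg -> custard tart).
made_from = {
    "custard-tart-77": ["egg-batch-12", "milk-batch-3"],
    "egg-batch-12": ["chicken-feed-5"],
}

# Invert the edges for forward tracing (what was this batch used in?)
used_in = defaultdict(list)
for child, parents in made_from.items():
    for parent in parents:
        used_in[parent].append(child)

def trace(start, graph):
    """Breadth-first walk: every batch reachable from `start`.
    Note there is no depth limit -- the chain ends only where the
    records do, which is exactly the problem the post describes."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Backward: what went into this tart?
print(sorted(trace("custard-tart-77", made_from)))
# -> ['chicken-feed-5', 'egg-batch-12', 'milk-batch-3']

# Forward: where did the (possibly contaminated) feed end up?
print(sorted(trace("chicken-feed-5", used_in)))
# -> ['custard-tart-77', 'egg-batch-12']
```

Nothing in the algorithm tells us where to cut the chain; any cutoff (in hops, or in years of retention) has to be imposed as policy from outside.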

Robin clearly supposes that attribute-based authorization is a "Good Thing". I am sympathetic to this view, but I don't know how this view can stand up against the kind of sustained attack from a certain flavour of joined-up systems thinking that can almost always postulate the possibility (however faint) of saving lives or protecting children or catching criminals, if only we can retain everything and trace everything.

For my part, I have a vague desire for anonymity and privacy, a vague sense of the harm that might come to me as a result of breaches to my privacy, and a surge of annoyance when I am required to provide all sorts of personal data for what I see as unreasonable purposes, but I cannot base an architecture on any of these feelings.

Traditional arguments for data protection may seem to be merely rearguard resistance to integrated and joined-up systems. Traditional architectures for data protection look increasingly obsolete. But what alternatives are there?

Update May 2016

Traceability requirements for Human Blood and Blood Components are specified in Directive 2005/61/EC of the European Parliament and of the Council of 30 September 2005 (pdf - 63KB).

Robin's point was that blood type was more important than identity, and of course this is true. Donor and recipient identity must be retained for 30 years, but that doesn't mean sharing this information with everybody in the blood supply chain.


Chris Bird said...

Joined-up systems require the T of VPEC-T to become a whole lot more explicit. Essentially it seems as if, since anonymity is becoming less possible, we have to manage the trust boundaries properly. In the bar example, the barman has to prove to an untrusting regulatory authority that s/he has checked the age of all drinkers, has monitored the state of impairment of all drinkers, etc., equally. So perhaps when doing Trust work in VPEC-T (not that things are really separated) we should ask ourselves, "What set of untrusted people can, post hoc, demand to see that the proper regulations were obeyed?" That can fundamentally change the way systems work.

Anonymous said...

Thanks Richard -

You're definitely right that my two examples are different (and as ever, I could and should have spent more time picking them)...

Your point about traceability of blood transfusions is a fair one (though as a matter of routine I also have to issue my standard plea against taking the classic "unconscious patient in A&E" scenario as the optimum design point or the most sensible use-case to start from).

But when it comes to the barman checking age, I have to push back. What risk are you trying to mitigate by having the barman keep an identifiable audit trail of who has been carded - and is your proposed mitigation really proportionate?

Richard Veryard said...

When I say "demand", I don't imply that we always have to accede to such demands, merely that we need to have some way of answering them.

Let's suppose that the barman's perceived risk is that he might end up in court, accused of serving an under-age drinker.

You might say that this perceived risk is exaggerated. You might want the barman to justify keeping records against a proper risk assessment, together with legal advice. Is this a serious risk, and what kind of audit trail would provide a legitimate defence?

However, most people and organizations are not prepared to carry out detailed risk assessments and hire expensive legal experts to anticipate every possible legal action. It may seem much easier and quicker to keep records indiscriminately than to decide which circumstances justify keeping records.

You can't ask the barman to be "proportionate". The barman is surely entitled to minimize his risk of adverse court action. The problem is in a wider system where a barman (or his employer) may feel vulnerable, and where keeping records appears to provide some safeguard.

We live in a mistrustful culture in which people are called upon to provide audit trails in a wide variety of situations.

So there are strong institutional and cultural expectations that run counter to the demands of privacy.

Anonymous said...

I reckon I can (and even should) ask the barman to be proportionate... I can, because as long as I have a choice I might well decide to drink in a pub other than the Orwell Arms.... ;^) I should, because that's exactly what we should be doing to over-intrusive service-providers: sending a message that over-collection is offensive and inappropriate. I think there is consumer power there, if we choose to exercise it.

And I don't think society is against that notion in principle - I just think most people's risk assessment is under-informed (and not least because some of the information in question would have to come from the service providers, who have little interest in changing the status quo).