Saturday, June 04, 2016

As How You Drive

I have been discussing Pay As You Drive (PAYD) insurance schemes on this blog for nearly ten years.

The simplest version of the concept varies your insurance premium according to the quantity of driving - Pay As How Much You Drive. But for obvious reasons, insurance companies are also interested in the quality of driving - Pay As How Well You Drive - and several companies now offer a discount for "safe" driving, based on avoiding events such as hard braking, sudden swerves, and speed violations.

Researchers at the University of Washington argue that each driver has a unique style of driving, including steering, acceleration and braking, which they call a "driver fingerprint". They claim that drivers can be quickly and reliably identified from the braking event stream alone.
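
To make the claim concrete, here is a minimal sketch of the general idea - not the researchers' actual pipeline - in which per-window statistics of a brake-pedal signal are fed to an off-the-shelf classifier. The feature set, the synthetic data and the choice of classifier are all illustrative assumptions on my part.

```python
# Minimal sketch of the "driver fingerprint" idea: summarize braking behaviour
# into per-window features and train a classifier to recognize known drivers.
# NOT the researchers' pipeline; features, data and model are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def braking_features(brake_signal, window=200):
    """Turn a brake-pedal time series into one feature vector per window."""
    feats = []
    for start in range(0, len(brake_signal) - window, window):
        w = brake_signal[start:start + window]
        feats.append([w.mean(), w.std(), w.max(), np.abs(np.diff(w)).mean()])
    return np.array(feats)

# Hypothetical recorded traces for two known drivers.
rng = np.random.default_rng(0)
traces = {"driver_a": rng.normal(0.3, 0.05, 5000),
          "driver_b": rng.normal(0.5, 0.15, 5000)}

X_parts, y_parts = [], []
for name, trace in traces.items():
    f = braking_features(trace)
    X_parts.append(f)
    y_parts.extend([name] * len(f))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.vstack(X_parts), y_parts)

# Identify the driver of a newly observed (unlabelled) trace.
unknown = rng.normal(0.5, 0.15, 1000)
print(clf.predict(braking_features(unknown)))
```

In a real evaluation one would aggregate predictions across many windows; the sketch is only meant to show the shape of the approach.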

Bruce Schneier posted a brief summary of this research on his blog without further comment, but a range of comments were posted by his readers. Some expressed scepticism about the reliability of the algorithm, while others pointed out that driver behaviour varies according to context - people drive differently when they have their children in the car, or when they are driving home from the pub.

"Drunk me drives really differently too. Sober me doesn't expect trees to get out of the way when I honk."

Although the algorithm produced by the researchers may not allow for this kind of complexity, there is no reason in principle why a more sophisticated algorithm couldn't allow for it. I have long argued that JOHN-SOBER and JOHN-DRUNK should be understood as two different identities, with recognizably different patterns of behaviour and risk. (See my post on Identity Differentiation.)

However, the researchers are primarily interested in the opportunities and threats created by the possibility of using the "driver fingerprint" as a reliable identification mechanism.

  • Insurance companies and car rental companies could use "driver fingerprint" data to detect unauthorized drivers.
  • When a driver denies being involved in an incident, "driver fingerprint" data could provide relevant evidence.
  • The police could remotely identify the driver of a vehicle during an incident.
  • "Driver fingerprint" data could be used to enforce safety regulations, such as the maximum number of hours driven by any driver in a given period.

While some of these use cases might be justifiable, the researchers outline various scenarios where this kind of "fingerprinting" would represent an unjustified invasion of privacy, observe how easy it is for a third party to obtain and abuse driver-related data, and call for a permission-based system for controlling data access between multiple devices and applications connected to the CAN bus within a vehicle. (CAN is a low-level protocol, and does not support any security features intrinsically.)
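
The call for permission-based access control could look something like the following sketch: applications never read the bus directly, and a gateway consults a per-application grant table before releasing a decoded signal. The class, signal names and grant model are my own assumptions, not anything specified by the researchers or by the CAN standard.

```python
# Hedged sketch of a permission-based gateway in front of the CAN bus:
# an application may only read signals it has been explicitly granted.
# Names and signal identifiers are illustrative, not a real CAN database.
from dataclasses import dataclass, field

@dataclass
class CanGateway:
    # app_id -> set of signal names the app may read
    grants: dict = field(default_factory=dict)

    def grant(self, app_id: str, signal: str) -> None:
        self.grants.setdefault(app_id, set()).add(signal)

    def read(self, app_id: str, signal: str, raw_bus: dict):
        if signal not in self.grants.get(app_id, set()):
            raise PermissionError(f"{app_id} may not read {signal}")
        return raw_bus[signal]

gw = CanGateway()
gw.grant("insurance_dongle", "vehicle_speed")

bus_snapshot = {"vehicle_speed": 48.0, "brake_pedal_position": 0.72}
print(gw.read("insurance_dongle", "vehicle_speed"))        # allowed
# gw.read("insurance_dongle", "brake_pedal_position")      # would raise PermissionError
```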


Sources

Miro Enev, Alex Takakuwa, Karl Koscher, and Tadayoshi Kohno, Automobile Driver Fingerprinting, Proceedings on Privacy Enhancing Technologies, 2016 (1): 34–51

Andy Greenberg, A Car’s Computer Can ‘Fingerprint’ You in Minutes Based on How You Drive (Wired, 25 May 2016)

Bruce Schneier, Identifying People from their Driving Patterns (30 May 2016)

See also John H.L. Hansen, Pinar Boyraz, Kazuya Takeda, Hüseyin Abut, Digital Signal Processing for In-Vehicle Systems and Safety. Springer Science and Business Media, 21 Dec 2011

Wikipedia: CAN bus, Vehicle bus


Related Posts

Identity Differentiation (May 2006)

Pay As You Drive (October 2006) (June 2008) (June 2009)

Friday, June 04, 2010

Ghetto Wifi 2

Following my post on Ghetto Wifi, I passed a pub yesterday with a sign "Free wifi for customers only".

This made me think of another perspective on the problem. How long do you remain a customer after you have bought a drink? One hour? One day? Is there some kind of dwindling right to use the facilities as time passes? If you bought a coffee within the last half hour, then you are a full-status customer with full rights; if you haven't bought a coffee in the last two hours, then maybe your customer rights are weaker.


Let's look at some more examples. If you have lunch in a restaurant, and then do some shopping, can you then go back to the restaurant to use the toilet before driving home? (If you left a reasonable tip, then maybe the aura of "customerhood" still lingers.) If you have lunch in a pub, can you leave your car in the "customers only" car park for the rest of the day? Some establishments have strict limits - for example, some supermarkets have free parking for two hours, and then start clamping or ticketing people who stay longer.

Obviously there will always be some people who try to cheat - to use the facilities without buying anything at all. The point here isn't whether this is possible, or the extent to which the establishment tries to make you feel welcome (in order to convert you into a future customer) or unwelcome (to reserve its facilities for genuine customers), but what moral rights you have in the first place.

Some people might think that "customers only" goes without saying - you shouldn't need a notice - but presumably there are some people whose behaviour will be influenced by a reminder that these facilities are intended for genuine customers.

So it comes down to a question of what counts as a genuine customer, and for how long.

Sunday, October 25, 2009

Towards an Architecture of Privacy

@futureidentity (Robin Wilton) posted some interesting ideas about Identity versus attributes on his blog.

"For an awful lot of service access decisions, it's not actually important to know who the service requester is - it's usually just important to know some particular thing about them. Here are a couple of examples:

  • If someone wants to buy a drink in a bar, it's not important who they are, what's important is whether they are of legal age;
  • If someone needs a blood transfusion, it's more important to know their blood type than their identity."
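
As a minimal sketch of Robin's first example - purely my own illustration, with a shared-secret issuer standing in for whatever real trust infrastructure would be used - the bar could verify a signed "over legal age" claim without ever learning the drinker's name:

```python
# Hedged sketch of attribute-based authorization: the verifier checks an
# attribute asserted by a trusted issuer, not an identity. The issuer key
# and claim format are hypothetical.
import hmac, hashlib, json

ISSUER_KEY = b"shared-secret-with-trusted-issuer"   # stand-in for a real PKI

def issue_claim(attributes: dict) -> dict:
    payload = json.dumps(attributes, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_claim(claim: dict, required: dict) -> bool:
    payload = claim["payload"].encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claim["tag"]):
        return False                      # tampered, or not from the issuer
    attrs = json.loads(payload)
    return all(attrs.get(k) == v for k, v in required.items())

claim = issue_claim({"over_legal_drinking_age": True})   # no name, no birth date
print(verify_claim(claim, {"over_legal_drinking_age": True}))   # True
```

The point is simply that the verifier learns a fact about the requester, and nothing more.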

However, there is an important difference between Robin's two examples. Blood transfusion is a transaction with longer-lasting consequences. If a batch of blood is contaminated, there seems to be a legitimate regulatory requirement to trace forwards (who received this blood) and backwards (who donated this blood), in order to limit the consequences of this contamination event and to prevent further occurrences.

There is a strong demand for increasing traceability. In manufacturing, we want to trace every manufactured item to a specific batch, and associate each batch with specific raw materials and employees. In food production, we want to trace every portion back to the farm, so that salmonella outbreaks can be blamed on the farmer. See Information Sharing and Joined-Up Services 1, 2.

Transactions that were previously regarded as isolated are now increasingly joined up. The eggs that go into the custard tart you buy in the works canteen used to be anonymous, but in future they won't be. See Labelling as Service 1, 2.

There is also a strong demand for increased auditability. So it is not enough for the barman to check the drinker's age; the barman must keep a permanent record of having diligently carried out the check. It is apparently not enough for the hotel or bank clerk to look at my passport; they must retain a photocopy of my passport in order to remove any suspicion of collusion. (The bank not only mistrusts its customers, it also mistrusts its employees.)

There is a large (and growing) class of situations where so-called joined-up-thinking seems to require the negation of privacy. I am certainly not saying that this reasoning should always trump the needs of privacy. But privacy campaigners need to understand that all transactions belong within some system of systems, and that this provides the context for the forces they are battling against, rather than pretending that transactions can be regarded as purely isolated events. The point is that authorization is not an isolated event, but is embedded in a larger system, and it is this larger system that apparently requires greater disclosure and retention.

@j4ngis asks how long the chains of traceability should be. What "length" of traceability is sound and meaningful? How do we connect all these traces, both backwards and forwards along the "chain"? And for how long should records be kept?
  • Should we also know the batch number for the food that was given to the chicken that laid the egg you included in the cake?
  • Do we have to know the identity of the blood donor after six months? 10 years? 100 years?
The trouble is that there is no rational basis for drawing the line. It is always possible that some contamination in the chicken feed might affect the eggs and thereby the custard tart. It is always possible that the hyperactivity of certain schoolchildren, or the testosterone levels of certain adults, might be traced back to some contamination in the food chain. It is always possible that some obscure data correlation might one day save lives or protect children. And given the vanishing costs of data management, even a faint possibility of future benefit appears to provide sufficient reason for collecting and storing the data.
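
One way to make the question concrete - with purely illustrative item names and an invented depth parameter - is to treat provenance as a graph and the "length" of traceability as an explicit limit on how far back we walk it:

```python
# Hedged sketch of the traceability question: each item records its direct
# inputs, and "how far back do we trace?" becomes an explicit depth limit.
provenance = {
    "custard_tart_0412": ["egg_batch_77", "flour_batch_12"],
    "egg_batch_77": ["chicken_feed_batch_9", "farm_A_flock_3"],
    "chicken_feed_batch_9": ["maize_shipment_2005_11"],
}

def trace_back(item: str, max_depth: int) -> set:
    """Return everything reachable upstream of `item` within max_depth hops."""
    if max_depth == 0:
        return set()
    found = set()
    for source in provenance.get(item, []):
        found.add(source)
        found |= trace_back(source, max_depth - 1)
    return found

print(trace_back("custard_tart_0412", max_depth=1))  # just the egg and flour batches
print(trace_back("custard_tart_0412", max_depth=3))  # reaches the chicken feed too
```

The sketch makes the arbitrariness visible: nothing in the data structure tells you what max_depth ought to be.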

Robin clearly supposes that attribute-based authorization is a "Good Thing". I am sympathetic to this view, but I don't know how this view can stand up against the kind of sustained attack from a certain flavour of joined-up systems thinking that can almost always postulate the possibility (however faint) of saving lives or protecting children or catching criminals, if only we can retain everything and trace everything.

For my part, I have a vague desire for anonymity and privacy, a vague sense of the harm that might come to me as a result of breaches to my privacy, and a surge of annoyance when I am required to provide all sorts of personal data for what I see as unreasonable purposes, but I cannot base an architecture on any of these feelings.

Traditional arguments for data protection may seem to be merely rearguard resistance to integrated and joined-up systems. Traditional architectures for data protection look increasingly obsolete. But what alternatives are there?


Update May 2016

Traceability requirements for Human Blood and Blood Components are specified in Directive 2005/61/EC of the European Parliament and of the Council 30 September 2005 (pdf - 63KB)

Robin's point was that blood type was more important than identity, and of course this is true. Donor and recipient identity must be retained for 30 years, but that doesn't mean sharing this information with everybody in the blood supply chain.

Wednesday, December 31, 2008

Subscribe and Follow - Towards Complex Identity

Very interesting post from JP ("Confused of Calcutta") about what he calls the customer perspective. I want to call it something else, not for the sake of disagreeing with JP, but in the hope of adding another layer.

But first, let me pull out some of JP's requests.

  • The ability to subscribe to a particular combination of topic and author ("JP only when talking about cricket"), or more advanced combinations ("JP except when talking about cricket").
  • The ability to control the granularity of information. ("I don't want to know every song JP is listening to, but it would be nice to have an occasional dip, plus the adhoc ability to drill down into his complete listening history.") Note: this is related to the cybernetic understanding of amplification and attenuation - see for example Cyril on Business Intelligence.

Let me also add some telecom requests from Martin Geddes.

  • The ability to speak to my wife if and only if she is not putting the baby to sleep.
  • The ability to speak to an operator who speaks Spanish.

The way I interpret all these requests is the construction of a new kind of complex identity. The person I want to subscribe to or follow, the person I want to speak to, is a constructed person with such-and-such characteristics. I don't want to make any assumptions about Martin's wife, whom I've never met, but my general idea is that a woman-when-putting-baby-to-sleep is not the same person as woman-when-talking-to-her-husband, just as man-interrupted-in-meeting is not the same person as man-sitting-in-hotel-room-talking-to-wife. Similarly, I could be interested in the person identified as "JP-talking-seriously-about-social-networking" but not in the person identified as "JP-talking-humorously-about-cricket".

What I'm doing here is replacing a simple common-sense notion of personal identity (Martin is always Martin, JP is always JP) with a much more fluid, almost postmodern notion of identity. Why might this more complex notion of identity be useful? Because it is infinitely extensible - I can construct abstract identities, and then construct information feeds that relate to these identities.

For example, my son was writing an essay on the Scottish play. Suppose he could get the following links via Facebook.
  • The ability to follow anyone who is currently writing an essay on the Scottish play.
  • The ability to follow the reading choices of anyone who is currently writing an essay on the Scottish play.
  • The ability to compare notes with anyone in my town, other than in my own school, who is writing an essay on the Scottish play.
Now, it is perfectly possible to formulate these requests without invoking a complex notion of identity. But I happen to think that complex identity provides an elegant way of conceptualizing a very broad range of requirements.
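
As a sketch of what I mean - with invented attribute names, and no claim that Facebook or any feed platform actually works this way - a complex identity can be treated as a predicate over a person plus their current context, and a subscription as a filter over an event stream:

```python
# Hedged sketch of "complex identity": an identity is a predicate over a
# person-plus-context, and subscribing is just filtering an event stream.
from typing import Callable, Dict, Iterable, List

Event = Dict[str, object]          # e.g. {"author": "JP", "topic": "cricket", ...}
Identity = Callable[[Event], bool]

def identity(**required) -> Identity:
    """Build an identity from required attribute values."""
    return lambda event: all(event.get(k) == v for k, v in required.items())

jp_on_social_networking = identity(author="JP", topic="social networking")
jp_except_cricket = lambda e: e.get("author") == "JP" and e.get("topic") != "cricket"

def subscribe(feed: Iterable[Event], who: Identity) -> List[Event]:
    return [e for e in feed if who(e)]

feed = [
    {"author": "JP", "topic": "cricket", "title": "A fine innings"},
    {"author": "JP", "topic": "social networking", "title": "Subscribe and follow"},
]
print(subscribe(feed, jp_on_social_networking))
print(subscribe(feed, jp_except_cricket))
```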

So where JP talks about the customer perspective, I propose to talk about the identity perspective. Comments?


And here's a sad footnote. Having found a really good set of posts by Cyril on Business Intelligence, I wanted to subscribe to his blog. But the most recent post on his blog was from his son, reporting Cyril's death in a freak accident. This kind of closure is unusual - people often die and leave identities scattered about the Internet. One ex-colleague of mine died last year: her profile remains frozen on Linked-In, as if she were still available to be contacted. But what does one do?

If I'm subscribed to someone's blog, or following someone on Twitter, and the posts/tweets suddenly stop, this could mean that the person is dead, but it could also mean that something else has happened in his/her life, or that s/he has decided to stop using this channel, or maybe they've just forgotten their password and can't get back into that account. (I know a few people who have created a second identity on Linked-In, because they can't access the first one any more.) So there is a sense in which the person I was following no longer exists, and has gone onto better things, or a better channel. The medium, as I think someone once said, is the message.

Sunday, June 15, 2008

Pay as you drive 2

Norwich Union has suspended its Pay-As-You-Drive insurance scheme [BBC News, 14 June 2008], announced here two years ago [Pay As You Drive]. I am disappointed at this news, because PAYD was my favourite example of differentiated pricing, using telematic information about driving patterns to determine the insurance premium paid by a driver.

This is not the end of differentiated pricing of course, and may not even be the end of PAYD. According to the BBC, there is one other insurance company in the UK (MoreTh>n) offering PAYD insurance, but all I could find on their website was a product called GreenWheels. This is not a PAYD insurance scheme, but it uses similar technology to provide detailed information that helps the driver reduce fuel and maintenance costs and improve safety.

Why did the Norwich Union scheme fail, and what does this tell us about the viability of such schemes in general? The primary reason is that the scheme failed to attract enough drivers to cover the fixed costs of administering it. But in the longer term, a scheme like this was only going to be viable if the technology became cheaper and more widely available - in other words, compatible telematics devices fitted as standard to factory-model automobiles. And this in turn was only going to happen if PAYD insurance was offered by several major insurance companies (perhaps across Europe rather than just in the UK, with appropriate "roaming charges" as with mobile phones), or if there was some other purpose for the technology and infrastructure - for example, providing information to support eco-friendly driving (GreenWheels). But the big opportunity was PAYD road pricing.

However PAYD road pricing is highly unpopular. There was a major campaign against PAYD road pricing in the UK, orchestrated by the road lobby. The UK government may have hoped that commercial PAYD products, including insurance, might have paved the way for PAYD road pricing, but this now looks less likely than ever.

Meanwhile, we already have some forms of road pricing. London already has a congestion charging scheme, and other cities are likely to follow. But these schemes, along with parking and traffic violations, are currently based on photographing the car plates, and this is of course vulnerable to identity theft. (You just need to put fake plates on your car, and someone else will get the penalty notice.)

At the core of PAYD needs to be a robust and reliable way of identifying vehicles and recording their behaviour, and this raises obvious privacy concerns. All differentiated service depends on an appropriate identity scheme, and such schemes are politically charged whenever governments get involved. So perhaps widespread adoption of differentiated service will have to wait until Identity 2.0 is more mature?

Saturday, April 14, 2007

The Bits Stop Here

One of the drivers for SOA, in both the commercial and public sectors, is to extend and enrich the opportunities to provide services to customers/citizens over the internet.

But the more reliance we place on electronic identity, the more important it seems to be to link this back to some face-to-face identification by a trusted authority. And these processes are getting more tedious. Perhaps rightly so, as identity theft becomes ever easier and more prevalent.

For example, before I could open a savings account for my son recently, I needed a lengthy interview with a bank clerk, who apparently needed to take photocopies of my passport and utility bills. This routine is called 'Know Your Customer'.

[Wikipedia: Know Your Customer]

It's not good enough for the bank clerk merely to see these documents. A paper archive is needed for "compliance" - in other words, providing retrospective evidence that I haven't tricked or bribed the bank clerk to overlook some missing document. Is this because the bank doesn't entirely trust its own employees?

But the bank does trust the paperwork from other organizations with which it has (as far as I know) zero electronic interoperability (the passport authority and the utility companies). That's nice.

Until now, UK citizens have been able to apply for passports remotely, but the Identity and Passport Service is going to introduce face-to-face interviews. At which I guess we are going to produce copies of bank statements and utility bills.

[BBC News: Interviews for passports 'vital', Robin Wilton: Face-to-face interviews for passport candidates, Tomorrow's Fish-and-Chip Paper: And talking of the Identity and Passport Service ...]

Meanwhile, the utility companies are trying to back out of this role in the network of trust, by producing electronic bills instead of paper ones. These are useless for identification purposes, because they can be too easily forged by amateurs. (Forging old fashioned utility bills does require a tiny amount of expertise.)

So there seem to be some infinite loops in the network of trust, with some pretty obvious vulnerabilities yielding countless opportunities for real crooks.

I was reminded of this when I saw the problems faced by Tim Bray in getting a new Canadian passport. He spent nine hours waiting in line.

[Tim Bray: Passport Hell, Emergent Chaos: How Long to Be Identified]

Maybe electronic identity (complete with biometrics and RFID) is going to save you a little time for each transaction, but if it takes that long to get/issue the credentials in the first place, then there is some catching up to do.

As it happens, there are some pretty bright people in the IT industry working on exactly this problem, and some pretty neat solutions emerging. But the managers and politicians running the organizations that actually handle identity on a daily basis (leaving sackloads of unshredded personal data on the sidewalk, losing laptops on a regular basis, that kind of thing) don't seem to have a clue about this, don't seem to realise that they are just making things worse.

Like I said, there is some catching up to do. SOA (with Identity 2.0) has the potential to solve a lot of problems, but the first step is for the people who are causing the problems in the first place to acknowledge that they need help.

Otherwise all these cool solutions will just remain interesting talking points for bloggers.

Monday, January 15, 2007

Information Sharing and Joined-Up Services 2

My colleague David Sprott has just posted a critique (Big Brother Database Dinosaur) of the latest UK Government proposals [Note 1] for putting citizen data into a large central database.

As many commentators have pointed out [Note 2], a large central database of this kind would have to be built to extremely high standards of data quality and data protection. Given the recent history of public sector IT, it is hard to be confident that such standards would be achieved or maintained. There is also the question of liability and possible compensation - for example if a citizen suffered financial or other loss as a result of incorrect data.

But in any case, as David points out from an SOA perspective, the proposal is architecturally unsound and technologically obsolescent. Robin Wilton (Sun Microsystems) comes to a similar conclusion from the perspective of federated identity.

Government ministers are busily backtracking on the "Big Brother" elements of the proposal [Note 3], but the policy paper confirms some of the details [Note 4].

David's comments refer mainly to the proposed consolidation of citizen information across various public sector agencies within the UK. But there is another information-sharing problem in the news at present - the fact that the UK criminal records database does not include tens of thousands of crimes committed by UK citizens in other countries. [Note 5]

Part of the difficulty seems to be in verifying the identities associated with these records. Information sharing requires some level of interoperability, and this includes minimum standards of identification. There are some serious issues here, including semantics, which can never be resolved merely by collecting large amounts of data into one place.

The problem of information sharing within one country is really no different from the problems of information sharing between countries. But at least in the latter case there is nobody saying we can solve all the problems by building a single international database. At least I hope not.

As I said on this blog in 2003 [Note 6], we need to innovate new mechanisms to manage information sharing. This is one of the opportunities and challenges for SOA in delivering joined-up services in a proper manner. Then centralization becomes irrelevant.

Note 1: BBC News January 14th 2007

Note 2: Fish & Chip Papers: Government uber-databases

Note 3: BBC News January 15th 2007. See also Fish & Chip Papers: Data sharing does not a Big Brother make.

Note 4: Daily Telegraph Microchips for mentally ill planned in shake-up.

Note 5: According to ACPO, some 27,500 case files were left in desk files at the Home Office instead of being properly examined and entered into the criminal records database. [BBC News]

Note 6: See my post from 2003 on Information Sharing and Joined-Up Services.

Monday, November 20, 2006

Service-oriented security 3

In a post called Preventing Identity Theft, venture capitalist David Cowan explains (referencing Kerckhoffs' principle via Bruce Schneier) why he regards protecting secrets as a lost cause. Instead of preventing people finding out your social security number, concentrate on preventing people abusing your social security number. Cowan enthuses about one of the companies in this space, in which he has invested.

Kerckhoffs' principle is that security should not depend on secrecy, apart from the key. A social security number is not a key - at least not in the sense understood by cryptographers. Another formulation of Kerckhoffs' principle is Shannon's Maxim: "the enemy knows the system".

Gunnar Peterson uses a chess analogy for service-oriented security. The message is the king, and if you are not using WS-Security (and apparently only 28% of ESBs do) then it's WS-GameOver. This can be seen as another application of Kerckhoffs' principle: you don't make SOA secure by trying to obscure your web services - this just compromises reuse without actually improving security. You protect the payload not the design.

But what exactly is the payload here - the information or the transaction? By Cowan's argument, it may seem a waste of effort to protect the information. But of course if you are a commercial organization (say), you can't just leak people's private data with the excuse that everyone else is doing it so it's not worth protecting. The point is that you must protect the transaction as well, just in case the information is leaking somewhere else in the network.
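
A toy sketch of what "protecting the transaction" might mean - the keys, record formats and the decision to use an HMAC are all my own assumptions - treats the social security number as a mere identifier, and requires each transaction to carry a proof made with a secret the account holder controls:

```python
# Hedged sketch: the SSN identifies the account, but authorizes nothing on
# its own; each transaction must carry a valid proof over its contents.
import hmac, hashlib, json

account_keys = {"123-45-6789": b"key-held-by-the-account-holder"}   # per-user secret

def sign_transaction(ssn: str, details: dict) -> dict:
    body = json.dumps({"ssn": ssn, **details}, sort_keys=True).encode()
    tag = hmac.new(account_keys[ssn], body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def accept_transaction(txn: dict) -> bool:
    ssn = json.loads(txn["body"])["ssn"]
    expected = hmac.new(account_keys[ssn], txn["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, txn["tag"])

good = sign_transaction("123-45-6789", {"action": "open_account"})
print(accept_transaction(good))                                    # True
forged = {"body": good["body"].replace("open_account", "drain_account"),
          "tag": good["tag"]}
print(accept_transaction(forged))                                   # False
```

Knowing the number alone, as Cowan argues, then gets the attacker nowhere.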

Cowan thinks it is a good idea to provide a diverse set of security policies and mechanisms to the user. (See also his post on Doomsday Hackers and Evildoing Robots.) This supports my own belief in differentiated security.

Update

It is not just commercial organizations that leak private data. See Henry Porter's comments in the Observer (Surveillance is really getting under my skin) about the ease with which the RFID chip on the new UK passport can be cracked, together with the casual unconcern of Government officials. FishNChipPapers comments:
"It is naive to believe ... you can build impregnable systems. Instead our government should be focusing on approaches, such as distributed, federated databases, lack of a common identifier to link into those databases etc to mitigate the very real risks."
Meanwhile, in his response to David Cowan, Chris Walsh asks whether the problem (together with the value of Cowan's investment) might be eliminated by legislation, "with a stroke of the pen". But I think Chris has answered this question himself by quoting Gerry Goffin and Carole King in the title of his post - "It's Too Late Baby".

Wikipedia: Differentiated security, Kerckhoffs' principle

Wednesday, March 22, 2006

Handling Uncertainty

Eighth post in seven days. As regular readers will know, I don't always post this often. But I've got lots of material to share after attending the SPARK workshop, which I hope you find useful and/or thought-provoking. 

One of the most interesting discussions at the SPARK workshop was about business strategy for SOA, with some great contributions from Steve Davis of Disney Studios. The first insight was on handling uncertainty, using Identity Management as an example.


Who is going to dominate the identity space? There are several plausible scenarios.
  • Credit card companies (Visa)
  • Government agencies (national ID cards)
  • Telecoms companies
  • No single dominant position
So what is the appropriate strategic response to this uncertainty?
  1. Do nothing until the situation resolves itself.
  2. Pick a winning horse. Maybe try to influence the outcome.
  3. Develop a strategy that is independent of the outcome.
SOA provides useful support for the third approach. If identity is a significant source of uncertainty (which it is), then abstract away from any identity-related assumptions. Strip all identity from the applications, and prevent developers creating any local identity processing. Establish a standard set of identity-related services, separate from the basic applications. We may then be able to define two different business cases for building and using these shared services.
  1. Tactical efficiency. Economies of scale / scope.
  2. Strategic flexibility. Ability to accommodate any of the possible scenarios.
Steve argued that four scenarios was a good number - sufficient to provide reasonable variety, while small enough to be meaningful to management.
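
A sketch of what the third option might look like in code - the interface and the provider names are illustrative, not a real identity standard - keeps application logic independent of whichever identity provider eventually dominates:

```python
# Hedged sketch of strategic flexibility: applications depend only on an
# abstract identity service, so any eventual winner can be plugged in
# behind the same interface. Providers shown here are illustrative.
from abc import ABC, abstractmethod

class IdentityService(ABC):
    @abstractmethod
    def authenticate(self, credentials: dict) -> str:
        """Return an opaque subject identifier, or raise on failure."""

class CreditCardIdentity(IdentityService):
    def authenticate(self, credentials: dict) -> str:
        return "subject:card-" + credentials["card_number"][-4:]

class NationalIdIdentity(IdentityService):
    def authenticate(self, credentials: dict) -> str:
        return "subject:id-" + credentials["id_card_number"]

def place_order(identity_service: IdentityService, credentials: dict, item: str):
    # The application never parses credentials itself.
    subject = identity_service.authenticate(credentials)
    print(f"order for {item} placed by {subject}")

# The application code above is unchanged whichever scenario wins out.
place_order(CreditCardIdentity(), {"card_number": "4111111111111111"}, "dvd")
place_order(NationalIdIdentity(), {"id_card_number": "AB123456"}, "dvd")
```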

Tuesday, February 07, 2006

Context and Purpose

Adam Shostack's latest post reminds us that It Depends What The Meaning of "Credit Report" Is.

For what purpose were social security numbers originally created - was it perhaps something to do with social security?

Social security numbers have been widely reused and repurposed as general personal identifiers, especially in the context of financial services. For this reason, many people think of identity theft as something carried out for the purposes of financial fraud.

But someone called Pablo is apparently using Margaret's social security number for an entirely different purpose - to pose as a legal migrant. This interferes (not surprisingly) with Margaret's ability to claim unemployment benefit.

Any piece of data - and especially an identifier - changes its meaning when it is used for a different purpose in a different context. This is of course nothing new - but the opportunities to repurpose data are hugely amplified by the latest service-oriented technologies, including XML and web services.

This story should remind us that we need to be purpose-agnostic, not just when we are designing service-oriented data systems, but also when we are thinking of security threats against such systems.

See also

Purpose-Agnostic (July 2005)
Collaboration and Context (January 2006)
Context and Presence (Category)

Tuesday, December 13, 2005

Data Ownership and Trust

This month, I've been looking at some complicated questions of data provenance, data protection and copyright, prompted by a tricky but fascinating client problem - how to convert a legacy archive into a network of information services. (Among other things, this involves dividing material according to the ownership of the content.)

So I was particularly interested to see the following three separate items appear in my blogreader today.


1. Identity Theft and Brand Damage

A UK charity had its donor list stolen by a hacking gang, which then proceeded to beg funds from the same donors. Source: Silicon.com via Emergent Chaos

This is being described as a security breach.


2. Software as a Service

"If information about you is stored on your own computer, it's generally not available to others unless they are able to hack your machine or serve legal process on you. In contrast, if information about you is stored on Google's computers, the law generally treats it as Google's, not yours." Cindy Cohn via Tecosystems

This is being described as a privacy issue.


3. Platforms and Stacks

Alexa (part of Amazon) is exposing its index for commercial reuse, via a series of web services. Source: John Battelle via Simon Bisson

This is being described as a ground-breaking innovation.


There are undoubtedly new business risks that emerge whenever we make a significant change in platform. (That's not to say we shouldn't change, merely that we need to do it with our eyes open. As Stephen O'Grady puts it, "the point here is not to be alarmist, but rather to build awareness".) The new technologies of interaction carry the potential of new forms of sociotechnical intimacy, which may take a little getting used to.

Most importantly, sociotechnical shifts like these may cause us to rethink whether we really own the data (or knowledge) we thought we owned. If an email platform can use email content to target advertising, if a communication platform can analyse message traffic to identify friendship clusters, what else is fair game?

Ultimately this comes down to an important strategic choice. Do we want intimate relationships with intelligent service providers, who can interpret (and customize) both content and context to provide deeper service value? Or do we want arms-length relationships with service providers that don't know us from Adam? Where does the platform stop and the true service begin?