Retailers have long used fragrances to shape the in-store customer experience. See for example Air/Aroma.
So perhaps we can use smell to alert consumers to dodgy websites? An artist and graphic designer, Leanne Wijnsma, has built what is basically an air-defreshener: a hexagonal resin block with a perfume reservoir inside, which connects over Wi-Fi to your computer. When it notices a possible data leak (like the user connecting to an unsecured Wi-Fi network, or browsing a webpage over an unencrypted connection) — puff! It releases the smell of data.
James Vincent, What does a data leak smell like? This little device lets you find out (Verge, 31 Aug 2017)
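For the curious, the detection side of such a device could be as simple as watching which URLs the browser visits. Here is a minimal sketch in Python; the device's address and trigger endpoint are my invention, as the real product's API isn't documented in the article.

```python
from urllib.parse import urlparse
import urllib.request

# Hypothetical local address for the scent device; this endpoint is invented.
DEVICE_TRIGGER_URL = "http://192.168.1.50/puff"

def check_and_alert(visited_url: str) -> bool:
    """Trigger the device if the page is served over an unencrypted connection."""
    if urlparse(visited_url).scheme != "https":
        try:
            urllib.request.urlopen(DEVICE_TRIGGER_URL, timeout=2)  # release the smell of data
        except OSError:
            pass  # no device on the network; the check itself still stands
        return True
    return False

check_and_alert("http://example.com/login")   # plain HTTP: puff
check_and_alert("https://example.com/login")  # encrypted: no puff
```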
That's all very well, but it only sniffs out the most obvious risks. If you want to smell the actual data leak, you'd need a device that released a data leak fragrance when (or perhaps I should say whenever) your employer or favourite online retailer is hacked. Or maybe a device that sniffed around a corporate website looking for vulnerabilities ...
I'm sure my regular readers don't need me to spell out the flaws in that idea.
Related posts
Pax Technica - On Risk and Security (November 2017)
UK Retail Data Breaches (December 2017)
Saturday, December 02, 2017
UK Retail Data Breaches
Some people talk as if data protection and security must be fixed before May 2018 because of GDPR. Wrong. Data protection and security must be fixed now.
Morrisons (2014)
The High Court has just found Morrisons to be liable for a leak of employee data by a disaffected employee in 2014. (The perpetrator got eight years in jail.)
http://www.theregister.co.uk/2017/12/01/morrisons_data_leak_ruling/
http://www.bbc.co.uk/news/uk-england-42193502
Sports Direct (2016)
A hacker obtained employee details in September 2016, but Sports Direct failed to communicate the breach to the affected employees.
https://www.theregister.co.uk/2017/02/08/sports_direct_fails_to_inform_staff_over_hack_and_data_breach/
CEX (2017)
Second-hand gadget and video games retailer Cex has said up to two million customers have had their data stolen in an online breach.
http://www.bbc.co.uk/news/technology-41095162
https://uk.webuy.com/guidance/
Zomato (2017)
Up to 17 million users were affected by a data breach at the restaurant search platform Zomato.
https://www.infosecurity-magazine.com/news/zomato-breach-exposes-17-million/
https://www.zomato.com/blog/security-notice
Tesco Bank (2016)
Cyber thieves stole £2.5m from Tesco Bank customer accounts.
https://www.theguardian.com/business/2016/nov/08/tesco-bank-cyber-thieves-25m
https://www.theregister.co.uk/2016/11/10/tesco_bank_breach_analysis/
https://www.itproportal.com/features/lessons-from-the-tesco-bank-hack/
Related posts
The Smell of Data (December 2017)
Labels:
data protection,
GDPR,
retail,
risk-trust-security,
security
Tuesday, June 27, 2017
Digital Disruption and Consumer Trust - Resolving the Challenge of GDPR
Presentation given to the "GDPR Making it Real" workshop organized by DAMA UK and BCS DMSG, 12 June 2017.
The presentation refers to two milestones. The second milestone is 25th May 2018, the date by which companies will need to comply fully with the new data protection regulation. The first milestone is the agreement of a clear and costed plan for reaching the second. Some organizations are now getting close to the first milestone, while others still have little idea how much effort and resource will be required, or how this could affect their business. Good luck with that. Let me know if I can help.
Labels:
data protection,
disruption,
GDPR,
privacy,
risk-trust-security,
trust
Sunday, August 07, 2016
Why does my bank need more personal data?
I recently went into a High Street branch of my bank and moved a bit of money between accounts. I could have done more, but I didn't have any additional forms of identification with me.
At the end, the cashier asked me for my nationality. British, as it happens. Why do you want to know? The cashier explained that this enabled a security control: if I ever bring my passport into a branch as a form of identification, the system can check that my passport matches my declared nationality.
Really? Really? If this is really a security measure, it's a pretty feeble one. Does my bank imagine I'm going to say I'm British and then produce a North Korean passport? Like a James Bond film?
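To make the feebleness concrete, the control as described amounts to little more than a string comparison. A hypothetical sketch (I have no knowledge of the bank's actual systems):

```python
def nationality_check(declared: str, passport_nationality: str) -> bool:
    # The control as described at the counter: does the passport's
    # nationality match what the customer declared earlier? An impostor
    # holding a matching passport passes trivially.
    return declared.strip().lower() == passport_nationality.strip().lower()

print(nationality_check("British", "British"))       # True: control satisfied
print(nationality_check("British", "North Korean"))  # False: the Bond-villain case
```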
After explaining how the bank would use my nationality data, she asked for my National Insurance number. I declined, choosing not to quiz her any further, and left the branch planning to write a stiff letter to the head of data protection at the bank's head office.
As a data expert, I am always a little suspicious of corporate motives for data collection. So the thought did occur to me that my bank might be planning to use my personal data for some purpose other than that stated.
Of course, my bank is perfectly entitled to collect data for marketing purposes, with my consent. But in this case, I was explicitly told that the data were being collected for a very narrowly defined security purpose.
So there are two possibilities. Either my bank doesn't understand security, or it doesn't understand data protection. (Of course there will be individuals who understand these things, but the bank as an organization appears to have failed to embed this understanding into its systems and working practices.) I shall be happy to provide advice and guidance on these topics.
Labels:
data protection,
finance sector,
privacy,
risk-trust-security,
security
Sunday, October 25, 2009
Towards an Architecture of Privacy
@futureidentity (Robin Wilton) posted some interesting ideas about Identity versus attributes on his blog:
"For an awful lot of service access decisions, it's not actually important to know who the service requester is - it's usually just important to know some particular thing about them. Here are a couple of examples:
- If someone wants to buy a drink in a bar, it's not important who they are, what's important is whether they are of legal age;
- If someone needs a blood transfusion, it's more important to know their blood type than their identity."
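To make the distinction concrete, here is a minimal sketch of attribute-based authorization in Python. Names and rules are invented for illustration, and real blood compatibility is richer than an exact match:

```python
from dataclasses import dataclass

@dataclass
class Requester:
    name: str        # identity: present, but never consulted below
    age: int
    blood_type: str

LEGAL_DRINKING_AGE = 18

def may_buy_drink(r: Requester) -> bool:
    # Attribute-based decision: only the age attribute matters.
    return r.age >= LEGAL_DRINKING_AGE

def may_receive(r: Requester, donated_type: str) -> bool:
    # Again, an attribute (blood type) drives the decision, not identity.
    return r.blood_type == donated_type

alice = Requester(name="Alice", age=25, blood_type="O-")
print(may_buy_drink(alice))      # True, without knowing who Alice is
print(may_receive(alice, "O-"))  # True
```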
However, there is an important difference between Robin's two examples. Blood transfusion is a transaction with longer-lasting consequences. If a batch of blood is contaminated, there is a legitimate regulatory requirement to trace forwards (who received this blood) and backwards (who donated this blood), in order to limit the consequences of this contamination event and to prevent further occurrences.
There is a strong demand for increasing traceability. In manufacturing, we want to trace every manufactured item to a specific batch, and associate each batch with specific raw materials and employees. In food production, we want to trace every portion back to the farm, so that salmonella outbreaks can be blamed on the farmer. See Information Sharing and Joined-Up Services 1, 2.
Transactions that were previously regarded as isolated are now increasingly joined up. The eggs that go into the custard tart you buy in the works canteen used to be anonymous, but in future they won't be. See Labelling as Service 1, 2.
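A toy sketch of the data structures this kind of traceability implies, using the custard tart example (all identifiers invented):

```python
# Invented identifiers: each item links to a batch, each batch to its inputs.
item_to_batch = {"tart-001": "batch-42", "tart-002": "batch-42"}
batch_inputs = {"batch-42": ["egg-farm-7", "dairy-3"]}

def trace_backwards(item: str) -> list:
    """From a single item back to the inputs that went into its batch."""
    return batch_inputs.get(item_to_batch[item], [])

def trace_forwards(input_id: str) -> list:
    """From one input out to every item it may have touched."""
    batches = {b for b, inputs in batch_inputs.items() if input_id in inputs}
    return [i for i, b in item_to_batch.items() if b in batches]

print(trace_backwards("tart-001"))   # ['egg-farm-7', 'dairy-3']
print(trace_forwards("egg-farm-7"))  # ['tart-001', 'tart-002']
```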
There is also a strong demand for increased auditability. So it is not enough for the barman to check the drinker's age; the barman must keep a permanent record of having diligently carried out the check. It is apparently not enough for the hotel or bank clerk to look at my passport; they must retain a photocopy of it in order to remove any suspicion of collusion. (The bank not only mistrusts its customers, it also mistrusts its employees.)
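A minimal sketch of what that record-keeping adds on top of the check itself (log format and field names invented):

```python
import json
import time

AUDIT_LOG = "age_checks.log"  # hypothetical append-only audit trail

def record_age_check(clerk_id: str, evidence: str, passed: bool) -> None:
    # Auditability: not just making the check, but keeping a durable
    # record that the check was diligently made, and by whom.
    entry = {"ts": time.time(), "clerk": clerk_id,
             "evidence": evidence, "passed": passed}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_age_check("barman-07", "driving licence", passed=True)
```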
There is a large (and growing) class of situations where so-called joined-up thinking seems to require the negation of privacy. I am certainly not saying that this reasoning should always trump the needs of privacy. But privacy campaigners need to understand that all transactions belong within some system of systems, and that this is the context for the forces they are battling, rather than pretending that transactions can be treated as purely isolated events. The point is that authorization is not an isolated event but is embedded in a larger system, and it is this larger system that apparently requires greater disclosure and retention.
@j4ngis asks how long the chains of traceability should be. What "length" of traceability is sound and meaningful? How do we connect all these traces, backward and forward along the "chain"? And for how long should records be kept? For example:
- Should we also know the batch number for the food that was given to the chicken that laid the egg you included in the cake?
- Do we have to know the identity of the blood donor after six months? 10 years? 100 years?
Robin clearly supposes that attribute-based authorization is a "Good Thing". I am sympathetic to this view, but I don't know how this view can stand up against the kind of sustained attack from a certain flavour of joined-up systems thinking that can almost always postulate the possibility (however faint) of saving lives or protecting children or catching criminals, if only we can retain everything and trace everything.
For my part, I have a vague desire for anonymity and privacy, a vague sense of the harm that might come to me as a result of breaches to my privacy, and a surge of annoyance when I am required to provide all sorts of personal data for what I see as unreasonable purposes, but I cannot base an architecture on any of these feelings.
Traditional arguments for data protection may seem to be merely rearguard resistance to integrated and joined-up systems. Traditional architectures for data protection look increasingly obsolete. But what alternatives are there?
Update May 2016
Traceability requirements for Human Blood and Blood Components are specified in Directive 2005/61/EC of the European Parliament and of the Council of 30 September 2005 (pdf - 63KB)
Robin's point was that blood type was more important than identity, and of course this is true. Donor and recipient identity must be retained for 30 years, but that doesn't mean sharing this information with everybody in the blood supply chain.
Labels:
data protection,
identity,
joined-up,
privacy,
trust
Tuesday, December 13, 2005
Data Ownership and Trust
This month, I've been looking at some complicated questions of data provenance, data protection and copyright, prompted by a tricky but fascinating client problem - how to convert a legacy archive into a network of information services. (Among other things, this involves dividing material according to the ownership of the content.)
So I was particularly interested to see the following three separate items appear in my blogreader today.
1. Identity Theft and Brand Damage
A UK charity had its donor list stolen by a hacking gang, which then proceeded to beg funds from the same donors. Source: Silicon.com via Emergent Chaos
This is being described as a security breach.
2. Software as a Service
"If information about you is stored on your own computer, it's generally not available to others unless they are able to hack your machine or serve legal process on you. In contrast, if information about you is stored on Google's computers, the law generally treats it as Google's, not yours." Cindy Cohn via Tecosystems
This is being described as a privacy issue.
3. Platforms and Stacks
Alexa (part of Amazon) is exposing its index for commercial reuse, via a series of web services. Source: John Battelle via Simon Bisson
This is being described as a ground-breaking innovation.
There are undoubtedly new business risks that emerge whenever we make a significant change in platform. (That's not to say we shouldn't change, merely that we need to do it with our eyes open. As Stephen O'Grady puts it, "the point here is not to be alarmist, but rather to build awareness".) The new technologies of interaction carry the potential of new forms of sociotechnical intimacy, which may take a little getting used to.
Most importantly, sociotechnical shifts like these may cause us to rethink whether we really own the data (or knowledge) we thought we owned. If an email platform can use email content to target advertising, if a communication platform can analyse message traffic to identify friendship clusters, what else is fair game?
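To see how little machinery such inferences require, here is a toy sketch using message metadata alone (data invented; even the crudest grouping already yields clusters):

```python
import networkx as nx

# Invented message log: (sender, recipient) pairs the platform already holds.
messages = [("alice", "bob"), ("bob", "alice"), ("alice", "carol"),
            ("dave", "erin"), ("erin", "dave")]

G = nx.Graph()
G.add_edges_from(messages)

# Connected components as crude "friendship clusters", from traffic alone.
print(list(nx.connected_components(G)))
# e.g. [{'alice', 'bob', 'carol'}, {'dave', 'erin'}]
```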
Ultimately this comes down to an important strategic choice. Do we want intimate relationships with intelligent service providers, who can interpret (and customize) both content and context to provide deeper service value? Or do we want arms-length relationships with service providers that don't know us from Adam? Where does the platform stop and the true service begin?
Labels:
asymmetry,
CRM,
data protection,
identity,
platform,
privacy,
risk-trust-security,
SaaS,
security,
trust