Showing posts with label trust. Show all posts

Saturday, October 16, 2021

Walking Wounded

Let us suppose we can divide the world into those who trust service companies to treat their customers fairly, and those who assume that service companies will be looking to exploit any customer weakness or lapse of attention.

For example, some loyal customers renew without question, even though the cost creeps up from one year to the next. (This is known as price walking.) Other customers switch service providers frequently to chase the best deal. (This is known as churn. B2C businesses generally regard churn as a Bad Thing when their own customers do it, but not so bad when they can steal their competitors' customers.)

Price walking is a particular concern for the insurance business. The UK Financial Conduct Authority (FCA) has recently issued new measures to protect customers from price walking.

Duncan Minty, an insurance industry insider who blogs on Ethics and Insurance, believes that claims optimization (which he calls Settlement Walking) raises similar ethical issues. This is where the insurance company tries to get away with a lower claim settlement, especially with those customers who are most likely to accept and least likely to complain. He cites a Bank of England report on machine learning, which refers among other things to propensity modelling. In other words, adjusting how you treat a customer according to how you calculate they will respond.
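To make the mechanism concrete, here is a minimal sketch of how settlement walking might use a propensity model. Everything here (the features, the weights, the discount) is invented for illustration; it is not taken from any real insurer's practice or from the Bank of England report.

```python
# Illustrative sketch of propensity modelling in claim settlement.
# Features, weights, and thresholds are invented; a real insurer
# would fit a model to historical complaint and switching data.
import math

def complaint_propensity(features: dict) -> float:
    """Estimate the probability that a customer will dispute a low offer."""
    # Hypothetical logistic model: a history of complaints and of
    # switching makes a customer more likely to push back.
    z = (-1.0
         + 1.5 * features["past_complaints"]
         + 0.8 * features["switched_recently"]
         - 0.5 * features["years_with_insurer"] / 10)
    return 1 / (1 + math.exp(-z))

def settlement_offer(fair_value: float, features: dict) -> float:
    """Settlement walking: shave the offer for customers predicted to accept."""
    p = complaint_propensity(features)
    discount = 0.15 * (1 - p)   # quieter customers get lower offers
    return round(fair_value * (1 - discount), 2)

passive = {"past_complaints": 0, "switched_recently": 0, "years_with_insurer": 20}
vocal = {"past_complaints": 3, "switched_recently": 1, "years_with_insurer": 2}
print(settlement_offer(1000.0, passive) < settlement_offer(1000.0, vocal))  # True
```

Note that the loyal, passive customer gets the worse offer — which is precisely the ethical problem Minty is pointing at.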

My work on data-driven personalization includes ethics as well as practical considerations. However, there is always the potential for asymmetry between service providers and consumers. And as Tim Harford points out, this kind of exploitation long predates the emergence of algorithms and machine learning.

 

Update

In the few days since I posted this, I've seen a couple of news items about autorenewals. There seems to be a trend of increasing vigilance by various regulators in different countries to protect consumers.

Firstly, the UK's Competition and Markets Authority (CMA) has unveiled compliance principles to curb some of the sharper auto-renewal practices of antivirus software firms in the UK. (via The Register)

Secondly, India has introduced new banking rules for recurring payments. Among other things, these create challenges for free trials and introductory offers. (via TechCrunch)


Machine Learning in UK financial services (Bank of England / FCA, October 2019)

FCA confirms measures to protect customers from the loyalty penalty in home and motor insurance markets (FCA, 28 May 2021)

Tim Harford, Exploitative algorithms are using tricks as old as haggling at the bazaar (2 November 2018)

Joi Ito, Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination (Wired Magazine, 5 February 2019)

Duncan Minty, Is settlement walking now part of UK insurance? (18 March 2021), Why personalisation will erode the competitiveness of premiums (7 September 2021)

Manish Singh, Tech giants brace for impact in India as new payments rule goes into effect (TechCrunch, 1 October 2021)

Richard Speed, UK competition watchdog unveils principles to make a kinder antivirus business (The Register, 19 October 2021)

Related posts: The Support Economy (January 2005), The Price of Everything (May 2017), Insurance and the Veil of Ignorance (February 2019)

Related presentations: Boundaryless Customer Engagement (October 2015), Real-Time Personalization (December 2015)

Wednesday, December 04, 2019

Data Strategy - Assurance

This is one of a series of posts looking at the four key dimensions of data and information that must be addressed in a data strategy - reach, richness, agility and assurance.



In previous posts, I looked at Reach (the range of data sources and destinations), Richness (the complexity of data) and Agility (the speed and flexibility of response to new opportunities and changing requirements). Assurance is about Trust.

In 2002, Microsoft launched its Trustworthy Computing Initiative, which covered security, privacy, reliability and business integrity. If we look specifically at data, this means two things.
  1. Trustworthy data - the data are reliable and accurate.
  2. Trustworthy data management - the processor is a reliable and responsible custodian of the data, especially in regard to privacy and security.
Let's start by looking at trustworthy data. To understand why this is important (both in general and specifically to your organization), we can look at the behaviours that emerge in its absence. One very common symptom is the proliferation of local information. If decision-makers and customer-facing staff across the organization don't trust the corporate databases to be complete, up-to-date or sufficiently detailed, they will build private spreadsheets, to give them what they hope will be a closer version of the truth.

This is of course a data assurance nightmare - the data are out of control, and it may be easier for hackers to get the data out than it is for legitimate users. And good luck handling any data subject access request!

But in most organizations, you can't eliminate this behaviour simply by telling people they mustn't. If your data strategy is to address this issue properly, you need to look at the causes of the behaviour, and understand what level of reliability and accessibility you must provide before people will be willing to rely on your version of the truth rather than theirs.

DalleMule and Davenport have distinguished two types of data strategy, which they call offensive and defensive. Offensive strategies are primarily concerned with exploiting data for competitive advantage, while defensive strategies are primarily concerned with data governance, privacy and security, and regulatory compliance.

As a rough approximation then, assurance can provide a defensive counterbalance to the offensive opportunities offered by reach, richness and agility. But it's never quite as simple as that. A defensive data quality regime might install strict data validation, to prevent incomplete or inconsistent data from reaching the database. In contrast, an offensive data quality regime might install strict labelling, with provenance data and confidence ratings, to allow incomplete records to be properly managed, enriched if possible, and appropriately used. This is the basis for the NetCentric strategy of Post Before Processing.
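The difference between the two regimes can be sketched in a few lines. The field names, confidence score, and provenance label below are assumptions made for illustration:

```python
# Two hypothetical data-quality regimes for an incoming bank-detail record.
REQUIRED = ("name", "account_number", "sort_code")

def defensive_load(record: dict) -> dict:
    """Strict validation: reject anything incomplete before it reaches the database."""
    missing = [f for f in REQUIRED if not record.get(f)]
    if missing:
        raise ValueError(f"rejected, missing fields: {missing}")
    return record

def offensive_load(record: dict, source: str) -> dict:
    """Post before processing: accept the record, but label it with
    provenance and a confidence rating so it can be enriched later."""
    missing = [f for f in REQUIRED if not record.get(f)]
    return {
        **record,
        "_provenance": source,
        "_missing": missing,
        "_confidence": 1 - len(missing) / len(REQUIRED),
    }

partial = {"name": "A. Smith", "account_number": "12345678"}
labelled = offensive_load(partial, source="branch-upload")
print(labelled["_missing"])  # the sort code is flagged, not silently dropped
```

The defensive loader would reject this record outright; the offensive loader keeps it, flagged, so an aggregate query can still count it and an enrichment process can chase the missing sort code later.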

Because of course there isn't a single view of data quality. If you want to process a single financial transaction, you obviously need to have a complete, correct and confirmed set of bank details. But if you want aggregated information about upcoming financial transactions, you don't want any large transactions to be omitted from the total because of a few missing attributes. And if you are trying to learn something about your customers by running a survey, it's probably not a good idea to limit yourself to those customers who had the patience and loyalty to answer all the questions.

Besides data quality, your data strategy will need to have a convincing story about privacy and security. This may include certification (e.g. ISO 27001) as well as regulation (GDPR etc.) You will need to have proper processes in place for identifying risks, and ensuring that relevant data projects follow privacy-by-design and security-by-design principles. You may also need to look at the commercial and contractual relationships governing data sharing with other organizations.

All of this should add up to establishing trust in your data management - reassuring data subjects, business partners, regulators and other stakeholders that the data are in safe hands. And hopefully this means they will be happy for you to take your offensive data strategy up to the next level.

Next post: Developing Data Strategy



Leandro DalleMule and Thomas H. Davenport, What’s Your Data Strategy? (HBR, May–June 2017)

Richard Veryard, Microsoft's Trustworthy Computing (CBDI Journal, March 2003)

Wikipedia: Trustworthy Computing

Saturday, June 15, 2019

The Road Less Travelled

Are algorithms trustworthy, asks @NizanGP.
"Many of us routinely - and even blindly - rely on the advice of algorithms in all aspects of our lives, from choosing the fastest route to the airport to deciding how to invest our retirement savings. But should we trust them as much as we do?"

Dr Packin's main point is about the fallibility of algorithms, and the excessive confidence people place in them. @AnnCavoukian reinforces this point.


But there is another reason to be wary of the advice of the algorithm, summed up by the question: Whom does the algorithm serve?

Because the algorithm is not working for you alone. There are many people trying to get to the airport, and if they all use the same route they may all miss their flights. If the algorithm is any good, it will be advising different people to use different routes. (Most well-planned cities have more than one route to the airport, to avoid a single point of failure.) So how can you trust the algorithm to give you the fastest route? However much you may be paying for the navigation service (either directly, or bundled into the cost of the car/device), someone else may be paying a lot more. For the road less travelled.
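A toy simulation shows why the algorithm cannot give everyone the "fastest" route. The routes, capacities, and congestion model below are invented for illustration:

```python
# Sketch: spreading drivers across routes so no single road saturates.
# Routes, capacities, and base travel times are invented for illustration.
routes = {
    "motorway": {"base_minutes": 20, "capacity": 100},
    "ring_road": {"base_minutes": 25, "capacity": 80},
    "back_roads": {"base_minutes": 35, "capacity": 40},
}
assigned = {name: 0 for name in routes}

def estimated_time(name: str) -> float:
    """Travel time grows as a route fills up (simple congestion model)."""
    r = routes[name]
    load = assigned[name] / r["capacity"]
    return r["base_minutes"] * (1 + load)

def advise_next_driver() -> str:
    """Send each driver down the route that is currently fastest."""
    best = min(routes, key=estimated_time)
    assigned[best] += 1
    return best

for _ in range(150):
    advise_next_driver()
print(assigned)  # demand ends up spread across all three routes
```

With enough drivers, even the slow back roads get used: some drivers are necessarily advised onto a route that is not the nominally fastest one, which is exactly the point — whose interests decide who gets which route?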

The algorithm-makers may also try to monetize the destinations. If a particular road is used for getting to a sports venue as well as the airport, then the two destinations can be invited to bid to get the "best" routes for their customers - or perhaps for themselves. ("Best" may not mean fastest - it could mean the most predictable. And the venue may be ambivalent about this - the more unpredictable the journey, the more people will arrive early to be on the safe side, spreading the load on the services as well as spending more on parking and refreshments.)

In general, the algorithm is juggling the interests of many different stakeholders, and we may assume that this is designed to optimize the commercial returns to the algorithm-makers.

The same is obviously true of investment advice. The best time to buy a stock is just before everyone else buys, and the best time to sell a stock is just after everyone else buys. Which means that there are massive opportunities for unethical behaviour when advising people where / when to invest their retirement savings, and it would be optimistic to assume that the people programming the algorithms are immune from this temptation, or that regulators are able to protect investors properly.

And that's before we start worrying about the algorithms being manipulated by hostile agents ...

So remember the Weasley Doctrine: "Never trust anything that can think for itself if you can't see where it keeps its brain."



Nizan Geslevich Packin, Why Investors Should Be Wary of Automated Advice (Wall Street Journal, 14 June 2019)

Dozens of drivers get stuck in mud after Google Maps reroutes them into empty field (ABC7 New York, 26 June 2019) HT @jonerp

Related posts: Towards Chatbot Ethics (May 2019), Whom does the technology serve? (May 2019), Robust Against Manipulation (July 2019)


Updated 27 July 2019

Thursday, October 18, 2018

Why Responsibility by Design now?

Excellent article by @riptari, providing some context for Gartner's current position on ethics and privacy.

Gartner has been talking about digital ethics for a while now - for example, it got a brief mention on the Gartner website last year. But now digital ethics and privacy has been elevated to the Top Ten Strategic Trends, along with (surprise, surprise) Blockchain.

Progress of a sort, says @riptari, as people are increasingly concerned about privacy.

The key point is really the strategic obfuscation of issues that people do in fact care an awful lot about, via the selective and non-transparent application of various behind-the-scenes technologies up to now — as engineers have gone about collecting and using people’s data without telling them how, why and what they’re actually doing with it. 
Therefore, the key issue is about the abuse of trust that has been an inherent and seemingly foundational principle of the application of far too much cutting edge technology up to now. Especially, of course, in the adtech sphere. 
And which, as Gartner now notes, is coming home to roost for the industry — via people’s “growing concern” about what’s being done to them via their data. (For “individuals, organisations and governments” you can really just substitute ‘society’ in general.) 
Technology development done in a vacuum with little or no consideration for societal impacts is therefore itself the catalyst for the accelerated concern about digital ethics and privacy that Gartner is here identifying rising into strategic view.

Over the past year or two, some of the major players have declared ethics policies for data and intelligence, including IBM (January 2017), Microsoft (January 2018) and Google (June 2018). @EricNewcomer reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises".

According to the Magic Sorting Hat, high-minded vision can get organizations into the Ravenclaw or Slytherin quadrants (depending on the sincerity of the intention behind the vision). But to get into the Hufflepuff or Gryffindor quadrants, organizations need the ability to execute. So it's not enough for Gartner simply to lecture organizations on the importance of building trust.

Here we go round the prickly pear
Prickly pear prickly pear
Here we go round the prickly pear
At five o'clock in the morning.




Natasha Lomas (@riptari), Gartner picks digital ethics and privacy as a strategic trend for 2019 (TechCrunch, 16 October 2018)

Sony Shetty, Getting Digital Ethics Right (Gartner, 6 June 2017)


Related posts (with further links)

Data and Intelligence Principles from Major Players (June 2018)
Practical Ethics (June 2018)
Responsibility by Design (June 2018)
What is Responsibility by Design (October 2018)

Tuesday, March 20, 2018

Making the World More Open and Connected

Last year, Facebook changed its mission statement, from "Making The World More Open And Connected" to "Bringing The World Closer Together".

As I said in September 2005, interoperability is not just a technical question but a sociotechnical question (involving people, processes and organizations). (Some of us were writing about "open and connected" before Facebook existed.) But geeks often start with the technical interface, or what is sometimes called an API.

For many years, Facebook had an API that allowed developers to snoop on friends' data: this was shut down in April 2015. As Constine reported at the time, this was not just because the API was "kind of shady" but also to "deny developers the ability to build apps ... that could compete with Facebook’s own products". Sandy Parakilas (himself a former Facebook executive) made a similar point (as reported by Paul Lewis): Facebook executives were nervous about the commercial value of data being passed to other companies, and worried that the large app developers could be building their own social graphs.

In other words, the decision was not motivated by concern for user privacy but by the preservation of Facebook's hegemony.

When Tim Berners-Lee first talked about the Giant Global Graph in 2007, it seemed such a good idea. When Facebook launched the Open Graph in 2010, this was billed as "a taste of the future where everything can be more personalized". Like!




Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal, August 2006)

Josh Constine, Facebook Is Shutting Down Its API For Giving Your Friends’ Data To Apps (TechCrunch, 28 April 2015)

Josh Constine and Frederic Lardinois, Everything Facebook Launched At f8 And Why (TechCrunch, 2 May 2014)

John Lanchester, You Are the Product (London Review of Books, 17 August 2017)

Paul Lewis, 'Utterly horrifying': ex-Facebook insider says covert data harvesting was routine (Guardian, 20 March 2018)

Caroline McCarthy, Facebook F8: One graph to rule them all (CNet, 21 April 2010)

Sandy Parakilas, We Can’t Trust Facebook to Regulate Itself (New York Times, 19 November 2017)

Wikipedia: Giant Global Graph, Open API


Related Posts SOA Stupidity (September 2005), Social Networking as Reuse (November 2007), Security is Downstream from Strategy (March 2018), Connectivity Hunger (June 2018)

Tuesday, June 27, 2017

Digital Disruption and Consumer Trust - Resolving the Challenge of GDPR

Presentation given to the "GDPR Making it Real" workshop organized by DAMA UK and BCS DMSG, 12 June 2017.

The presentation refers to two milestones. The second milestone is 25th May 2018, the date that companies will need to comply fully with the new data protection regulations. The first milestone is the agreement of a clear and costed plan to reach the second milestone. Some organizations are now getting close to the first milestone, while others still don't have much idea how much effort and resource will be required, or how this could affect their business. Good luck with that. Let me know if I can help.


Friday, May 05, 2017

The Price of Everything

The relationship between the retailer and the customer can be beset by calculation on both sides. The retailer is trying to extract enough data about the customer to calculate the next best action, while the customer is trying to extract the best deal.

There is nothing new about customers comparing products and prices between neighbouring shops, and merchants selling similar goods can often be found in close proximity in order to attract more customers. (This is especially true for specialist and occasional purchases: in large cities, whole streets or districts may be associated with specific types of shop. London has Denmark Street for musical instruments, Hatton Garden for jewellery, Savile Row for made-to-measure suits, and so on.)

And as Tim Harford points out, exploitative algorithms are using tricks as old as haggling at the bazaar.

But nowadays the villain, apparently, is eCommerce. As a significant share of the retail business migrates from the high street to the Internet, many retailers are concerned about so-called showrooming. It may seem unfair that a customer can spend loads of time in the high street, wasting the time of the shop assistants and shop-soiling the goods, before purchasing the same goods online at a better price. To add insult to injury, some people not only practice showrooming, but then blog about how guilty it makes them feel.

There is a common belief that the Internet can generally undercut the High Street, and there are several reasons why this belief seems to make sense.
  • Internet businesses compete on price rather than service, so the prices must be good.
  • An internet store can provide economies of scale - serving the whole country or region from a single warehouse, instead of needing an outlet in each town.
  • An internet store can offer a much larger range of goods without increasing the cost of inventory - the so-called Long Tail phenomenon.
  • An internet store typically has lower overheads - cheaper premises and fewer staff.
  • An internet business may be run as a start-up, with less dead wood. So it is more agile and less bureaucratic. 
But there are some questionable assumptions here, as well as some counterbalancing concerns.
  • The economic and logistical costs of delivery and return can be significant, especially for low-ticket items. With clothing in particular, customers may order the same item in three different sizes, and then return the ones that don't fit.
  • Investors previously poured money into internet businesses, and the early strategic focus was on growth rather than profit. As internet business become more mature, investors will be looking to see some decent returns on their investment, and margins will be pushed up.
  • And then there is differential pricing ...
One of the key differences between traditional stores and online stores is in pricing. Although high street retailers often drop prices to clear stock - for example, supermarkets have elaborate relabelling systems to mark down groceries before their sell-by date - they do not yet have sophisticated mechanisms for dynamic pricing. An online retailer, by contrast, can change its prices as often as it wishes, and therefore charge you whatever it thinks you will pay. According to Jerry Useem,
The price of the headphones Google recommends may depend on how budget-conscious your web history shows you to be.
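A crude sketch of how such differential pricing might work. All the signals and multipliers below are made up for illustration; no real retailer's logic is implied:

```python
# Hypothetical differential-pricing sketch: quote a price based on what
# the shopper's browsing signals suggest they will pay. All signals and
# multipliers are invented for illustration.
BASE_PRICE = 50.00

def personalised_price(profile: dict) -> float:
    multiplier = 1.0
    if profile.get("device") == "mac":                # assumed less price-sensitive
        multiplier += 0.10
    if profile.get("visited_discount_sites"):         # assumed budget-conscious
        multiplier -= 0.15
    if profile.get("repeat_visits_to_item", 0) >= 3:  # strong purchase intent
        multiplier += 0.05
    return round(BASE_PRICE * multiplier, 2)

bargain_hunter = {"device": "windows", "visited_discount_sites": True}
apple_user = {"device": "mac", "repeat_visits_to_item": 4}
print(personalised_price(bargain_hunter))  # 42.5
print(personalised_price(apple_user))      # 57.5
```

Two shoppers looking at the same item see prices 35% apart, purely on inferred willingness to pay — the Truman Show market described below.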
I heard Ariel Ezrachi talking about this phenomenon at the PowerSwitch conference in Cambridge a few weeks ago. (I have not yet read his new book.)
There is an assumption that the internet is a blessing when it comes to competition. Endless choice. Ability to reduce costs to close to zero. etc ... What you see online has very little to do with the ideas we have of market power, market dynamics, etc. everything is artificial. It looks like a regular market, with apples or fish. But because it’s all monitored, it’s not like that at all. What you see online is not a reflection of the market. You see the Truman Show — a reality designed just for you, a controlled ecosystem. (via Laura James's liveblog)

In his play Lady Windermere's Fan, Wilde offered the following contrast between the cynic and the sentimentalist.
Lord Darlington: What cynics you fellows are!
Cecil Graham: What is a cynic?
Lord Darlington: A man who knows the price of everything and the value of nothing.
Cecil Graham: And a sentimentalist, my dear Darlington, is a man who sees an absurd value in everything, and doesn’t know the market price of any single thing.

According to one of the participants at the PowerSwitch conference, some eCommerce sites quote higher prices for Apple users, based on the idea that they are less price-sensitive and can afford to pay more. In other words, the cynical Internet regards Apple users as sentimentalists.

If there is an alternative to this calculative thinking, it comes down to reestablishing trust. Perhaps then retailers and consumers alike can avoid an artificial choice between cynicism and sentimentalism.


Update (2020) added a link to a new paper by Frederik Borgesius, which looks at some of the legal as well as ethical implications of differential pricing.


Emma Brockes, I found something I like in a store. Is it wrong to buy it online for less? (Guardian, 3 May 2017)

Frederik Zuiderveen Borgesius, Price Discrimination, Algorithmic Decision-making, and European Non-discrimination Law (European Business Law Review, 2019/20)

Ariel Ezrachi and Maurice Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016) - more links via publisher's page

Tim Harford, Exploitative algorithms are using tricks as old as haggling at the bazaar (Financial Times, 5 October 2018)

Laura James, Power Switch - Conference Report (31 March 2017). Further links including video via Power Switch Conference (March 2017)

Joshua Kopstein, Is Amazon Price-Gouging You? (Vocativ, 4 May 2017) via @charlesarthur

Jerry Useem, How Online Shopping Makes Suckers of Us All (Atlantic, May 2017)

Price-bots can collude against consumers (Economist, 6 May 2017)

The Dilemma of Showrooming (Daniels Fund Ethics Initiative, University of New Mexico)


Related posts: Online pricing practices to be regulated? (October 2009), Predictive Showrooming (December 2012), Showrooming and Multi-Sided Markets (December 2012), Showrooming in the Knowledge Economy (December 2012), The Price of Fish (January 2013), Power Switch Conference (March 2017), The Idea of Showrooming (July 2017), Shoshana Zuboff on Surveillance Capitalism (February 2019)


Updated 11 July 2020

Thursday, March 09, 2017

Inspector Sands to Platform Nine and Three Quarters

Last week was not a good one for the platform business. Uber continues to receive bad publicity on multiple fronts, as noted in my post on Uber's Defeat Device and Denial of Service (March 2017). And on Tuesday, a fat-fingered system admin at AWS managed to take out a significant chunk of the largest platform on the planet, seriously degrading online retail in the Northern Virginia (US-EAST-1) Region. According to one estimate, performance at over half of the top internet retailers was hit by 20 percent or more, and some websites were completely down.

What have we learned from this? Yahoo Finance tells us not to worry.
"The good news: Amazon has addressed the issue, and is working to ensure nothing similar happens again. ... Let’s just hope ... that Amazon doesn’t experience any further issues in the near future."

Other commentators are not so optimistic. For Computer Weekly, this incident
"highlights the risk of running critical systems in the public cloud. Even the most sophisticated cloud IT infrastructure is not infallible."

So perhaps one lesson is not to trust platforms. Or at least not to practice wilful blindness when your chosen platform or cloud provider represents a single point of failure.

One of the myths of cloud, according to Aidan Finn,
"is that you get disaster recovery by default from your cloud vendor (such as Microsoft and Amazon). Everything in the cloud is a utility, and every utility has a price. If you want it, you need to pay for it and deploy it, and this includes a scenario in which a data center burns down and you need to recover. If you didn’t design in and deploy a disaster recovery solution, you’re as cooked as the servers in the smoky data center."

Interestingly, Amazon itself was relatively unaffected by Tuesday's problem. This may have been because they split their deployment across multiple geographical zones. However, as Brian Guy points out, there are significant costs involved in multi-region deployment, as well as data protection issues. He also notes that this question is not (yet) addressed by Amazon's architectural guidelines for AWS users, known as the Well-Architected Framework.

Amazon recently added another pillar to the Well-Architected Framework, namely operational excellence. This includes such practices as performing operations with code: in other words, automating operations as much as possible. Did someone say Fat Finger?




Abel Avram, The AWS Well-Architected Framework Adds Operational Excellence (InfoQ, 25 Nov 2016)

Julie Bort, The massive AWS outage hurt 54 of the top 100 internet retailers — but not Amazon (Business Insider, 1 March 2017)

Aidan Finn, How to Avoid an AWS-Style Outage in Azure (Petri, 6 March 2017)

Brian Guy, Analysis: Rethinking cloud architecture after the outage of Amazon Web Services (GeekWire, 5 March 2017)

Daniel Howley, Why you should still trust Amazon Web Services even though it took down the internet (Yahoo Finance, 6 March 2017)

Chris Mellor, Tuesday's AWS S3-izure exposes Amazon-sized internet bottleneck (The Register, 1 March 2017)

Shaun Nichols, Amazon S3-izure cause: Half the web vanished because an AWS bod fat-fingered a command (The Register, 2 March 2017)

Cliff Saran, AWS outage shows vulnerability of cloud disaster recovery (Computer Weekly, 6 March 2017)

Thursday, January 05, 2012

Unruly Google and VPEC-T

Google has been hoist by its own petard: it seems obliged to ban its own browser from its own search engine for infringing its strict rules. Apparently the infringement resulted from some misbehaviour somewhere down the subcontract chain, unknown to Google itself or its prime subcontractor (which with fitting irony is called Unruly Media). A number of blogposts were created to promote Google Chrome, containing direct hotlinks to the Chrome download page. Google has recently penalized a number of other companies for such behaviour, including J C Penney, Forbes and Overstock. See also my 2006 post on BMW Search Requests.

A number of offending posts were discovered because they contained the magic words This post was sponsored by Google, and the Google search engine dutifully delivered a list of webpages containing these words. (This kind of transparency was foreseen by Isaac Asimov in a story called "All the troubles of the world", in which the computer Multivac was unable to conceal its own self-destructive behaviour.)

As a number of search engine analysts have pointed out, there are two problems with the sponsored pages. Besides containing the offending links, they are also pretty thin in terms of content. (Google has recently developed a search filter code-named Panda, which is intended to demote such low-value content, but this filter is extremely costly in computing power and is apparently only run sporadically.) Many of these pages credit Google Chrome for having helped a company in Vermont over the past five years, despite the fact that Google Chrome hasn't been available for that long. None of them explain why Google Chrome might be better than other browsers.

So here we have an interesting interaction between the elements of VPEC-T. 


Value - How is commercial sponsorship reconciled with high-value content? Does this incident expose a conflict of interest inside Google?

Policy - How does Google apply its strict rules to itself?

Events - How was this situation detected (with the aid of Google itself)? Will any future incidents be as easy to detect?

Content - What is the net effect on the content, on which Google's market position depends?

Trust - What kinds of trust have been eroded in this situation? How can trust be restored, and how long will it take?



Sources


Aaron Wall, Google caught buying paid links yet again (SEO Book 2 Jan 2012)

Danny Sullivan, Google’s Jaw-Dropping Sponsored Post Campaign For Chrome (SearchEngineLand 2 Jan 2012)

Charles Arthur, Will Google be forced to ban its own browser from its index? (Guardian 3 Jan 2012) Google shoves Chrome down search rankings after sponsored blog mixup (Guardian 4 Jan 2012)

 

Related post: Towards a VPEC-T analysis of Google (October 2011)

Friday, April 23, 2010

Architect Certification and Trust

@mattdeacon @wendydevolder @karianna @flowchainsensei @gojkoadzic @unclebobmartin

Lots of good comments on Twitter and elsewhere about certification, in various contexts (enterprise architecture, agile, ...).

The purpose of a certificate is to enable you to trust the bearer with something. So we need to understand the nature of trust. In their book Trust and Mistrust, my friends Aidan Ward and John Smith identify four types of trust ...
  • authority
  • network
  • commodity
  • authentic
... and we can apply these four types to the different styles of certification that might be available.

In his attack on the World Agile Qualifications Board, @gojkoadzic quotes the Agile Alliance position on certification: employers should have confidence only in certifications that are skill-based and difficult to achieve. Yet, as Gojko continues, "most of the certificates issued today are very easy to achieve and take only a day or two of work, or even just attending the course".

If a certificate is issued by a reputable professional organization, then the value of the certificate is underwritten by the reputation of the issuing organization, so this counts as authority trust. In my post Is Enterprise Architecture a Profession? I have already stated my view that claims for professional status for enterprise architecture are at best premature, so there is no organization today that has sufficient authority to issue certificates of professional competence. However, if you can acquire a certificate simply by attending a short course and/or memorizing some document (such as TOGAF), then this is a commodity-based form of trust. Basically, such certificates will only be regarded as valuable if just enough people have them. (Which seems to be why some large consultancies have put all their practitioners through TOGAF training.)

Bob Marshall (@flowchainsensei) prefers vouching

Just found http://wevouchfor.org - Should keep me busy vouching (why oh why "certifying???") for capable folks for some time.

which is a form of network trust. If someone receives a lot of vouches from their friends, that could mean either that they are very popular or that they are involved in a lot of reciprocal back-scratching. (This kind of mutual recommendation is easily visible on LinkedIn, where the list of incoming recommendations often exactly matches the list of outgoing recommendations.)

The trouble with all these mechanisms is that they are both one-sided and lacking context. The certificate purports to tell us about a person's strengths (but not weaknesses), in some unspecified or generic arena. This can only go so far in supporting a judgement about a person's qualifications (strengths and weaknesses) for a specific task in a specific context. What if anything would serve as an authentic token of trust?


Aidan Ward and John Smith, Trust and Mistrust - Radical Risk Strategies in Business Relationships. John Wiley, 2003

Tuesday, March 09, 2010

Multiple styles of EA

@tetradian has an interesting post on Big EA, Little EA and Personal EA, based loosely on Patti Anklam's classification of knowledge management.
  • Big KM is about top-down, structured and organizationally distinct “knowledge management”
  • Little KM is about safe-fail experiments embedded in the organizational structure
  • Personal KM is about access to tools and methods to ensure that knowledge, context, bits, fragments, thoughts, ideas are harvestable



As I see it, this classification identifies different styles that may possibly coexist, or perhaps different kinds of knowledge claim that may interact in interesting ways. (I don't like the word "layers" for this kind of classification, because it implies a particular structural pattern, which isn't appropriate here.)

I've used a slightly different division in the trust sphere, which might make sense here as well.
  • Authority EA - this is a kind of top-down command-and-control EA, representing the will-to-power of the enterprise as a whole, and ultimately answerable to the CEO. This is what Tom calls Big EA.
  • Commodity EA - this is where the EA is based on some kind of external product source - such as when the enterprise models are imported wholesale from IBM or SAP. This often resembles Big EA, but has some important differences.
  • Network EA - this is where EA is based on informal and emergent collaboration between people and organizations. Tom calls it Little EA, but the collaborations can be very extended indeed - just think about some of the mashup ecosystems around Google or Twitter.
  • Authentic EA - this is a personally engaged practice - what Tom calls Personal EA.

Once we have agreed that there are different styles, the really interesting question is not identifying and naming the styles, nor even saying that one style is somehow "better" than another, but talking about how the different styles interact, and what the implications are for governance.

Sunday, October 25, 2009

Towards an Architecture of Privacy

@futureidentity (Robin Wilton) posted some interesting ideas about Identity versus attributes on his blog.

"For an awful lot of service access decisions, it's not actually important to know who the service requester is - it's usually just important to know some particular thing about them. Here are a couple of examples:

  • If someone wants to buy a drink in a bar, it's not important who they are, what's important is whether they are of legal age;
  • If someone needs a blood transfusion, it's more important to know their blood type than their identity."

However, there is an important difference between Robin's two examples. Blood transfusion is a transaction with longer-lasting consequences. If a batch of blood is contaminated, there seems to be a legitimate regulatory requirement to trace forwards (who received this blood) and backwards (who donated this blood), in order to limit the consequences of this contamination event and to prevent further occurrences.

There is a strong demand for increasing traceability. In manufacturing, we want to trace every manufactured item to a specific batch, and associate each batch with specific raw materials and employees. In food production, we want to trace every portion back to the farm, so that salmonella outbreaks can be blamed on the farmer. See Information Sharing and Joined-Up Services 1, 2.
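The forward and backward traces described above amount to walking a provenance graph in both directions. A toy sketch (the items, batches and farms are entirely invented for illustration):

```python
# Toy provenance graph: each item records its direct sources.
# All names here are invented for illustration.
sources = {
    "custard-tart-77": ["egg-batch-12", "milk-batch-3"],
    "egg-batch-12": ["farm-A"],
    "milk-batch-3": ["farm-B"],
}

def trace_back(item):
    """Everything upstream of an item: where did it come from?"""
    found = set()
    stack = [item]
    while stack:
        for src in sources.get(stack.pop(), []):
            if src not in found:
                found.add(src)
                stack.append(src)
    return found

def trace_forward(origin):
    """Everything downstream of an origin: what did it go into?"""
    return {item for item in sources if origin in trace_back(item)}

print(trace_back("custard-tart-77"))
# {'egg-batch-12', 'milk-batch-3', 'farm-A', 'farm-B'}
```

A salmonella scare is a `trace_back` from the custard tart; a contaminated farm is a `trace_forward` from the farm. Note that both queries require the whole graph to be retained, which is precisely the retention question raised below.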

Transactions that were previously regarded as isolated ones are now increasingly joined-up. The eggs that go into the custard tart you buy in the works canteen used to be anonymous, but in future they won't be. See Labelling as Service 1, 2.

There is also a strong demand for increased auditability. So it is not enough for the barman to check the drinker's age, the barman must keep a permanent record of having diligently carried out the check. It is apparently not enough for the hotel or bank clerk to look at my passport, they must retain a photocopy of my passport in order to remove any suspicion of collusion. (The bank not only mistrusts its customers, it also mistrusts its employees.)

There is a large (and growing) class of situations where so-called joined-up-thinking seems to require the negation of privacy. I am certainly not saying that this reasoning should always trump the needs of privacy. But privacy campaigners need to understand that all transactions belong within some system of systems, and that this provides the context for the forces they are battling against, rather than pretending that transactions can be regarded as purely isolated events. The point is that authorization is not an isolated event, but is embedded in a larger system, and it is this larger system that apparently requires greater disclosure and retention.

@j4ngis asks how long the chains of traceability should be. What "length" of traceability is sound and meaningful? How do we connect all these traces, both backward and forward along the "chain"? And for how long should records be kept?
  • Should we also know the batch number for the food that was given to the chicken that laid the egg you included in the cake?
  • Do we have to know the identity of the blood donor after six months? 10 years? 100 years?
The trouble is that there is no rational basis for drawing the line. It is always possible that some contamination in the chicken feed might affect the eggs and thereby the custard tart. It is always possible that the hyperactivity of certain schoolchildren, or the testosterone levels of certain adults, might be traced back to some contamination in the food chain. It is always possible that some obscure data correlation might one day save lives or protect children. And given the vanishing costs of data management, even a faint possibility of future benefit appears to provide sufficient reason for collecting and storing the data.

Robin clearly supposes that attribute-based authorization is a "Good Thing". I am sympathetic to this view, but I don't know how this view can stand up against the kind of sustained attack from a certain flavour of joined-up systems thinking that can almost always postulate the possibility (however faint) of saving lives or protecting children or catching criminals, if only we can retain everything and trace everything.

For my part, I have a vague desire for anonymity and privacy, a vague sense of the harm that might come to me as a result of breaches to my privacy, and a surge of annoyance when I am required to provide all sorts of personal data for what I see as unreasonable purposes, but I cannot base an architecture on any of these feelings.

Traditional arguments for data protection may seem to be merely rearguard resistance to integrated and joined-up systems. Traditional architectures for data protection look increasingly obsolete. But what alternatives are there?


Update May 2016

Traceability requirements for Human Blood and Blood Components are specified in Directive 2005/61/EC of the European Parliament and of the Council 30 September 2005 (pdf - 63KB)

Robin's point was that blood type was more important than identity, and of course this is true. Donor and recipient identity must be retained for 30 years, but that doesn't mean sharing this information with everybody in the blood supply chain.

Thursday, April 09, 2009

Scale and Governance

Following my post Does Britain Need Smaller Banks? Fred Fickling objected to my point that large banks are more difficult to regulate.
"Not sure that the simple answer here is really the right one. Size & corp responsibility aren't necessarily inversely related."
Of course, some large companies can have a highly refined sense of ethics and social responsibility, although this doesn't always protect them from strong disagreements with external lobbyists, as Shell discovered when it wished to decommission the Brent Spar.

In any case, not all large companies can be trusted to regulate their own behaviour in the public interest. Companies pay tax where it suits them, move (or threaten to move) operations from one country to another, shrug off massive fines, and can sometimes behave as if they were above the law, as Fred concedes. In the long term, these legacy corporations may be doomed, but in the short term they can still cause a lot of disruption in the financial ecosystem.

Of course there are no simple answers. I certainly think some companies are too large, and I hope there will always be good opportunities for smaller companies to compete. What I'm calling for is a way of reasoning intelligently about the size of companies, which balances the interests of all stakeholders in a fair and governable manner. I think this is a proper topic for business architecture. And perhaps some of what we've learned from SOA about granularity and governance can be applied to this greater problem domain.

Monday, December 15, 2008

SOA Bank

In a discussion on the CBDI Forum Linked-In Group, Dave Ruzius proposes an SOA Bank.

'Did anyone ever consider a "SOA Bank" that would fund initial investments for SOA transformation and service development for a nice cut of the cost savings after x years?'


I love the idea of an "SOA Bank", but there's that voice in the back of my head saying "It will never work". The killer question is of course trust between the "lender" and the "borrower". Unfortunately, there is no agreed benchmark for measuring cost-savings. So there is too much scope for disagreeing (and even cheating) on the actual levels of cost-savings achieved, especially if there is real money at stake. This is ultimately a question of financial governance.

So let's look at the situations where it might work. In a public sector environment, or in a global multinational, you might have a central accounting function with enough power to make this kind of arrangement viable. Or even in an ecosystem dominated by a single player - perhaps a major manufacturer or franchise operation, providing funding for cost-saving initiatives by its suppliers or franchisees.

At the other extreme, you could have some kind of market mechanism. If the cut of the cost-savings is built into the price of using the service, then the SOA Bank will get its money back if and only if enough people are using the service.
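As a back-of-the-envelope sketch of that market mechanism (all figures hypothetical), the bank's break-even point depends only on usage volume and the cut built into each service call:

```python
# Hypothetical SOA Bank payback model: the bank funds a service build
# and recoups its investment through a per-call cut in the service price.
# All figures are invented for illustration.

def months_to_break_even(investment, calls_per_month, cut_per_call):
    """Months until the cumulative cut repays the initial funding."""
    monthly_return = calls_per_month * cut_per_call
    if monthly_return <= 0:
        return None  # nobody uses the service: the bank never gets repaid
    return investment / monthly_return

# Example: 100,000 funding, 50,000 calls/month, 0.05 cut per call
print(months_to_break_even(100_000, 50_000, 0.05))  # 40.0 months
```

The point of the sketch is that the bank's return hinges entirely on adoption, which is exactly why the mechanism only pays off "if and only if enough people are using the service".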

I think we can expect organizations to get a lot more savvy about IT procurement generally, and if SOA gives them more negotiating options (for example, pay-as-you-go or payment-by-results) then we can expect people to take advantage of this. And we can expect suppliers to respond to this pressure - there are already many offerings in the "xxx-as-a-service" category, and I am sure more of these will emerge during 2009.

Funding is going to be squeezed all round, so SOA transformation programmes are going to be managed on the basis of just-in-time investment. You need to look carefully at the capabilities needed at each stage, rather than rushing around the SOA supermarket and just chucking everything with an SOA label into your shopping basket in case it might be useful.

Finally, if there was going to be an SOA Bank, whether supporting the demand-side or the supply-side, it would need to check SOA plans for credibility and viability before offering any funding. If this meant that over-ambitious or incoherent plans got sent back to the drawing board and only sensible plans got approved, this would surely be a good thing for SOA. Wouldn't it?

Tuesday, October 14, 2008

Laundry as Intelligence

During the "troubles" in Northern Ireland, the British Army operated a laundry - an apparently innocent laundry with some additional hidden functionality - checking dirty clothes for traces of explosive. [Washington Post via Bruce Schneier]

In my earlier post Services Like Laundry I said "I do not expect the laundryman to draw any inferences from the state of my clothes, or to pass these inferences to anyone". Clearly the actions of the British Army would count as a breach of this kind of expectation, but this would be justified by its effectiveness as a clever counter-terrorism measure. However, laundries might well test for various other substances, or test the DNA of any stains on the clothing, for a broad range of purposes other than counter-terrorism.

So what's the lesson from this? Basically, if you have any illicit substances or just embarrassing stains on your clothing, you can't trust the laundry service not to notice. And you certainly can't specify this not-noticing as part of the service contract - what are you going to do, draw the laundryman's attention to the thing he isn't supposed to notice?

And what if the laundryman noticed your shirts were getting a little frayed about the collar, and sold your address, together with details of your designer-label preferences, to a mail order shirt supplier? You might think this was unreasonable if you knew about it, but you would probably never find out. You might receive a mailshot from the shirt supplier, but you probably wouldn't connect it with the laundryman. (One possible mechanism for tracing abuse of your name and address is to give slightly variant names and addresses to every supplier, but lots of organizations nowadays have clever data cleansing software that wipes out these variations.)
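The variant-address trick described above can be sketched with tagged email addresses, one variant per supplier, so that an incoming mailshot reveals who leaked the address (assuming, as noted, that no data-cleansing step strips the tag; addresses and supplier names here are invented):

```python
# Sketch of the variant-address trick: issue each supplier a unique
# tagged address, then map any address that later receives junk mail
# back to the supplier who leaked it. Data-cleansing software that
# normalizes addresses defeats exactly this scheme.

def tagged_email(base: str, supplier: str) -> str:
    """Issue a per-supplier variant of an email address."""
    local, _, domain = base.partition("@")
    return f"{local}+{supplier.lower()}@{domain}"

# Hypothetical suppliers we have dealt with
issued = {tagged_email("me@example.com", s): s
          for s in ["laundry", "bank", "shirts"]}

def who_leaked(received_at: str):
    """Given the address junk mail arrived at, name the supplier."""
    return issued.get(received_at)

print(who_leaked("me+laundry@example.com"))  # laundry
```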

How does this apply to other kinds of service? If I use CRM-as-a-service, how can I prevent my service provider picking up information about my customers? If I use a third-party delivery service, how can I prevent the service provider selling details of my customers and their purchasing habits to my competitors? How can I even specify this as part of the service level agreement?

WS-Security standards cover some aspects of confidentiality and trust, but these merely relate to the security of a message in transit (at the technology level), and not to the broader questions of confidentiality and trust between two parties (at the enterprise level).

According to legend, the automatic telephone exchange was invented by an undertaker (Almon Strowger) who believed his business was being redirected to his competitors by corrupt telephone operators. (See Call Forwarding.) So this suggests a possible answer to this difficulty is to redesign the service architecture to reduce the enterprise vulnerability, supported by more sophisticated technology. For example, does your CRM provider really need unencrypted names and addresses, or can you pass your customer data through an encryption module?
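A minimal sketch of that last idea: pseudonymize the identifying fields with a keyed hash before they ever reach the provider. HMAC stands in here for whatever encryption module the enterprise actually uses, and the record fields are invented:

```python
import hmac
import hashlib

# Key held by the enterprise, never shared with the CRM provider.
SECRET = b"enterprise-held key"

def pseudonymize(value: str) -> str:
    """Keyed one-way token: stable enough for the provider to match
    and count records, but opaque - no names or addresses leak."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()

record = {"name": "Alice Jones", "postcode": "SW1A 1AA", "last_order": 3}
outbound = {k: pseudonymize(v) if k in ("name", "postcode") else v
            for k, v in record.items()}
# The provider can still segment by postcode token and count orders,
# but cannot read the name or sell the address.
```

The design choice is the same one attributed to Strowger: rather than trusting the operator, restructure the service so the operator never sees what could be abused.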

So what's the lesson from this? You need an enterprise view of trust and security that is supported by (aligned with) a technology view of trust and security. Maintaining the alignment between these two views is the hard part.

Wednesday, July 09, 2008

Services Not All Like Laundry

In my post on the Laundry Metaphor of Services, I said that all services are a bit like laundry, and some services are very much like laundry, but few services are totally like laundry.

We need a notion of business services that covers at least five types of service (plus composites, hybrids and cross-overs).

Product Service: "I give/get you something"
Examples: Catering, Information, Certificate

Typically the service is fulfilled in the form of one or more deliveries (events). A product service is typically triggered by a specific request by the service user, and fulfilled by a response by the service provider. The right to trigger product services may sometimes be delegated to the service provider - either by defining some business rules, or by delegating authority as part of a challenge-based service (see below). Charging is typically per product or per delivery. Or it may be governed by a prior access service (such as subscription).

Transformation Service: "I do something to your something"
Examples: Car Repair, Laundry, Haircut

The service provider takes temporary charge of something that belongs to the service user, and returns it in an improved state (e.g. mended, cleaner, tidier, more fashionable).

Responsibility Service: "I take care of something for you"
Examples: Office Cleaning, Track Inspection & Maintenance

Typically the service provider takes ongoing responsibility for a defined entity or outcome over a defined time period. Besides the core service, the service provider may provide regular or ad hoc reports about the state of the entity to the service user. Charging will often be based on a flat fee. An alternative form of charging may be based on time and materials, but this demands a higher degree of trust between the service provider and the service user.

Access Service: "I allow you to do something" Permit/Enable/Empower
Examples: Subscription, Licence, Fishing Permit, Rail Use / Landing Slot

The service is typically delivered in the form of a message that contains a key and/or allocates a resource. Access rights and allocations typically expire if not used. Access is typically tied to a specific identity (e.g. user, machine) and is not normally tradeable or transferable. May also include traded options and the like, which confer a defined right to some other service or trade.
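Those properties of an access service — tied to a specific identity, expiring, not transferable — map naturally onto a signed permit. A minimal sketch (the token format and field names are invented for illustration):

```python
import hmac
import hashlib
import time

# Signing key held by the access service issuer (invented for this sketch).
KEY = b"issuer signing key"

def issue_permit(user: str, ttl_seconds: int) -> str:
    """Access service response: an identity-bound, expiring permit."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{user}:{expires}"
    sig = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_permit(token: str, user: str) -> bool:
    """Valid only for the named user and before expiry -
    so the permit is neither forgeable nor transferable."""
    payload, _, sig = token.rpartition(":")
    holder, _, expires = payload.partition(":")
    expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and holder == user
            and int(expires) > time.time())
```

A permit issued to one identity simply fails verification when presented by another, which is how the non-transferability described above is enforced.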

Challenge Service: "I solve a problem for you."
Example: Diagnostics, Develop Software, Adapt Business On-Demand.

The service provider often (but not always) specifies the solution. The service provider may also specify the problem. This entails a high degree of trust. This may also involve some shared risk and professional indemnity. Charging is typically problematic, unless it can be done within an arbitrary “professional” fee structure.

Each type of service requires different kinds of charging and service level agreement, and relies on different forms of quality assurance and trust. In my post on the Business Service Architecture (Railway Edition), I mentioned the difficulty faced by the company in the UK with primary responsibility for the railway network (originally Railtrack, now Network Rail). Railtrack was delegating maintenance work (Responsibility Services) to engineering companies and subcontractors, while selling rail availability (Access Services) to train operating companies. Railtrack seriously miscalculated the complex algebraic relationship between two different kinds of service level agreement, resulting in a serious rail accident in Hatfield.

Not all like laundry then.

Saturday, June 21, 2008

Does Multi-Tenancy Matter?

Multi-tenancy basically means that the service provider is supporting several customers with the same resources. (There is some ambiguity about exactly which resources we are talking about - hence the embarrassingly public disagreement between Oracle and one of its reference SaaS users described by Phil Wainewright in Many degrees of multi-tenancy.)

Gianpaolo Carraro previously made the point that if I'm a service consumer, I shouldn't care about multi-tenancy - The multi-tenant emperor has no clothes (August 2006), and now adds I can't believe we're still talking about this (June 2008).

With services like laundry, I really shouldn't care if my dirty clothes are put into the same load as everyone else's, as long as the service provider can reliably sort them all out and return them correctly. Some providers may decide that the cost savings from multi-tenancy of the washing machine don't justify the hassle of labelling and sorting the clothes, but that's surely their problem, not mine.

But if I have doubts about their competence and reliability, and if I have to double-check everything (= increased transaction cost) because I feel there is an increased risk of error on their part, then it becomes my problem as well.

Gianpaolo uses the example of a restaurant kitchen. If someone on the next table orders the same dish at the same time, surely I don't care if the chef puts two slices of meat together into the same pan. Well I do care if it means that the chef is tempted to compromise, or pays insufficient attention to my special requirements. As a service consumer, I may have some theory about the likely behaviour and incentives of the service provider (Gianpaolo talks about people wanting to show off their architectural capacities). But this only matters to the extent that it affects what I end up with, or when.

SaaS inherits from SOA the principle of encapsulation - the idea of separating the specification (WHAT) from the implementation (HOW). But a lack of trust between service consumer and service provider, as well as possible incentive incompatibility, leads to a breach in encapsulation. For SOA and SaaS to work properly, you need a good line on managing quality and risk. But that's a story for another post.

Wednesday, May 16, 2007

Service Escrow - Iron Mountain

Last month I riffed with Gianpaolo Carraro about Service Escrow. A few days later, a company called Iron Mountain launched an SaaS escrow service, covering the source code, system documentation and data.
Gianpaolo (who is one of Microsoft's SaaS experts) refers to companies providing this service as SaaS undertakers. But it might be better to call them SaaS support. By acting as a safety net for SaaS providers and their customers, they may reduce the risk associated with SaaS and contribute indirectly to SaaS market growth.

I phoned Iron Mountain to find out more about their SaaS Escrow service, and the extent to which it differed from the traditional software escrow service. Here are some of the main points of our conversation.

Storage Model

Iron Mountain stores both the software (source code, object code and documentation - all of which belongs to the SaaS provider) and the data (which belongs to the SaaS user). Whereas traditional software escrow can often operate on a fairly slow cycle, with new software versions deposited in a fairly leisurely manner, SaaS escrow generally calls for live data backup.

If the escrow conditions are triggered, Iron Mountain will release the software and data to the SaaS user. To avoid any perceived conflict of interest, Iron Mountain does not operate the software - even on an emergency basis. But this means that the SaaS developers (or some appointed third party) must install and test the software on a new server, load and test the data, and then restore the service.

This is probably not something that can or should be done overnight - so it is not going to provide much protection against a sudden and unexpected failure of a business critical service. A more likely scenario is a gradual worsening of the relationship between the SaaS provider and the SaaS user, and a growing dissatisfaction with service levels and support arrangements, approaching the point where the SaaS provider is in breach of contract. SaaS escrow means that the SaaS user has a reasonable exit from an unsatisfactory relationship, and cannot be held to ransom because the SaaS provider controls a key business asset.

The economic benefits of escrow therefore fall under the economics of governance - making sure the SaaS user has proper control of the relationship.

Charging Model

Although the SaaS provider may benefit from the existence of SaaS escrow, the prime beneficiary is the SaaS user. Historically, it has always been the user who has negotiated and paid for escrow. However, Iron Mountain is increasingly seeing more complex arrangements whereby the software provider pays for the escrow and passes these charges on to the software user.

In the case of SaaS escrow, a typical arrangement would be that the SaaS provider pays for the deposit of the software, while the SaaS user pays for the deposit of the data (depending on the data volumes).

It is in the interests of the SaaS user that Iron Mountain always holds the latest version of the software. Iron Mountain therefore encourages the SaaS provider to deposit software upgrades as frequently as necessary - preferably online - and does not charge on the basis of frequency. (Remember that SaaS software may undergo a faster improvement cycle than traditional software packages.)

Management Process

SaaS escrow only works if the SaaS provider has adequate software configuration management and data management, so that it becomes a matter of routine to send controlled copies to Iron Mountain. There is a certain amount of ongoing verification and audit that needs to be carried out, involving all the parties to the escrow arrangement, and Iron Mountain sees this as an important aspect of its own role.

Remember that the SaaS user does not see the software unless and until the escrow conditions are triggered - so it isn't possible to test the escrow arrangements in a trial run exercise. Instead, it is necessary to test the process at a higher level of abstraction, to provide some reassurance to the SaaS user that the escrow would work adequately.

Future

It is early days for the SaaS escrow market, and I shall be interested to see how the market develops ...

Monday, April 16, 2007

Service Escrow

A small SaaS company bites the dust, reports Gianpaolo Carraro, Director of SaaS Architecture at Microsoft.


Obviously this is not just an SaaS problem. Companies fold all the time. If you are dependent on some external capability, you'd better have a plan for business continuity. And if you have entrusted your supplier with important assets (e.g. data) you'd better be able to get your assets back quickly.

A pessimist might regard this risk as a reason to avoid SaaS altogether. But I don't think Microsoft employs many pessimists. For his part Gianpaolo sees this risk as an opportunity for some enterprising SaaS undertaker - selling services to those whose beloved supplier has gone to the great SaaS graveyard.

I prefer to see this as an architectural challenge:
  • How can we design a robust network of services, one not affected by a single point of failure? (This is of course a classic problem of distributed systems.)
  • Are there patterns of collaborative networks that will stand up to the loss of any single organization in the network?
  • What are the appropriate SLAs to support these patterns?
  • And what are the interoperability requirements to make this work?
Structural problems call for structural solutions. That's what architects are for.
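One of the simplest such patterns is consumer-side failover across interchangeable providers, which removes the single point of failure at the cost of an interoperability requirement: the providers must share an interface. A sketch (the provider names are invented):

```python
# Consumer-side failover: try each interchangeable provider in turn.
# This only works if the providers share an interface - which is the
# interoperability requirement the post asks about.

def call_with_failover(providers, request):
    errors = []
    for provider in providers:
        try:
            return provider(request)
        except Exception as exc:  # in practice: timeouts, outages, etc.
            errors.append((provider.__name__, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Invented providers standing in for two competing SaaS suppliers.
def primary(req):
    raise TimeoutError("supplier has folded")

def standby(req):
    return f"handled {req}"

print(call_with_failover([primary, standby], "order-42"))  # handled order-42
```

The corresponding SLA question is then how quickly a consumer is entitled to fail over, and whether the standby provider commits to accepting the redirected load.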

Update

Gianpaolo's initial response here: SaaS Undertaker (April 2007). Shortly after my exchange with Gianpaolo, a company called Iron Mountain launched an escrow service. See my post Service Escrow - Iron Mountain (May 2007).

Saturday, April 14, 2007

The Bits Stop Here

One of the drivers for SOA, both commercial and public sector, is to extend and enrich the opportunities to provide services to customers/citizens over the internet.

But the more reliance we place on electronic identity, the more important it seems to be to link this back to some face-to-face identification by a trusted authority. And these processes are getting more tedious. Perhaps rightly so, as identity theft becomes ever easier and more prevalent.

For example, before I could open a savings account for my son recently, I needed a lengthy interview with a bank clerk, who apparently needed to take photocopies of my passport and utility bills. This routine is called 'Know Your Customer'.

[Wikipedia: Know Your Customer]

It's not good enough for the bank clerk merely to see these documents. A paper archive is needed for "compliance" - in other words, providing retrospective evidence that I haven't tricked or bribed the bank clerk to overlook some missing document. Is this because the bank doesn't entirely trust its own employees?

But the bank does trust the paperwork from other organizations with which it has (as far as I know) zero electronic interoperability (the passport authority and the utility companies). That's nice.

Until recently, UK citizens have been able to apply for passports remotely, but the Identity and Passport Service is going to introduce face-to-face interviews. At which I guess we are going to produce copies of bank statements and utility bills.

[BBC News: Interviews for passports 'vital', Robin Wilton: Face-to-face interviews for passport candidates, Tomorrow's Fish-and-Chip Paper: And talking of the Identity and Passport Service ...]

Meanwhile, the utility companies are trying to back out of this role in the network of trust, by producing electronic bills instead of paper ones. These are useless for identification purposes, because they can be too easily forged by amateurs. (Forging old-fashioned utility bills does require a tiny amount of expertise.)

So there seem to be some infinite loops in the network of trust, with some pretty obvious vulnerabilities yielding countless opportunities for real crooks.

I was reminded of this when I saw the problems Tim Bray faced getting a new Canadian passport: he spent nine hours waiting in line.

[Tim Bray: Passport Hell, Emerging Chaos: How Long to Be Identified]

Maybe electronic identity (complete with biometrics and RFID) is going to save you a little time for each transaction, but if it takes that long to get/issue the credentials in the first place, then there is some catching up to do.

As it happens, there are some pretty bright people in the IT industry working on exactly this problem, and some pretty neat solutions emerging. But the managers and politicians running the organizations that actually handle identity on a daily basis (leaving sackloads of unshredded personal data on the sidewalk, losing laptops on a regular basis, that kind of thing) don't seem to have a clue about this, don't seem to realise that they are just making things worse.

Like I said, there is some catching up to do. SOA (with Identity 2.0) has the potential to solve a lot of problems, but the first step is for the people who are causing the problems in the first place to acknowledge that they need help.

Otherwise all these cool solutions will just remain interesting talking points for bloggers.