Saturday, October 16, 2021

Walking Wounded

Let us suppose we can divide the world into those who trust service companies to treat their customers fairly, and those who assume that service companies will be looking to exploit any customer weakness or lapse of attention.

For example, some loyal customers renew without question, even though the cost creeps up from one year to the next. (This is known as price walking.) Other customers switch service providers frequently to chase the best deal. (This is known as churn. B2C businesses generally regard churn as a Bad Thing when their own customers do it, not so bad when they can steal their competitors' customers.)

Price walking is a particular concern for the insurance business. The UK Financial Conduct Authority (FCA) has recently issued new measures to protect customers from price walking.

Duncan Minty, an insurance industry insider who blogs on Ethics and Insurance, believes that claims optimization (which he calls Settlement Walking) raises similar ethical issues. This is where the insurance company tries to get away with a lower claim settlement, especially with those customers who are most likely to accept and least likely to complain. He cites a Bank of England report on machine learning, which refers among other things to propensity modelling. In other words, adjusting how you treat a customer according to how you calculate they will respond.
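
To make the mechanism concrete, here is a minimal sketch in Python of how a propensity model might be used to adjust a settlement offer according to the customer's predicted response. It illustrates the pattern Minty criticizes; the field names, features and thresholds are entirely hypothetical, not anyone's actual system.

```python
# Hypothetical sketch of propensity-based claims "optimization" (settlement walking).
# All field names, features and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Claim:
    fair_value: float          # what the claim is actually worth
    accept_propensity: float   # predicted probability the customer accepts a low offer
    complain_propensity: float # predicted probability the customer escalates or complains

def settlement_offer(claim: Claim) -> float:
    """Adjust the offer according to how the customer is predicted to respond."""
    if claim.accept_propensity > 0.8 and claim.complain_propensity < 0.1:
        # customers judged unlikely to push back get a reduced offer
        return 0.85 * claim.fair_value
    if claim.complain_propensity > 0.5:
        # customers judged likely to complain get the full amount
        return claim.fair_value
    return 0.95 * claim.fair_value

# Two identical claims, different predicted behaviour, different offers.
passive = Claim(fair_value=1000.0, accept_propensity=0.9, complain_propensity=0.05)
assertive = Claim(fair_value=1000.0, accept_propensity=0.3, complain_propensity=0.7)
print(settlement_offer(passive), settlement_offer(assertive))  # 850.0 versus 1000.0
```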

My work on data-driven personalization addresses ethical as well as practical considerations. However, there is always the potential for asymmetry between service providers and consumers. And as Tim Harford points out, this kind of exploitation long predates the emergence of algorithms and machine learning.



Machine Learning in UK financial services (Bank of England / FCA, October 2019)

FCA confirms measures to protect customers from the loyalty penalty in home and motor insurance markets (FCA, 28 May 2021)

Tim Harford, Exploitative algorithms are using tricks as old as haggling at the bazaar (2 November 2018)

Joi Ito, Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination (Wired Magazine, 5 February 2019)

Duncan Minty, Is settlement walking now part of UK insurance? (18 March 2021), Why personalisation will erode the competitiveness of premiums (7 September 2021)

Related posts: The Support Economy (January 2005), The Price of Everything (May 2017), Insurance and the Veil of Ignorance (February 2019)

Related presentations: Boundaryless Customer Engagement (October 2015), Real-Time Personalization (December 2015)

Thursday, September 30, 2021

Uber Mathematics 4

As I have previously noted, drawing on research by Izabella Kaminska and others, there is something seriously problematic about the current business model of rideshare platforms such as Uber and Lyft. It seems that the only route to profitability is via some disruptive event in favour of their business. Although investors have poured eye-watering amounts of cash into these companies, this only makes sense as a bet on such an event occurring before the cash (or investor patience) runs out.

How does the magic of digital allow a centralized company with international overheads (Uber or Lyft) to provide a service more cheaply and cost-effectively than a distributed network of local cab companies, many of which are run by one rather harassed guy in a small booth next to the train station? Well, it doesn't. Even screwing the drivers can only go so far in stemming the losses.

So what is this disruptive event that these companies and their investors are eagerly waiting for? One theory was that the rideshare platforms could only be profitable once they had eliminated all competition, by a combination of undercutting rivals and doing deals with local authorities - for example, offering to run other elements of the local transport network. Once a monopoly had been established, they could start to push the prices up.

Obviously this stratagem only works if the consumers accept the price rises, and the guy in the booth doesn't find a way to take back his business.

Another theory was that they were just waiting for self-driving cars. This would also explain why they weren't looking after their drivers properly: they didn't expect to need them for the longer term. However, not everyone was convinced by this. Aaron Benanav thought that they would have to wait rather longer than they originally reckoned, while Sameepa Shetty noted that Uber's ambitions in this regard were focused on small favourable pockets rather than being spread across the whole operation, and quoted an analyst who worried that investors were throwing good money after bad.

At the end of 2020, we learned that Uber itself had come to the same conclusion, selling off the driverless car division to focus on profits.

So remind me, where are these profits coming from?




Uber sells self-driving cars to focus on profits (BBC News, 7 December 2020)

Aaron Benanav, Why Uber's business model is doomed (Guardian, 24 August 2020)

Julia Kollewe, Uber ditches effort to develop own self-driving car (Guardian, 8 December 2020)

Elaine Moore and Dave Lee, Does Uber deserve its $91bn valuation? (FT, 17 October 2021)

Sameepa Shetty, Uber’s self-driving cars are a key to its path to profitability (CNBC, 28 January 2020)

Gwyn Topham, Peak hype: why the driverless car revolution has stalled (Guardian, 3 January 2021)


Related posts: Uber Mathematics (November 2016), Uber Mathematics 2 (December 2016), Uber Mathematics 3 (December 2016)

Wednesday, September 22, 2021

Business Architecture Grid

In some of my posts, I have used the terms vertical and horizontal in relation to Business Architecture. The purpose of this post is to analyse what these terms actually mean.


Firstly, the word vertical is often associated with value chain thinking. Typically, there is a process that spans from the raw materials to the finished product or service, and this process may either be divided between different actors or controlled by a single actor. Where a single organization controls the end-to-end process, this is known as vertical integration.

Large organizations typically have several versions of these processes, or even several entirely different processes. Bundling these into a single organization only makes economic sense if they can share resources and other assets. For example, if procurement is done jointly, this may give the organization more buying power.

So there are typically a number of cross-cutting concerns, which may be referred to as horizontal. TOGAF 9 (released in 2009) identified "rich domain knowledge of both horizontal, cross-cutting concerns, such as human resources (HR), finance, and procurement alongside vertical, industry-specific concerns" as fundamental to Business-Led SOA.

So one version of horizontal integration involves establishing shared capabilities or services, which may support multiple value chains. Even when these value chains are distributed across different companies across different market sectors, one actor may seek to dominate the provision of these shared services, whether by merger and acquisition, technological superiority, or sheer market power.
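
As a rough sketch of the grid this post's title refers to, one might tabulate vertical value streams against the horizontal, cross-cutting capabilities that support them. The value streams and capabilities below are invented purely for illustration, not a reference model.

```python
# Minimal sketch of a business architecture grid: vertical value streams
# mapped against horizontal (shared, cross-cutting) capabilities.
# All names are hypothetical examples.

from collections import Counter

# which shared capabilities each value stream depends on
grid = {
    "Order to Cash":  {"Finance", "Customer Management", "Logistics"},
    "Procure to Pay": {"Finance", "Procurement", "Logistics"},
    "Hire to Retire": {"Finance", "Human Resources"},
}

# A capability supporting several value streams is a candidate for
# horizontal integration - in other words, a shared service.
usage = Counter(cap for caps in grid.values() for cap in caps)
shared = [cap for cap, n in usage.items() if n > 1]
print("Cross-cutting capabilities:", shared)   # e.g. Finance, Logistics
```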

In a 2005 discussion on Efficiency and Robustness, Stu Berman commented

The US economy is remarkably resilient to a wide variety of shocks (9/11 is a good example, as is Katrina) due to our horizontal integration rather than vertical. It is easy to look at history and see where colossal economic problems occurred - Soviet central planning, Nixonian gas price controls. (via Chandler Howell)


I have always argued that Business Architecture requires multiple Viewpoints. One reason for this is that the Activity or Value Stream viewpoint concentrates on the vertical dimension, while the Capability or Service viewpoint concentrates on the horizontal dimension. (Further viewpoints are required, because business architecture is not simply a two-dimensional problem.)

Note that the terms vertical and horizontal also appear in discussions of technology architecture, where they mean something rather different.

 


Further discussion

Efficiency and Robustness: Central Planning (September 2005)

Business-Led SOA (February 2009)

Towards an Open Architecture for the Public Sector (May 2014)

 

See also

Philip Boxer, The Double Challenge (March 2006)

Richard Veryard, Six Viewpoints of Business Architecture (LeanPub, 2012)

Saturday, September 04, 2021

Metadata as a Process

Data sharing and collaboration between different specialist areas requires agreement and transparency about the structure and meaning of the data. This is one of the functions of metadata.

I've been reading a paper (by Professor Paul Edwards and others) about the challenges this poses in interdisciplinary scientific research. They identify four characteristic features of scientific metadata, noting that these features can be found within a single specialist discipline as well as cross-discipline.

  • Fragmented - many people contributing, no overall control
  • Divergent - multiple conflicting versions (often in Excel spreadsheets)
  • Iterative - rarely right first time, lots of effort to repair misunderstandings and mistakes
  • Localized - each participant is primarily focused on their own requirements rather than the global picture

They make two important distinctions, which will be relevant to enterprise data management as well.

Firstly between product and process. Instead of trying to create a static, definitive set of data definitions and properties, which will completely eliminate the need for any human interaction between the data creator and data consumer, assume that an ongoing channel of communication will be required to resolve emerging issues dynamically. (Some of the more advanced data management tools can support this.)
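
As a minimal sketch of this first distinction, a data element definition might carry an open channel of questions and clarifications between producer and consumer, rather than being frozen at publication. The structure and names below are hypothetical, not any particular tool.

```python
# Hypothetical sketch: metadata as a process rather than a static product.
# A definition can be queried, clarified and revised over time.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ElementDefinition:
    name: str
    definition: str
    version: int = 1
    discussion: List[str] = field(default_factory=list)  # ongoing producer/consumer dialogue

    def raise_question(self, who: str, question: str) -> None:
        self.discussion.append(f"{who} asks: {question}")

    def clarify(self, who: str, answer: str, revised_definition: Optional[str] = None) -> None:
        self.discussion.append(f"{who} answers: {answer}")
        if revised_definition:
            self.definition = revised_definition
            self.version += 1   # definitions are iterated - rarely right first time

# Example of the iterative repair the paper describes
temp = ElementDefinition("air_temperature", "Temperature in degrees")
temp.raise_question("consumer", "Celsius or Fahrenheit? Measured at what height?")
temp.clarify("producer", "Celsius, measured 2m above ground",
             "Air temperature in degrees Celsius, measured 2m above ground")
print(temp.version, temp.definition)
```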

Secondly between precision and lubrication. Tight coupling between two systems requires exact metadata, but interoperability might also be achievable with inexact metadata plus something else to reduce any friction. (Metadata as the new oil, perhaps?)
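
And a sketch of the lubrication idea: instead of demanding that producer and consumer agree on exact field names and units in advance, a small reconciliation layer absorbs some of the friction. The synonym list here is invented for illustration.

```python
# Hypothetical sketch: inexact metadata plus "lubrication".
# A tolerant mapping layer absorbs small mismatches between producer and consumer.

SYNONYMS = {"temp": "temperature", "temperature_c": "temperature", "lat": "latitude"}

def normalise_field(name: str) -> str:
    key = name.strip().lower()
    return SYNONYMS.get(key, key)

def reconcile(record: dict) -> dict:
    """Map a producer's record onto the consumer's preferred field names."""
    return {normalise_field(k): v for k, v in record.items()}

producer_record = {"Temp": 21.5, "lat": 51.5}
print(reconcile(producer_record))   # {'temperature': 21.5, 'latitude': 51.5}
```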

Finally, they observe that metadata typically falls into the category of almost standards.

Everyone agrees they are a good idea, most have some such standards, yet few deploy them completely or effectively.

Does that sound familiar? 



J Bates, The politics of data friction (Journal of Documentation, 2017)

Paul Edwards, A Vast Machine (MIT Press 2010). I haven't read this book yet, but I found a review by Danny Yee (2011)

Paul Edwards, Matthew Mayernik, Archer Batcheller, Geoffrey Bowker and Christine Borgman, Science Friction: Data, Metadata and Collaboration (Social Studies of Science 41/5, October 2011), pp. 667-690. 

Martin Thomas Horsch, Silvia Chiacchiera, Welchy Leite Cavalcanti and Björn Schembera, Research Data Infrastructures and Engineering Metadata. In Data Technology in Materials Modelling (Springer 2021) pp 13-30

Jillian Wallis, Data Producers Courting Data Reusers: Two Cases from Modeling Communities (International Journal of Digital Curation 9/1, 2014) pp 98–109

Thursday, August 12, 2021

On the performativity of data

The philosopher J.L. Austin observed that words sometimes don't merely describe reality but enact something. A commonly cited example is that when a suitably authorized person pronounces a couple married, it is the speaking of these words that makes them true. Austin called this a performative utterance; later writers usually refer to this as performativity.

In this post, I want to explore some ways in which data and information may be performative. 

 

In my previous post on Data as Pictures, I mentioned the self-fulfilling power of labels. For example, when a person is labelled and treated as a potential criminal, this may make it more difficult for them to live as a law-abiding citizen, and they are therefore steered towards a life of crime. Thus the original truth of the data becomes almost irrelevant, because the data creates its own truth. Or as Bowker and Star put it, "classifications ... have material force in the world" (p39).

Many years ago, I gave a talk at King's College London which included some half-formed thoughts on the philosophy of information. I included some examples where it might seem rational to use information even if you don't believe it.

Keynes attributed the waves of optimism and pessimism that sweep through a market to something he called animal spirits. Where there is little real information, even false information may be worth acting upon. So imagine that a Wall Street astrologer publishes a daily star chart of the US president, and this regularly affects the stock market. Not because many people actually believe in astrology, but because many people want to be one step ahead of the few people who do believe in astrology. Even if nobody takes astrology seriously, but they all think that other people might take it seriously, then they will collectively act as if they do take it seriously. Fiction functioning as truth.

(There was an astrologer in the White House during the Reagan administration, so this example didn't seem so far-fetched at that time.)

For my second example, I imagined the head of a sugar corporation going on television to warn the public about a possible shortage of sugar. Consumers typically respond to this kind of warning by stockpiling, leaving the supermarket shelves empty of sugar. So this is another example of a self-fulfilling prophecy - a speech act that created its own truth.

I then went on to imagine the converse. Suppose the head of the sugar corporation went on television to reassure the public that there was no possibility of a sugar shortage. A significant number of consumers could reason either that the statement is false, or that even if the statement is true many consumers won't believe it. So to be on the safe side, better buy a few extra bags of sugar. Result - sugar shortage.

So here we seem to have a case where two opposite statements can appear to produce exactly the same result.


Back in the 1980s I was talking about opinions from a person with a known status or reputation, published or broadcast in what we now call traditional media. So what happens when these opinions are disconnected from the person and embedded in dashboards and algorithms?

It's not difficult to find examples where data produces its own reality. If a recommendation algorithm identifies a new item as a potential best-seller, this item will be recommended to a lot of people and - not surprisingly - it becomes a best-seller. Obviously this doesn't work all the time, but it is hard to deny that these algorithms contribute significantly to the outcomes that they appear to predict. Meanwhile YouTube identifies people who may be interested in extreme political content, some of whom then become interested in extreme political content. And then there's Facebook's project to "connect the world". There are real-world effects here, generated by patterns of data.
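
Here is a toy simulation of that feedback loop - not any real recommender system, and the numbers are arbitrary. Items predicted to be popular get shown to more people, and the recommendations themselves generate the sales that confirm the prediction.

```python
# Toy simulation of a self-fulfilling recommendation loop. Purely illustrative.

import random
random.seed(1)

sales = {"A": 0, "B": 1, "C": 0}   # B happens to get the first sale

for _ in range(1000):
    predicted = max(sales, key=sales.get)            # the algorithm's current best-seller prediction
    shown = predicted if random.random() < 0.7 else random.choice(list(sales))
    if random.random() < 0.1:                        # identical intrinsic appeal for every item
        sales[shown] += 1

print(sales)   # B ends far ahead: the recommendation created the best-seller it predicted
```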

 

Another topic to consider is the effects produced by measurement and targets. On the one hand, there is a view that measuring performance helps to motivate improvements, which is why you often see performance dashboards prominently displayed in offices. On the other hand, there is a widespread concern that excessive focus on narrowly defined targets ("target culture") distorts or misdirects performance - for example, teachers teaching to the test. Hannah Fry's article contains several examples of this, which is sometimes known as Goodhart's Law. Either way, there is an expectation that measuring something has a real-world effect, whether positive or negative.
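
A toy illustration of Goodhart's Law: once the proxy measure becomes the target, optimizing it no longer improves the underlying outcome. The model and numbers are invented, simply to show the proxy and the true objective moving in opposite directions.

```python
# Toy illustration of Goodhart's Law; the model and numbers are invented.
# prep is the fraction of teaching time diverted to test technique.

def test_score(prep: float) -> float:
    return 50 + 45 * prep            # the measured target rises with test preparation

def actual_learning(prep: float) -> float:
    return 50 + 45 * (1 - prep)      # the real outcome falls as preparation crowds out teaching

for prep in (0.0, 0.5, 1.0):
    print(prep, round(test_score(prep)), round(actual_learning(prep)))
# The measure improves steadily while the thing it was meant to measure gets worse.
```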

If you can think of any other examples of the performativity of data, please comment below. 



Geoffrey Bowker and Susan Leigh Star, Sorting Things Out (MIT Press, 1999)

Hannah Fry, What Data Can't Do (New Yorker, 22 March 2021)

Richard Veryard, Speculation and Information: The Epistemology of Stock Market Fluctuations (Invited presentation, King's College London, 16 November 1988). Warning - the theory needs a complete overhaul, but the examples are interesting.

Wikipedia: Animal Spirits, Goodhart's Law, Performativity, Target Culture

Stanford Encyclopedia of Philosophy: J.L. Austin, Speech Acts

Related posts: Target Setting: What You Measure Is What You Get (April 2005), Ethical Communication in a Digital Age (November 2018), Algorithms and Governmentality (July 2019), Data as Pictures (August 2021), Can Predictions Create Their Own Reality (August 2021). Rob Barratt of Bodmin kindly contributed a poem on target culture in the comments below my Target Setting post.

Links added 27 August 2021

Tuesday, August 10, 2021

Data as pictures?

Many people believe that data should provide a faithful representation or picture of the real world. While this is often a helpful simplification, it can sometimes mislead.

Firstly, the picture theory isn't very good at handling probability and uncertainty. When faced with alternative pictures (facts), people may try to pick the most likely or attractive one, and then act as if this were the truth. 

As I see it, the problem of knowledge and uncertainty fundamentally disrupts our conventional assumptions about representation, in much the same way that quantum physics disrupts our assumptions about reality. See previous posts on Uncertainty.

Secondly, the picture theory misrepresents judgements (whether human or algorithmic) as descriptions. When a person is classified as a poor credit risk, or as a potential criminal or terrorist, this is a speculative judgement about the future, which is often sadly self-fulfilling. For example, when a person is labelled and treated as a potential criminal, this may make it more difficult for them to live as a law-abiding citizen, and they are therefore steered towards a life of crime. Data of this kind may therefore be performative, in the sense that it creates the reality that it claims to describe.

Thirdly, the picture theory assumes that any two facts must be consistent, and simple facts can easily be combined to produce more complex facts. Failures of consistency or composition can then only be explained (and fixed) in terms of data quality and governance. See my post on Three Responses to Inconsistency (December 2003).

Furthermore, a good picture is one that can be verified. Nothing wrong with verification, of course, but the picture theory can sometimes lead to a narrow-minded approach to validation and verification. There may also be an assumption of completeness, treating a dataset as if it provided a complete picture of some clearly delineated domain. (The world is determined by the facts, and by their being all the facts.)


However, although there are some serious limitations with the picture theory, it may sometimes be an acceptable simplification, or even an enabling prejudice. One of the dimensions of data strategy is reach - developing a broad data culture across the organization and its ecosystem by making more data and tools available to a wider community of people. And if some form of the picture theory helps people get started on the ladder towards data mastery, that may not be a bad thing after all. (Hopefully they can throw away the ladder after they have climbed up it.)



 

Daniel C. Dennett, A Difference That Makes a Difference: A Conversation (Edge, 22 November 2017) 

Aaron Sloman, What Did Bateson Mean? (originally posted January 2011, revised October 2018)


See also Architecture and Reality (November 2012), From Sedimented Principles to Enabling Prejudices (March 2013), Data Strategy - Reach (December 2019), On the performativity of data (August 2021)

Monday, June 21, 2021

Rolling Review

I have long argued for the benefits of an outside-in view of business architecture. In 2008-9, Tony Bidgood and I developed a worked example based on the fictional delivery company Springfield Parcels, to illustrate a range of different business improvement paradigms and modelling notations. One of the key insights from this material was the importance of modelling your customer's process as well as your own. See also my post on Customer Orientation (May 2009).

A few years later, I worked with a regulator that had a complex relationship with the industry (or rather industries) that it was regulating. The regulator's processes consisted largely of a series of complex responses to external events and activities, so I built a business service architecture to provide an outside-in view.

In those not-so-far-off days, regulated companies submitted meticulously compiled dossiers of information to the regulator for review and adjudication. This was a highly sequential and slow process, each stage typically taking several months if not years.

This delay was always a particular challenge in industries such as pharmaceuticals, where regulatory approval is needed before a new drug can be put on the market.

The UK drug regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), was already looking at changing this sequential process before the COVID-19 pandemic struck. With the new Rolling Review approach, the regulator is able to start pre-assessment during the late clinical trials, and substantially reduce the time to market for innovative new treatments. This helps to explain the remarkable speed with which the COVID vaccines were tested and approved. The BBC Horizon programme includes a contribution from Christian Schneider, the MHRA's Interim Chief Scientific Officer.
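
A back-of-envelope sketch of why overlapping the assessment with the late-stage trials reduces elapsed time. The durations are invented round numbers, not actual MHRA timescales.

```python
# Illustrative arithmetic only: stage durations are invented, not MHRA figures.

trial_months = 12        # late-stage clinical trials
review_months = 6        # regulatory assessment
overlap_months = 4       # portion of the review done while trials are still running

sequential = trial_months + review_months                  # traditional: review starts after submission
rolling = trial_months + (review_months - overlap_months)  # rolling review: pre-assessment overlaps trials

print(f"Sequential: {sequential} months, Rolling: {rolling} months")
# Sequential: 18 months, Rolling: 14 months
```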

The principle of rolling review also applies internally within organizations, where innovations often need to be reviewed from various perspectives. Some stakeholders prefer to wait until all the details of the innovation have been worked out, so they can be reviewed holistically. Others prefer to be involved as early as possible, since this provides more chance to influence the overall design and approach. Late involvement may seem a more efficient use of expert resource, but often leaves the expert with little more than a go/nogo decision.

We often see this dilemma with privacy and security. Advocates of security-by-design and privacy-by-design like to get in early; however, it may be difficult to carry out a rigorous Data Protection Impact Assessment (DPIA) if you don't yet know any of the answers to the questions on the template.

The fundamental point here is that all these side processes - regulation, assessment or governance - are only meaningful in terms of the main processes that they are regulating, assessing or governing. So it doesn't make sense to optimize the side process in isolation. The point is to innovate quickly and safely, not to make life easier for the regulators. And the outside-in business service architecture is a great tool for focusing everyone's minds on this.



ICO, What is a DPIA?

Sandra Kanthal, Marvellous Medicine (BBC Radio 4 - Horizon, 20 June 2021)

MHRA, Rolling Review for Market Authorisation Applications (UK Government, 31 December 2020)

Richard Veryard and Tony Bidgood, A Brief Evaluation of Business Modeling and Improvement Methodologies (CBDI Journal, December 2008)