
Monday, August 03, 2020

A Cybernetics View of Data-Driven

Cybernetics helps us understand dynamic systems that are driven by a particular type of data. Here are some examples:

  • Many economists see markets as essentially driven by price data.
  • On the Internet (especially social media) we can see systems that are essentially driven by click data.
  • In stan culture, hardcore fans gang up on critics who fail to give the latest album a perfect score - a system essentially driven by review-score data.

In a recent interview with Alice Pearson of CRASSH, Professor Will Davies explains the process as follows:

For Hayek, the advantage of the market was that it was a space in which stimulus and response could be in a constant state of interactivity: that prices send out information to people, which they respond to either in the form of consumer decisions or investment decisions or new entrepreneurial strategies.

Davies argued that this is now managed on screens, with traders on Wall Street and elsewhere constantly interacting with (as he says) flashing numbers that are rising and falling.

The way in which the market is visualized to people, the way it presents itself to people, the extent to which it is visible on a single control panel, is absolutely crucial to someone's ability to play the market effectively.

Davies attributes to cybernetics a particular vision of human agency: to think of human beings as black boxes which respond to stimuli in particular ways that can be potentially predicted and controlled. (In market trading, this thought leads naturally to replacing human beings with algorithmic trading.)

Davies then sees this cybernetic vision encapsulated in the British government approach to the COVID-19 pandemic.

What you see now with this idea of Stay Alert ... is a vision of an agent or human being who is constantly responsive and constantly adaptable to their environment, and will alter their behaviour depending on what types of cues are coming in from one moment to the next. ... The ideological vision being presented is of a society in which the rules of everyday conduct are going to be constantly tweaked in response to different types of data, different things that are appearing on the control panels at the Joint Biosecurity Centre.

The word alert originally comes from an Italian military term all'erta - to the watch. So the slogan Stay Alert implies a visual idea of agency. But as Alice Pearson pointed out, that which is supposed to be the focus of our alertness is invisible. And it is not just the virus itself that is invisible: given the frequency of asymptomatic carriers, we also cannot see which people are infectious and should be avoided.

So what visual or other signals is the Government expecting us to be alert to? If we can't watch out for symptoms, perhaps we are expected instead to watch out for significant shifts in the data - ambiguous clues about the effectiveness of masks or the necessity of quarantine. Or perhaps significant shifts in the rules.

Most of us only see a small fraction of the available data - Stafford Beer's term for this is attenuation, and Alice Pearson referred to hyper-attenuation. So we seem to be faced with a choice between on the one hand a shifting set of rules based on the official interpretation of the data - assuming that the powers-that-be have a richer set of data than we do, and a more sophisticated set of tools for managing the data - and on the other hand an increasingly strident set of activists encouraging people to rebel against the official rules, essentially setting up a rival set of norms in which for example mask-wearing is seen as a sign of capitulation to a socialist regime run by Bill Gates, or whatever.
 
Later in the interview, and also in his New Statesman article, Davies talks about a shifting notion of rules, from a binding contract to mere behavioural nudges.

Rules morph into algorithms, ever-more complex sets of instructions, built around an if/then logic. By collecting more and more data, and running more and more behavioural tests, it should in principle be possible to steer behaviour in the desired direction. ... The government has stumbled into a sort of clumsy algorithmic mentality. ... There is a logic driving all this, but it is one only comprehensible to the data analyst and modeller, while seeming deeply weird to the rest of us. ... To the algorithmic mind, there is no such thing as rule-breaking, only unpredicted behaviour.

One of the things that differentiates the British government from more accomplished practitioners of data-driven biopower (such as Facebook and WeChat) is the apparent lack of fast and effective feedback loops. If what the British government is practising counts as cybernetics at all, it seems to be a very primitive and broken version of first-order cybernetics.

When Norbert Wiener introduced the term cybernetics over seventy years ago, describing thinking as a kind of information processing and people as information processing organisms, this was a long way from simple behaviourism. Instead, he emphasized learning and creativity, and insisted on the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.
 
In a talk on the entanglements of bodies and technologies, Lucy Suchman draws on an article by Geoff Bowker to describe the universal aspirations of cybernetics.
 
Cyberneticians declared a new age in which Darwin's placement of man as one among the animals would now be followed by cybernetics' placement of man as one among the machines.
 
However, as Suchman reminds us:
 
Norbert Wiener himself paid very careful attention to questions of labour, and actually cautioned against the too-broad application of models that were designed in relation to physical or computational systems to the social world.

Even if sometimes seeming outnumbered, there have always been some within the cybernetics community who are concerned about epistemology and ethics. Hence second-order (or even third-order) cybernetics.



Footnote (July 2021)

Thanks to Claire Song, I have been looking at Wiener's 1943 paper on Behaviour, Purpose and Teleology, and now realise I should be more precise about behaviourism. While I still hold that Wiener does not subscribe to classical behaviourism, he does seem to follow a form of teleological behaviourism, although this term is nowadays associated with Howard Rachlin.
 
Having also been looking at Simondon and Deleuze, I'm also noticing how Will Davies is hinting at the notion of modulation. But this is a topic for another day.



Ben Beaumont-Thomas, Hardcore pop fans are abusing critics – and putting acclaim before art (The Guardian, 3 August 2020)

Geoffrey Bowker, How to be universal: some cybernetic strategies, 1943-1970 (Social Studies of Science 23, 1993) pp 107-127
 
Philip Boxer & Vincent Kenny, The economy of discourses - a third-order cybernetics (Human Systems Management, 9/4 January 1990) pp 205-224
 

Will Davies, Coronavirus and the Rise of Rule-Breakers (New Statesman, 8 July 2020)
 
 
Arturo Rosenblueth, Norbert Wiener and Julian Bigelow, Behaviour, Purpose and Teleology (Philosophy of Science, Vol. 10, No. 1, Jan 1943) pp. 18-24

Lucy Suchman, Restoring Information’s Body: Remediations at the Human-Machine Interface (Medea, 20 October 2011) Recording via YouTube
 
Norbert Wiener, The Human Use of Human Beings (1950, 1954)

Stanford Encyclopedia of Philosophy: A cybernetic view of human nature

Saturday, February 15, 2020

The Dashboard Never Lies

The lie detector (aka #polygraph) is back in the news. The name polygraph is based on the fact that the device can record and display several things at once. Like a dashboard.

In the 1920s, a young American physiologist named John Larson devised a version for detecting liars, which measured blood pressure, respiration, pulse rate and skin conductivity. Larson called his invention, which he took with him to the police, a cardiopneumo psychogram, but polygraph later became the standard term. To this day, there is no reliable evidence that polygraphs actually work, but the great British public will no doubt be reassured by official PR that makes our masters sound like the heroes of an FBI crime series.
Poole

Over a hundred years ago, G.K. Chesterton wrote a short story exposing the fallacy of relying on such a machine. Even if the measurements are accurate, they can easily be misinterpreted.

There's a disadvantage in a stick pointing straight. The other end of the stick always points the opposite way. It depends whether you get hold of the stick by the right end.
Chesterton

There are of course many ways in which the data displayed on the dashboard can be wrong - from incorrect and incomplete data to muddled or misleading calculations. But even if we discount these errors, there may be many ways in which the user of the dashboard can get the wrong end of the stick.

As I've pointed out before, along with the illusion that what the data tells you is true, there are two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important.

No machine can lie, nor can it tell the truth.
Chesterton



G.K. Chesterton, The Mistake of the Machine (1914)

Hannah Devlin, Polygraph’s revival may be about truth rather than lies (The Guardian, 21 January 2020)

Megan Garber, The Lie Detector in the Age of Alternative Facts (Atlantic, 29 March 2018)

Steven Poole, Is the word 'polygraph' hiding a bare-faced lie? (The Guardian, 23 January 2020)

Related posts: Memory and the Law (June 2008), How Dashboards Work (November 2009), Big Data and Organizational Intelligence (November 2018)

Saturday, August 03, 2019

Towards the Data-Driven Business

If we want to build a data-driven business, we need to appreciate the various roles that data and intelligence can play in the business - whether improving a single business service, capability or process, or improving the business as a whole. The examples in this post are mainly from retail, but a similar approach can easily be applied to other sectors.


Sense-Making and Decision Support

The traditional role of analytics and business intelligence is helping the business interpret and respond to what is going on.

Once upon a time, business intelligence always operated with some delay. Data had to be loaded from the operational systems into the data warehouse before they could be processed and analysed. I remember working with systems that generated management information based on yesterday's data, or even last month's data. Of course, such systems don't exist any more (!?), because people expect real-time insight, based on streamed data.

Management information systems are supposed to support individual and collective decision-making. People often talk about actionable intelligence, but of course it doesn't create any value for the business until it is actioned. Creating a fancy report or dashboard isn't the real goal, it's just a means to an end.

Analytics can also be used to calculate complicated chains of effects on a what-if basis. For example, if we change the price of this product by this much, what effect is this predicted to have on the demand for other products, what are the possible responses from our competitors, how does the overall change in customer spending affect supply chain logistics, do we need to rearrange the shelf displays, and so on. How sensitive is Y to changes in X, and what is the optimal level of Z?
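To make this concrete, here is a minimal sketch of one step in such a what-if calculation, assuming a constant-elasticity demand model; the elasticity value, prices and demand figures are invented for illustration, not taken from any real retailer:

```python
# Hypothetical what-if sketch: constant-elasticity demand model.
# The elasticity of -1.5 is an illustrative assumption.

def predicted_demand(base_demand: float, base_price: float,
                     new_price: float, elasticity: float = -1.5) -> float:
    """Predict demand after a price change, assuming constant elasticity."""
    return base_demand * (new_price / base_price) ** elasticity

# What-if: raise the price of a product from 2.00 to 2.20 (a 10% increase)
demand = predicted_demand(base_demand=1000, base_price=2.00, new_price=2.20)
revenue = demand * 2.20
```

A real system would chain many such steps together - demand effects on related products, competitor responses, logistics - but each link in the chain is a model of this general shape.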

Analytics can also be used to support large-scale optimization - for example, solving complicated scheduling problems.

 
Automated Action

Increasingly, we are looking at the direct actioning of intelligence, possibly in real-time. The intelligence drives automated decisions within operational business processes, often without a human-in-the-loop, where human supervision and control may be remote or retrospective. A good example of this is dynamic retail pricing, where an algorithm adjusts the prices of goods and services according to some model of supply and demand. In some cases, optimized plans and schedules can be implemented without a human in the loop.

So the data doesn't just flow from the operational systems into the data warehouse, but there is a control flow back into the operational systems. We can call this closed loop intelligence.
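A hypothetical sketch of such a closed loop, in the dynamic pricing style mentioned above; the proportional adjustment rule, the gain parameter and the demand readings are all invented for illustration:

```python
# Hypothetical closed-loop pricing sketch: sensed demand feeds back
# into an automated price adjustment, with no human in the loop.

def adjust_price(current_price: float, observed_demand: float,
                 target_demand: float, gain: float = 0.05) -> float:
    """Nudge price up when demand exceeds target, down when it falls short.
    The proportional gain of 0.05 is an illustrative assumption."""
    error = (observed_demand - target_demand) / target_demand
    return round(current_price * (1 + gain * error), 2)

price = 10.00
for observed in [120, 110, 105]:   # demand readings streamed in
    price = adjust_price(price, observed, target_demand=100)
```

The point is the shape of the loop - observation flowing back into action - rather than the particular rule, which in practice would be far more sophisticated.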

(If it takes too much time to process the data and generate the action, the action may no longer be appropriate. A few years ago, one of my clients wanted to use transaction data from the data warehouse to generate emails to customers - but with their existing architecture there would have been a 48 hour delay from the transaction to the email, so we needed to find a way to bypass this.)


Managing Complexity

If you have millions of customers buying hundreds of thousands of products, you need ways of aggregating the data in order to manage the business effectively. Customers can be grouped into segments, products can be grouped into categories, and many organizations use these groupings as a basis for dividing responsibilities between individuals and teams. However, these groupings are typically inflexible and sometimes seem perverse.

For example, in a large supermarket, after failing to find maple syrup next to the honey as I expected, I was told I should find it next to the custard. There may well be a logical reason for this grouping, but this logic was not apparent to me as a customer.

But the fact that maple syrup is in the same product category as custard doesn't just affect the shelf layout, it may also mean that it is automatically included in decisions affecting the custard category and excluded from decisions affecting the honey category. For example, pricing and promotion decisions.

A data-driven business is able to group things dynamically, based on affinity or association, and then allows simple and powerful decisions to be made for this dynamic group, at the right level of aggregation.

Automation can then be used to cascade the action to all affected products, making the necessary price, logistical and other adjustments for each product. This means that a broad plan can be quickly and consistently implemented across thousands of products.
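To illustrate the idea (with a deliberately crude affinity measure and made-up products and prices), dynamic grouping and cascading might be sketched like this:

```python
# Hypothetical sketch: group products dynamically by purchase affinity,
# then cascade one pricing decision across the whole group.

products = {
    "maple_syrup": {"price": 3.50, "bought_with": {"pancake_mix", "honey"}},
    "honey":       {"price": 2.80, "bought_with": {"pancake_mix", "maple_syrup"}},
    "custard":     {"price": 1.20, "bought_with": {"tinned_fruit"}},
}

def affinity_group(seed: str, catalogue: dict) -> set:
    """Products whose co-purchase sets overlap the seed's (a crude affinity)."""
    seed_set = catalogue[seed]["bought_with"]
    return {name for name, p in catalogue.items()
            if name == seed or p["bought_with"] & seed_set}

def cascade_discount(group: set, catalogue: dict, pct: float) -> None:
    """Apply one percentage decision consistently to every product in the group."""
    for name in group:
        catalogue[name]["price"] = round(catalogue[name]["price"] * (1 - pct), 2)

group = affinity_group("maple_syrup", products)
cascade_discount(group, products, pct=0.10)
```

Here maple syrup and honey end up in the same dynamic group because customers buy them together, regardless of where the category hierarchy has filed them.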


Experimentation and Learning

In a data-driven business, every activity is designed for learning as well as doing. Feedback is used in the cybernetic sense - collecting and interpreting data to control and refine business rules and algorithms.

In a dynamic world, it is necessary to experiment constantly. A supermarket or online business is a permanent laboratory for testing the behaviour of its customers. For example, A/B testing where alternatives are presented to different customers on different occasions to test which one gets the best response. As I mentioned in an earlier post, Netflix declares themselves "addicted" to the methodology of A/B testing.

In a simple controlled experiment, you change one variable and leave everything else the same. But in a complex business world, everything is changing. So you need advanced statistics and machine learning, not only to interpret the data, but also to design experiments that will produce useful data.
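For the simple controlled case, a minimal sketch of evaluating an A/B test with a standard two-proportion z-test; the conversion numbers are invented for illustration:

```python
# Minimal A/B test sketch: compare conversion rates of two variants
# with a two-proportion z-test (illustrative numbers).
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Variant A: 120 conversions out of 2400; variant B: 90 out of 2400
p_value = two_proportion_z(120, 2400, 90, 2400)
```

This is the textbook case where only one variable changes; the machine learning comes in precisely when that assumption breaks down.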


Managing Organization

A traditional command-and-control organization likes to keep the intelligence and insight in the head office, close to top management. An intelligent organization on the other hand likes to mobilize the intelligence and insight of all its people, and encourage (some) local flexibility (while maintaining global consistency). With advanced data and intelligence tools, power can be driven to the edge of the organization, allowing for different models of delegation and collaboration. For example, retail management may feel able to give greater autonomy to store managers, but only if the systems provide faster feedback and more effective support. 


Transparency

Related to the previous point, data and intelligence can provide clarity and governance to the business, and to a range of other stakeholders. This has ethical as well as regulatory implications.

Among other things, transparent data and intelligence reveal their provenance and derivation. (This isn't the same thing as explanation, but it probably helps.)




Obviously most organizations already have many of the pieces of this, but there are typically major challenges with legacy systems and data - especially master data management. Moving onto the cloud, and adopting advanced integration and robotic automation tools may help with some of these challenges, but it is clearly not the whole story.

Some organizations may be lopsided or disconnected in their use of data and intelligence. They may have very sophisticated analytic systems in some areas, while other areas are comparatively neglected. There can be a tendency to over-value the data and insight you've already got, instead of thinking about the data and insight that you ought to have.

Making an organization more data-driven doesn't always entail a large transformation programme, but it does require a clarity of vision and pragmatic joined-up thinking.


Related posts: Rhyme or Reason: The Logic of Netflix (June 2017), Setting off towards the Data-Driven Business (August 2019)


Updated 13 September 2019

Tuesday, November 06, 2018

Big Data and Organizational Intelligence

Ten years ago, the editor of Wired Magazine published an article claiming the end of theory. With enough data, the numbers speak for themselves.

The idea that data (or facts) speak for themselves, with no need for interpretation or analysis, is a common trope. It is sometimes associated with a legal doctrine known as Res Ipsa Loquitur - the thing speaks for itself. However this legal doctrine isn't about truth but about responsibility: if a surgeon leaves a scalpel inside the patient, this fact alone is enough to establish the surgeon's negligence.

Legal doctrine aside, perhaps the world speaks for itself. The world, someone once asserted, is all that is the case, the totality of facts not of things. Paradoxically, big data often means very large quantities of very small (atomic) data.

But data, however big, does not provide a reliable source of objective truth. This is one of the six myths of big data identified by Kate Crawford, who points out, data and data sets are not objective; they are creations of human design. In other words, we don't just build models from data, we also use models to obtain data. This is linked to Piaget's account of how children learn to make sense of the world in terms of assimilation and accommodation. (Piaget called this Genetic Epistemology.)

Data also cannot provide explanation or understanding. Data can reveal correlation but not causation. Which is one of the reasons why we need models. As Kate Crawford also observes, we get a much richer sense of the world when we ask people the why and the how not just the how many. And Bernard Stiegler links the end of theory glorified by Anderson with a loss of reason (2019, p8).

In the traditional world of data management, there is much emphasis on the single source of truth. Michael Brodie (who knows a thing or two about databases), while acknowledging the importance of this doctrine for transaction systems such as banking, argues that it is not appropriate everywhere. In science, as in life, understanding of a phenomenon may be enriched by observing the phenomenon from multiple perspectives (models). ... Database products do not support multiple models, i.e., the reality of science and life in general. One approach Brodie talks about to address this difficulty is ensemble modelling: running several different analytical models and comparing or aggregating the results. (I referred to this idea in my post on the Shelf-Life of Algorithms).
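A toy sketch of the ensemble idea - the three models here are deliberately trivial stand-ins for real analytical models - aggregating several predictions and measuring how much the models disagree:

```python
# Ensemble sketch: combine several models' predictions instead of
# trusting a single "source of truth". Models here are toy stand-ins.

models = [
    lambda x: 2.0 * x,          # a linear model
    lambda x: x ** 2 / 5.0,     # a quadratic model
    lambda x: 1.8 * x + 1.0,    # another linear model
]

def ensemble_predict(x: float) -> float:
    """Aggregate by simple averaging."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

def ensemble_spread(x: float) -> float:
    """How much the models disagree at this point - a crude signal
    that the multiple perspectives are telling different stories."""
    preds = [m(x) for m in models]
    return max(preds) - min(preds)
```

The spread is as informative as the average: where the perspectives diverge, a single-source-of-truth system would simply hide the disagreement.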

Along with the illusion that what the data tells you is true, we can identify two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important. These are not just illusions of big data of course - any monitoring system or dashboard can foster them. The panopticon affects not only the watched but also the watcher.

From the perspective of organizational intelligence, the important point is that data collection, sensemaking, decision-making, learning and memory form a recursive loop - each inextricably based on the others. An organization only perceives what it wants to perceive, and this depends on the conceptual models it already has - whether these are explicitly articulated or unconsciously embedded in the culture. Which is why real diversity - in other words, genuine difference of perspective, not just bureaucratic profiling - is so important, because it provides the organizational counterpart to the ensemble modelling mentioned above.


https://xkcd.com/552/


Each day seems like a natural fact
And what we think changes how we act



Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired, 23 June 2008)

Michael L Brodie, Why understanding of truth is important in Data Science? (KD Nuggets, January 2018)

Kate Crawford, The Hidden Biases in Big Data (HBR, 1 April 2013)

Kate Crawford, The Anxiety of Big Data (New Inquiry, 30 May 2014)

Bruno Gransche, The Oracle of Big Data – Prophecies without Prophets (International Review of Information Ethics, Vol. 24, May 2016)

Kevin Kelly, The Google Way of Science (The Technium, 28 June 2008)

Thomas McMullan, What does the panopticon mean in the age of digital surveillance? (Guardian, 23 July 2015)

Evelyn Ruppert, Engin Isin and Didier Bigo, Data politics (Big Data and Society, July–December 2017: 1–7)

Ian Steadman, Big Data and the Death of the Theorist (Wired, 25 January 2013)

Bernard Stiegler, The Age of Disruption: Technology and Madness in Computational Capitalism (English translation, Polity Press 2019)

Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922)


Related posts

Information Algebra (March 2008), How Dashboards Work (November 2009), Conceptual Modelling - Why Theory (November 2011), Co-Production of Data and Knowledge (November 2012), Real Criticism - The Subject Supposed to Know (January 2013), The Purpose of Diversity (December 2014), The Shelf-Life of Algorithms (October 2016), The Transparency of Algorithms (October 2016), Algorithms and Governmentality (July 2019), Mapping out the entire world of objects (July 2020)


Wikipedia: Ensemble Learning, Genetic Epistemology, Panopticism, Res ipsa loquitur (the thing speaks for itself)

Stanford Encyclopedia of Philosophy: Kant and Hume on Causality


For more on Organizational Intelligence, please read my eBook.
https://leanpub.com/orgintelligence/

Tuesday, November 10, 2009

How Dashboards Work

There are three ways of understanding a dashboard.

Symbolic. A dashboard simplifies and codifies What-Is-Going-On.

Imaginary. A dashboard creates an illusion of an accurate and comprehensive picture of What-Is-Going-On.

Real. The dashboard always falls short of representing What-Is-Going-On. There is always a shadow - something left over that eludes simple capture and representation, often remaining invisible or inaccessible.

Dashboard design and development often focuses on the symbolic - defining the contents of the dashboard as an aggregation of services and data feeds, defining the events that are to be displayed (for example as warning lights), and setting simple policies that set thresholds of attention for predictable events.
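As a hypothetical sketch of what such a simple policy amounts to (the metric names and threshold levels are invented):

```python
# Hypothetical sketch of a simple dashboard policy: each metric gets a
# fixed threshold, and a breach lights a warning. Names and levels invented.

THRESHOLDS = {
    "cpu_load":    0.90,   # fraction of capacity
    "error_rate":  0.05,   # errors per request
    "queue_depth": 1000,   # pending messages
}

def warning_lights(readings: dict) -> list:
    """Return the metrics whose readings breach their threshold."""
    return [metric for metric, value in readings.items()
            if metric in THRESHOLDS and value > THRESHOLDS[metric]]

lights = warning_lights({"cpu_load": 0.95, "error_rate": 0.01, "queue_depth": 1500})
```

Note what this leaves out: any sense of how the metrics interrelate, and any mechanism for questioning whether the thresholds themselves are the right ones - which is precisely the limitation discussed below.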

Some people seem to view this as a key element of systems thinking. For example Bas de Baar, who blogs under the name "Project Shrink", identifies Systems Thinking As A Technique To Find Project Problems and claims "If I have the right metrics I can ignore everything around me and focus just on the dashboard." He appears to justify this claim by comparing project managers with fish (Swimming Upstream The Information Flow). Fish may be reasonably well adapted to many environments, but they cannot deal with the complex threats posed by the much more intelligent dolphin, as I pointed out in my blog on Lean versus Complex.

However, an emphasis on the dashboard as a simple collection of metrics overlooks the way the dashboard is used within a socio-technical system. Isaac Asimov wrote a story called Reason in which a robot controlled a dashboard perfectly, while refusing to believe in any system beyond the dashboard, but of course that's science fiction.

If we imagine that the purpose of a dashboard is to support prompt and appropriate action in a wide range of normal and abnormal operating conditions, then it should support as much (human, organizational) intelligence as is required to maintain the viability and safety of the system in a given complex environment.

One of the lessons from the Three Mile Island disaster was that when something goes seriously wrong, all the red lights on the dashboard start flashing at the same time, and unless the people in charge of the system have some way of making sense of the emergency (emerging situation), they may make things worse rather than better. (Deming calls this tampering or meddling.)

The safe operation of the nuclear power plant is not just about the design of the dashboard, or about the training of the operators, but about the whole system producing good outcomes in all circumstances. In the Springfield Nuclear Plant, the risk comes not just from idiot operators (Homer Simpson) but from corrupt managers (Montgomery Burns).

Many dashboards are designed merely to aggregate and push information into a social system that the dashboard designer doesn't bother to understand. Prior to Three Mile Island, this was often true even in safety-critical situations. I trust that the safety-critical world has now learned this lesson, but dashboards in other contexts may not be so conscientiously designed and tested.

In management information systems, a dashboard focuses executive attention onto specific aspects of the business. But there seems to be an important difference between a random collection of KPIs or guiding ratios, and a joined-up view that helps the executive reason about the business as a whole system. The standard dashboard is lacking at least two elements of organizational intelligence: Sense-Making (helping the executive see how the different items are interrelated) and Double-Loop Learning (not just getting better at meeting fixed targets, but setting more appropriate targets).

So a dashboard needs to be designed to perform a clear function within an action-based command and control system, rather than merely a simplistic reporting function.

However, there are two traps here. Firstly the hubris of the designer, imagining that a complete understanding of a complex system is possible, or expecting to produce a perfect shadow-free dashboard. Secondly, the blinkered vision of the operator, staring at the dashboard and failing to look out of the window.

In any case, management-by-dashboard seems a pretty inauthentic and disengaged way of running a company. Perhaps it is ironic that HP, one of the leading technology companies of our time, boasts its co-founder Dave Packard as one of the earliest modern practitioners of the opposite technique - "management by walking around". (HP Timeline 1940s)



This blog is an edited version of my contributions to a discussion on the Lenscraft LinkedIn group. Thanks to Aidan Ward, Geoff Elliott and Hans Lodder for their stimulating comments.

Related posts: Does Cameron's Dashboard App Improve the OrgIntelligence of Government? (January 2013), Big Data and Organizational Intelligence (November 2018)