tag:blogger.com,1999:blog-61067822024-03-14T04:51:34.212+00:00Architecture, Data and Intelligenceby Richard VeryardRichard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.comBlogger841125tag:blogger.com,1999:blog-6106782.post-65011705680386680562023-11-30T20:59:00.008+00:002023-12-01T00:19:15.920+00:00Dynamic Pricing Update<p>As I pointed out back in 2006, the concept of <a href="https://rvsoapbox.blogspot.com/2006/04/dynamic-pricing.htm">Dynamic Pricing</a> has been around for at least 25 years. I first encountered it in Kevin Kelly's 1998 book, <a href="https://kk.org/books/new-rules-for-the-new-economy">New Rules for the New Economy</a>.</p><p>Five years ago, when I was working for a large UK supermarket, this concept was starting to be taken seriously. The old system of sending people around the store putting yellow stickers on items that were reaching their sell-by date was seen as labour-intensive and error-prone. There were also some trials with electronic shelf-edge labels to address the related challenge of managing special offers and discounts for a specific stock-keeping unit (SKU). At the time, however, the business was not ready to invest in implementing these technologies across the whole business.</p><p>The BBC reports that these systems are now widely used in other European countries, and there are further trials in the UK. This is being promoted as a way of reducing food waste. According to Matthias Guffler of EY Germany, around 400,000 tonnes of food is wasted every year, costing German retailers over 2 billion euros. Obviously no system will completely eliminate waste, but even reducing this by 10% would represent a significant saving.</p><p>Saving for whom?
Clearly some consumers will benefit from a more efficient system of marking down items for quick sale, but there are concerns that other consumers will be disadvantaged by the potential uncertainty and lack of transparency, especially if retailers also start using this technology to increase prices in response to high demand.<br /></p><hr /><p>MaryLou Costa, <a href="https://www.bbc.co.uk/news/business-67507932">Why food discount stickers may be a thing of the past</a> (BBC News, 30 November 2023)</p><p>Matthias Guffler, <a href="https://www.ey.com/de_de/strategy/handel-lebensmittelverschwendung-vermeiden">Wie der Handel das Problem der Lebensmittelverschwendung lösen kann</a> [How retail can solve the problem of food waste] (EY-Parthenon, 28 March 2023)</p><p><br /></p><p>See also <a href="https://rvsoapbox.blogspot.com/2017/05/the-price-of-everything.html">The Price of Everything</a> (May 2017)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-68554045441056363122023-06-21T21:42:00.078+01:002023-06-24T16:14:31.418+01:00Documenting Business Requirements<p style="text-align: left;"><i>A couple of questions came in on my phone, and I thought I'd share my answers here.</i></p><p style="text-align: left;"><br /></p><h4 style="text-align: left;"><u>When is a Business Requirements Document good enough?<br /></u></h4><p style="text-align: left;"><i> </i></p><p style="text-align: left;"><i>Here are a few considerations to start with.</i><br /></p><p style="text-align: left;"><b>Fit for purpose</b> - What is the document going to be used for - planning, estimating, vendor selection, design or whatever? Does it contain just enough for this purpose, or does it dive (prematurely) into solution design or technical detail?</p><p style="text-align: left;"><b>SMART</b>
- are the requirements specific, measurable, and so on, or are they
just handwaving? How will you know when (if) the requirements have been
satisfied?</p><p style="text-align: left;"></p><p style="text-align: left;"><b>Transparency and governance</b> - is it clear whose requirements are being prioritized (FOR WHOM)? Who wrote it and what is their agenda? (Documents written by a vendor or consultancy may suit their interests more than yours.)<br /></p><p style="text-align: left;"><b>Future-proof</b> - are these requirements merely short-term and tactical? Are all the urgent requirements equally important?</p><p style="text-align: left;"><b>Complete</b> - ah, but what does that mean? <br /></p><p style="text-align: left;"><br /></p><p style="text-align: left;"><br /></p><h4 style="text-align: left;"><u>When is a Business Requirements Document complete?</u></h4><p style="text-align: left;"><br /></p><p style="text-align: left;"><b>Scope completeness</b> - covering a well-defined area of the business in terms of capability, process, org structure, ...<br /></p><p style="text-align: left;"><b>Viewpoint completeness</b> - showing requirements for people, process, data, technology, ...</p><p style="text-align: left;"><b>Constraint completeness</b> - identifying requirements relating to privacy, security, compliance, risk, ...<br /></p><p style="text-align: left;"><b>Transition plan</b> - not just data migration but also process change, big bang versus phased, ...</p><p style="text-align: left;"><b>Quality and support plan</b> - verification, validation and testing, monitoring and improvement. What is critical before going live versus what can be refined later.</p><p style="text-align: left;"><b>RAID</b> - risks, assumptions, issues and dependencies </p><p style="text-align: left;"><br /></p><p style="text-align: left;"><b>Important note</b> - this is not to say that a business requirements document must always be complete in all respects, as it's usually okay for requirements to develop and evolve over time. 
But it is important to be reasonably clear about those aspects of the requirements that are assumed to be more or less complete and stable within a time horizon appropriate for the stated purpose, and to make these assumptions explicit. For example, if you are using this document to select a COTS product, and that product needs to be a good fit for requirements for at least n years, then you would want to have a set of requirements that is likely to be broadly valid for this length of time. (And to avoid including design decisions tied to today's technology.)<br /></p><p style="text-align: left;"></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-76863407446183067232023-06-03T10:22:00.000+01:002023-06-03T10:22:08.913+01:00Netflix and Algorithms<p>Following my previous posts on Netflix, I have been reading a detailed analysis in <a href="https://edfinn.net/">Ed Finn</a>'s book, <b><i>What Algorithms Want</i></b> (2017).</p><p>Finn's answer to my question <a href="https://rvsoapbox.blogspot.com/2021/01/does-big-data-drive-netflix-content.html">Does Big Data Drive Netflix Content?</a> is no, at least not directly. Although Netflix had used data to commission new content as well as recommend existing content (Finn's example was <b><i>House of Cards</i></b>), it had apparently left the content itself to the producers, and then used data and algorithms to promote it. </p><blockquote><q>After making the initial decision to invest in House of Cards, Netflix was using algorithms to micromanage distribution, not production.</q> <cite>Finn p99</cite></blockquote><p>Obviously something written in 2017 doesn't say anything about what Netflix has been doing
more recently, but Finn seems to have been looking at the same examples
as the other pundits I referenced in my previous post.</p><p>Finn also makes some interesting points about the transition from the original Cinematch algorithm to what he calls Algorithm 2.0.<br /></p><blockquote><q>The 1.0 model gave way to a more nuanced, ambiguity-laden analytical environment, a more reflexive attempt to algorithmically comprehend Netflix as a culture machine. ... Netflix is no longer constructing a model of abstract relationships between movies based on ratings, but a model of live user behavior in their various apps</q> <cite>Finn p90-91</cite></blockquote><p></p><p>The coding system relies on a large but hidden human workforce, hidden
to reinforce the illusion of pure algorithmic recommendations (p96) and
perfect personalization (p107). As Finn sees it, algorithm 1.0 had a lot of data but no meaning, and was not able to go from data to desire (p93). Algorithm 2.0 has vastly more data, thanks to this coding system - but even the model of user behaviour still relies on abstraction. So exactly where is the data decoded and meaning reinserted (p96)? <br /></p><p>As Netflix executives acknowledge, so-called ghosts can emerge (p95), revealing a fundamental incompleteness (lack) in symbolic agency (p96).<br /></p><p><br /></p><hr /><p><br /></p><p>Ed Finn, What Algorithms Want: Imagination in the Age of Computing (MIT Press, 2017)</p><p>Alexis C. Madrigal, <a href="https://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/">How Netflix Reverse-Engineered Hollywood</a> (Atlantic, 2 January 2014)</p><p>Previous posts: <a href="https://rvsoapbox.blogspot.com/2017/06/rhyme-or-reason-logic-of-netflix.html">Rhyme or Reason - The Logic of Netflix</a> (June 2017), <a href="https://rvsoapbox.blogspot.com/2021/01/does-big-data-drive-netflix-content.html">Does Big Data Drive Netflix Content?</a> (January 2021)<br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-90290258579727564482023-03-06T22:03:00.005+00:002023-03-13T10:51:33.606+00:00Trusting the Schema<p>A long time ago, I did some work for a client that had an out-of-date and inflexible billing system. The software would send invoices and monthly statements to the customers, who were then expected to remit payment to clear the balance on their account.</p><p>The business had recently introduced a new direct debit system. Customers who had signed a direct debit mandate no longer needed to send payments.</p><p>But faced with the challenge of introducing this change into an old and inflexible software system, the accounts department came up with an ingenious and elaborate workaround. 
The address on the customer record was changed to the address of the internal accounts department. The computer system would print and mail the statement, but instead of going straight to the customer it arrived back at the accounts department. The accounts clerk used a rubber stamp PAID BY DIRECT DEBIT, and would then mail the statement to the real customer address, which was stored in the Notes field on the customer record.</p><p>Although this may be an extreme example, there are several important lessons that follow from this story.</p><p>Firstly, business can't always wait for software systems to be redeveloped, and can often show high levels of ingenuity in bypassing the constraints imposed by an unimaginative design.</p><p>Secondly, the users were able to take advantage of a Notes field that had been deliberately underdetermined to allow for future expansion.<br /></p><p>Furthermore, users may find clever ways of using and extending a system that were not considered by the original designers of the system. So there is a divergence between <b>technology-as-designed</b> and <b>technology-in-use</b>.</p><p>Now let's think what happens when the IT people finally get around to replacing the old billing system. They will want to migrate customer data into the new system. But if they simply follow the official documentation of the legacy system (schema etc), there will be lots of data quality problems.</p><p>And by documentation, I don't just mean human-generated material but also schemas automatically extracted from program code and data stores. Just because a field is called CUSTADDR doesn't mean we can guess what it actually contains.</p><p><br /></p><p>Here's another example of an underdetermined data element, which I presented at a DAMA conference in 2008. 
<a href="https://rvsoapbox.blogspot.com/2008/03/san-diego.html">SOA Brings New Opportunities to Data Management</a>.<br /></p><p>In this example, we have a sales system containing a Business Type called SALES PROSPECT. But the content of the sales system depends on the way it is used - the way SALES PROSPECT is interpreted by different sales teams.</p><ul style="text-align: left;"><li>Sales Executive 1 records only the primary decision-maker in the prospective organization. The decision-maker’s assistant is recorded as extra information in the NOTES field. </li><li>Sales Executive 2 records the assistant as a separate instance of SALES PROSPECT. There is a cross-reference between the assistant and the boss</li></ul><p>Now both Sales Executives can use the system perfectly well - in isolation. But we get interoperability problems under various conditions.</p><ul style="text-align: left;"><li>When we want to compare data between executives</li><li>When we want to reuse the data for other purposes</li><li>When we want to migrate to new sales system </li></ul><p>(And problems like these can occur with packaged software and software as a service just as easily as with bespoke software.)<br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9QL-z0XNlp1mrJTdbR6Ma-Qiw93kXJRxBRY3rm3h48wbuwmso2WUGPjp8z2eTwRaw6Q_rCS27JSwNbLv00yoOgOd0RfOrw32vqasp7McAluSl4N-ofVU50NGfhik26KLt4q92KKivAg9ktQEnnfMtiXQ5SZ6A_Ui2oHaGBrgLjynYXKpF-w/s1248/Screenshot%202023-03-06%20220746.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="742" data-original-width="1248" height="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9QL-z0XNlp1mrJTdbR6Ma-Qiw93kXJRxBRY3rm3h48wbuwmso2WUGPjp8z2eTwRaw6Q_rCS27JSwNbLv00yoOgOd0RfOrw32vqasp7McAluSl4N-ofVU50NGfhik26KLt4q92KKivAg9ktQEnnfMtiXQ5SZ6A_Ui2oHaGBrgLjynYXKpF-w/w400-h238/Screenshot%202023-03-06%20220746.png" 
width="400" /></a></div><br /> <p></p><p>So how did this mess happen? Obviously the original designer / implementer never thought about assistants, or never had the time to implement or document them properly. Is that so unusual? </p><p>And this again shows the persistent ingenuity of users - finding ways to enrich the data - to get the system to do more than the original designers had anticipated. </p><p> </p><p>And there are various other complications. Sometimes not all the data in a system was created there; some of it was brought in from an even earlier system with a significantly different schema. And sometimes there are major data quality issues, perhaps linked to a <a href="https://rvsoapbox.blogspot.com/2008/11/post-before-processing.html">post before processing</a> paradigm.<br /></p><p> </p><p>Both data migration and data integration are plagued by such issues. Since the data content diverges from the designed schemas, you can't rely on the schemas of the source data; you have to inspect the actual data content. Or undertake a massive data reconstruction exercise, often misleadingly labelled "data cleansing".</p><p><br /></p><p>There are several tools nowadays that can automatically populate your data dictionary or data catalogue from the physical schemas in your data store. This can be really useful, provided you understand the limitations of what this is telling you. So there are a few important questions to ask before you trust the physical schema as providing a complete and accurate picture of the actual contents of your legacy data store.</p><ul style="text-align: left;"><li>Was all the data created here, or was some of it mapped or translated from elsewhere? </li><li>Is the business using the system in ways that were not anticipated by the original designers of the system? 
</li><li>What does the business do when something is more complex than the system was designed for, or when it needs to capture additional parties or other details?</li><li>Are classification types and categories used consistently across the business? For example, if some records are marked as "external partner" does this always mean the same thing? </li><li>Do all stakeholders have the same view on data quality - what "good data" looks like? <br /></li><li>And more generally, is there (and has there been through the history of the system) a consistent understanding across the business as to what the data elements mean and how to use them?<br /></li></ul><p><br /></p><p>Related posts: <a href="https://rvsoapbox.blogspot.com/2008/11/post-before-processing.html">Post Before Processing</a> (November 2008), <a href="https://rvsoapbox.blogspot.com/2010/06/ecosystem-soa-2.html">Ecosystem SOA 2</a> (June 2010), <a href="https://demandingchange.blogspot.com/2023/03/technology-in-use.html">Technology in Use</a> (March 2023)<br /></p><p><br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-21893657084726455422023-02-19T10:31:00.006+00:002023-06-24T10:29:11.578+01:00Customer Multiple<p>A friend of mine shares an email thread from his organization discussing the definition of CUSTOMER, disagreeing as to which categories of stakeholder should be included and which should be excluded.</p>Why is this important? Why does it matter how
the CUSTOMER label is used? Well, if you are going to call yourself a
customer-centric organization, improve customer experience and increase
customer satisfaction, it would help to know whose experience, whose
satisfaction matters. And how many customers are there actually?<br /><p>The organization provides services to A, which are experienced by B and paid for by C, based on a contractual agreement with D. This is a complex network of actors with overlapping roles, and the debate is about which of these count as customers and which don't. I have often seen similar confusion elsewhere.<br /></p><p>My friend asks: Am I supposed to have a different customer definition for different teams (splitter), or one customer definition across the whole business (lumper)? As an architect, my standard response to this kind of question is: it depends.<br /></p><p>One possible solution is to prefix everything - CONTRACT CUSTOMER, SERVICE CUSTOMER, and so on. But although that may help sort things out, the real challenge is to achieve a joined-up strategy across the various capabilities, processes, data, systems and teams that are focused on the As, the Bs, the Cs and the Ds, rather than arguing as to which of these overlapping groups best deserves the CUSTOMER label.<br /></p><p>Sometimes there is no correct answer, but a best fit across the board. That's architecture for you!</p><p> </p><p>Many business concepts are not amenable to simple definition but have fuzzy boundaries. In my 1992 book, I explain the difference between monothetic classification (here is a single defining characteristic that all instances possess) and polythetic classification (here is a set of characteristics that instances mostly possess). See also my post <a href="https://rvsoapbox.blogspot.com/2009/02/modelling.html">Modelling Complex Classification</a> (February 2009).<br /></p><p>But my friend's problem is a slightly different one: how to deal with multiple conflicting monothetic definitions. One possibility is to lump all the As, Bs, Cs and Ds into a single overarching CUSTOMER class, and then provide different views (or frames) for different teams. 
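To make the lumper-with-views idea concrete, here is a minimal sketch; the class, the role names and the data are purely illustrative, not taken from any real system:

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    """One overarching class covering the As, Bs, Cs and Ds,
    with roles recorded explicitly rather than baked into the name."""
    name: str
    roles: set = field(default_factory=set)

def view(customers, role):
    # A team-specific frame: each team sees only "its" customers.
    return [c for c in customers if role in c.roles]

customers = [
    Customer("A", {"service_recipient"}),
    Customer("B", {"experiencer"}),
    Customer("C", {"payer", "contract_holder"}),
    Customer("D", {"contract_holder"}),
]

billing_customers = view(customers, "payer")             # the billing team's frame
contract_customers = view(customers, "contract_holder")  # the contracts team's frame
```

Each team gets its own frame over the same underlying records, so the argument shifts from who owns the word CUSTOMER to which roles each record plays.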
But this still leaves some important questions open, such as which of these types of customer should be included in the Customer Satisfaction Survey, whether they all carry equal weight in the overall scores, and whose responsibility it is to improve these scores.<br /></p><p> </p><p>In her book on medical ontology, Annemarie Mol develops Marilyn Strathern's notion of <b>partial connections</b> as a way of overcoming an apparent fragmentation of identity - in our example, between the Contract Customer and the Service Customer - when these are sometimes the same person. <br /></p><blockquote><q>Being one shapes and informs the other while they are also different identities. ... Not two different persons or one person divided into two. But they are partially connected, more than one, and less than two.</q> <cite>Mol pp 80-82</cite></blockquote><p>Mol argues that <q>frictions are vital elements of wholes</q>,</p><blockquote><q>... a tension that comes about inevitably from the fact that, somehow, we have to share the world. There need not be a single victor as soon as we do not manage to smooth all our differences away into consensus.</q> <cite>Mol p 114</cite></blockquote><p><br /></p><p>Mol's book is about medical practice rather than commercial business, but much of what she says about patients and their conditions applies also to customers. For example, there are some elements that generally belong to "the patient", and although in some cases there may be a different person (for example a parent or next-of-kin) who stands proxy for the patient and speaks on their behalf, it is usually not considered necessary to mention this complication except when it is specifically relevant.</p><p>Similar complexities can be found in commercial organizations. Let's suppose most customers pay their own bills but some customers have more complicated arrangements. 
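In code terms, one common way to keep the ordinary case simple is a default that can be overridden for the exceptions; this is only a sketch, and all the names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    holder: str
    payer: Optional["Account"] = None  # None means: pays their own bills

    def bill_to(self) -> "Account":
        # Callers never see the complication; they just ask who to bill.
        return self.payer if self.payer is not None else self

alice = Account("Alice")             # the ordinary case
carol = Account("Carol")
ben = Account("Ben", payer=carol)    # the proxy/payer case
```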
It should be possible to hide this kind of complexity most of the time.<br /></p><p>Human beings can generally cope with these elisions, ambiguities and tensions in practice, but machines (by which I mean bureaucracies as well as algorithms) not so well. Organizations tend to impose standard performance targets, monitored and controlled through standard reports and dashboards, which fail to allow for these complexities. My friend's problem is then ultimately a political one, how is responsibility for "customers" distributed and governed, who needs to see what, and what consequences may follow.</p><p><br /></p><p>(As it happens, I was talking to another friend yesterday, a doctor, about the way performance targets are defined, measured and improved in the National Health Service. Some related issues, which I may try to cover in a future post.)<br /></p><p><br /></p><hr /><p>Annemarie Mol, The Body Multiple: Ontology in Medical Practice (Duke University Press 2002)</p><p>Richard Veryard, Information Modelling - Practical Guidance (Prentice-Hall
1992) </p><ul style="text-align: left;"><li>Polythetic Classification Section 3.4.1 pp 99-100</li><li>Lumpers and Splitters Section 6.3.1, pp 169-171 <br /></li></ul>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-8871657894612885982022-09-11T09:02:00.003+01:002022-09-18T11:22:29.355+01:00Pitfalls of Data-Driven<p>@<a href="https://www.linkedin.com/posts/jonayre_business-data-evidence-activity-6974303836257583104-1dVg">Jon_Ayre</a> questions whether an organization's being data-driven drives the right behaviours. He identifies a number of pitfalls.</p><ul style="text-align: left;"><li>It's all too easy to interpret data through a biased viewpoint</li><li>Data is used to justify a decision that has already been made</li><li>Data only tells you what happens in the existing environment, so may have limited value in predicting the consequences of making changes to this environment</li></ul><p>In a comment below Jon's post, Matt Ballentine suggests that this is about evidence-based decision making, and notes the prevalence of confirmation bias. Which can generate a couple of additional pitfalls.</p><ul style="text-align: left;"><li>Data is used selectively - data that supports one's position is emphasized, while conflicting data is ignored.</li><li>Data is collected specifically to provide evidence for the chosen position - thus resulting in policy-based evidence instead of evidence-based policy.</li></ul><p>A related pitfall is availability bias - using data that is easily
available, or satisfies some quality threshold, and overlooking the
possibility that other data (so-called dark data) might reveal a
different pattern. In science and medicine, this can take the form of publication bias. In the commercial world, this might mean analysing successful sales and ignoring interrupted or abandoned transactions.<br /></p><p>It's not difficult to find examples of these pitfalls, both in the corporate world and in public affairs. See my analysis of <a href="https://rvsoapbox.blogspot.com/2017/03/from-dodgy-data-to-dodgy-policy-mrs.html">Mrs May's Immigration Targets</a>. See also Jonathan Wilson's piece on <a href="https://www.theguardian.com/football/blog/2022/sep/17/football-tacticians-bowled-over-by-quick-fix-data-risk-being-knocked-for-six">the limits of a data-driven approach in football</a>, in which he notes <q>low sample size, the selective nature of the data, and an absence of nuance</q>.</p><p>One of the false assumptions that leads to these pitfalls is the idea that <i><b>the data speaks for itself</b></i>. (This idea was asserted by the editor of Wired Magazine in 2008, and has been widely criticized since. See my post <a href="https://rvsoapbox.blogspot.com/2018/11/big-data-and-organizational-intelligence.html">Big Data and Organizational Intelligence</a>.) In which case, being data driven simply means <i><b>following the data</b></i>.<br /></p><p>During the COVID pandemic, there was much talk about following the data, or perhaps following the science. But given that there was often disagreement about which data, or which science, some people adopted an ultra-sceptical position, reluctant to accept any data or any science. Or they felt empowered to do their own research. (Francesca Tripodi sees parallels between the idea that one should research a topic oneself rather
than relying on experts, and the Protestant ethic of
bible study and scriptural inference. See my post <a href="https://demandingchange.blogspot.com/2021/05/thinking-with-majority-new-twist.html">Thinking with the majority - a new twist</a>.)</p><p>But I don't think being data-driven entails simply blindly following some data. There should be space for critical evaluation and sense-making, questioning the strength and relevance of the data, open to alternative interpretations of the data, and always hungry for new sources of data that might provide new insight or a different perspective. Experiments, tests.<br /></p><p>Jon talks about Amazon running experiments instead of relying on historical data alone. And in my post <a href="https://rvsoapbox.blogspot.com/2017/06/rhyme-or-reason-logic-of-netflix.html">Rhyme or Reason</a> I talked about the key importance of A/B testing at Netflix. If Amazon and Netflix don't count as data-driven organizations, I don't know what does.</p><p>So Matt asks if we should be talking about "experiment-driven" instead. I agree that experiment is important and useful, but I wouldn't put it
in the driving seat. I think we need multiple
tools for situation awareness (making sense of what is going on and
where it might be going) and action judgement (thinking through the
available action paths), and experimentation is just one of these tools.</p><p> <br /></p><hr /><p>Jonathan Wilson, <a href="https://www.theguardian.com/football/blog/2022/sep/17/football-tacticians-bowled-over-by-quick-fix-data-risk-being-knocked-for-six">Football tacticians bowled over by quick-fix data risk being knocked for six</a> (Guardian, 17 September 2022)<br /></p><p>Related posts: <a href="https://rvsoapbox.blogspot.com/2017/03/from-dodgy-data-to-dodgy-policy-mrs.html">From Dodgy Data to Dodgy Policy - Mrs May's Immigration Targets</a> (March 2017), <a href="https://rvsoapbox.blogspot.com/2017/06/rhyme-or-reason-logic-of-netflix.html">Rhyme or Reason</a> (June 2017). <a href="https://rvsoapbox.blogspot.com/2018/11/big-data-and-organizational-intelligence.html">Big Data and Organizational Intelligence</a> (November 2018), <a href="https://rvsoapbox.blogspot.com/2020/02/dark-data.html">Dark Data</a> (February 2020), <a href="https://rvsoapbox.blogspot.com/2020/11/business-science-and-its-enemies.html">Business Science and its Enemies</a> (November 2020), <a href="https://demandingchange.blogspot.com/2021/05/thinking-with-majority-new-twist.html">Thinking with the majority - a new twist</a> (May 2021), <a href="https://rvsoapbox.blogspot.com/2022/04/data-driven-reasoning-covid.html">Data-Driven Reasoning (COVID)</a> (April 2022)</p><p>My new book on Data Strategy now available on LeanPub: <a href="https://leanpub.com/howtodothingswithdata/">How To Do Things With Data</a>.<br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-62658986173959069412022-08-03T13:54:00.000+01:002023-02-19T10:32:16.251+00:00From Data to Doing<p>One of the ideas running through my work on #datastrategy is to see data as a means to an end, rather than an end in itself. As someone might once have written, </p><p></p><blockquote><i><q>Data scientists have only interpreted the world in various ways. 
The point however is to change it.</q></i></blockquote><p>Many people in the data world are focussed on collecting, processing and storing data, rendering and analysing the data in various ways, and making it available for consumption or monetization. In some instances, what passes for a data strategy is essentially a data management strategy.</p><p>I agree that this is important and necessary, but I don't think it is enough.<br /></p><p>I am currently reading a brilliant book by Annemarie Mol on Medical Ontology. In one chapter, she describes the uses of test data by different specialists in a hospital. The researchers in the hospital laboratory want to understand a medical condition in great detail - what causes it, how it develops, what it looks like, how to detect it and measure its progress, how it responds to various treatments in different kinds of patient. The clinicians on the other hand are primarily interested in interventions - what can we do to help this patient, what are the prospects and risks.</p><p>In the corporate world, senior managers often use data as a monitoring tool - screening the business for areas that might need intervention. Highly aggregated data can provide them with a thin but panoramic view of what is going on, but may not provide much guidance on corrective or preventative action. See my post on <a href="https://demandingchange.blogspot.com/2010/10/orgintelligence-in-control-room.html">OrgIntelligence in the Control Room</a> (October 2010).</p><p>Meanwhile, suppose your data strategy calls for a 360 view of key data domains, such as CUSTOMER and PRODUCT. If these initiatives are to be strategically meaningful to the business, and not merely exercises in technical plumbing, they need to be closely aligned with the business strategy - for example delivering on customer centricity and/or product leadership.</p><p>In other words, it's not enough just to have a lot of good quality data and to generate a lot of analytic insight. 
Hence the title of my new book - <a href="https://leanpub.com/howtodothingswithdata/">How To Do Things With Data</a>.</p><p><br /></p><hr /><p>Annemarie Mol, The Body Multiple: Ontology in Medical Practice (Duke University Press 2002)</p><p>My book on Data Strategy is now available in beta version. <a href="https://leanpub.com/howtodothingswithdata/">https://leanpub.com/howtodothingswithdata/</a> <br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-49521535525723035682022-08-01T09:38:00.001+01:002022-08-02T09:48:56.524+01:00New Book - How to do things with data<p>#datastrategy My latest book has been published by @Leanpub </p><p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://leanpub.com/howtodothingswithdata" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Book cover: How to do things with data" border="0" data-original-height="800" data-original-width="550" height="320" src="https://d2sofvawe08yqg.cloudfront.net/howtodothingswithdata/s_hero2x?1635583722" width="221" /></a></div><br /><p><br /></p><p>This is a beta version, and I intend to add more material as well as responding to feedback from readers and making general improvements. Subscribers will always have access to the latest version.<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-80112755753419002132022-07-31T17:12:00.000+01:002023-02-18T17:13:14.018+00:00COVID-19 - Anarchy or Panarchy?<p>In September 2005, we had reason to worry about the ability of a tightly coupled world to withstand shocks. At that time this included Hurricane Katrina and SARS. More recent crises, including the COVID-19 pandemic and the war in Ukraine have arguably outshocked these.<br />
<br />In his analysis of the economic sanctions imposed against Russia following its 2022 invasion of Ukraine, Simon Jenkins comments that <q>the interdependence of the world’s economies, so long seen as an instrument of peace, has been made a weapon of war</q>.
<br />
<br /><q>As the global economy becomes more tightly coupled, the chances of one event having a catastrophic impact on the entire system increase</q> notes an account called @<a href="https://twitter.com/Foz89107323/status/1619053086105866244">Forrest</a>. Forrest, billed as <q>anti-tech, right-wing dissident thought</q>, uses this statement as part of an argument against geoengineering remedies to climate change, on two grounds. Firstly, because meddling with complex systems is likely to have unforeseen consequences, and secondly because these supposed remedies represent a further power-shift towards global technological elites. (Bill Gates obviously, who else?)<br /></p><p>Other thinkers see this as an opportunity to shift from deterministic systems to more adaptive and resilient systems (Wieland) or to shift from technological capitalism to a different sociopolitical system (Zhang).<br /></p><p> </p><p></p><hr /><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p>Tony Dutzik, Defusing a <q>rigged to blow</q> economy: Rebuilding resilience in a suddenly fragile world (<a href="https://frontiergroup.org/articles/defusing-rigged-blow-economy-rebuilding-resilience-suddenly-fragile-world/">Frontier Group, 30 March 2020</a>) reprinted (<a href="https://www.strongtowns.org/journal/2020/3/31/defusing-an-economy-rigged-to-blow">Strong Towns, 1 April 2020</a>) </p><p>Nick Gall, <a href="https://blogs.gartner.com/nick_gall/2011/01/24/panarchitecture-architecting-a-network-of-resilient-renewal/">Panarchitecture: Architecting a Network of Resilient Renewal</a> (Gartner, 24 January 2011)<br />
<br />
Tim Harford, <a href="http://timharford.com/2020/03/why-the-crisis-is-a-test-of-our-capacity-to-adapt/">Why the crisis is a test of our capacity to adapt</a> (Financial Times, 20 March 2020)</p><p>Simon Jenkins, <a href="https://www.theguardian.com/commentisfree/2022/jul/29/putin-ruble-west-sanctions-russia-europe">The rouble is soaring and Putin is stronger than ever - our sanctions have backfired</a> (The Guardian, 29 July 2022)<br /></p><p>Andreas Wieland, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7753537/">Dancing the Supply Chain: Toward Transformative Supply Chain Management</a> (The Journal of Supply Chain Management, January 2021, 57(1), pp 58–73)<br /></p><p>Yanzhu Zhang, <a href="https://www.bsg.ox.ac.uk/blog/panarchy-relevant-covid-19-pandemic-times">Is panarchy relevant in the COVID-19 pandemic times?</a> (Blavatnik School of Government, 10 June 2020)</p><p>Related blogposts: <a href="https://rvsoapbox.blogspot.com/2005/09/efficiency-and-robustness.htm">Efficiency and Robustness - On Tight Coupling</a> (September 2005)</p><p><span style="font-size: xx-small;">Updated 18 February 2023 </span><br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-31762453733275841462022-07-29T11:34:00.006+01:002023-02-19T02:55:04.092+00:00Testing Multiples<p>From engineering to medicine, professionals are often forced to rely on tests that are not always completely accurate. In this post, I shall look at the tests that millions of people were obliged to use during the pandemic, to check whether they had COVID-19. The two most common tests were Lateral Flow and PCR. Lateral Flow was quicker and more convenient, while PCR took longer (because the sample had to be sent to a lab) and was supposedly more accurate.</p><p>There was also a difference in the data collected from these tests. 
Whereas all the results from the PCR tests should have been available in the labs, the results from the lateral flow tests were only reported under certain circumstances. There was no obligation to report a negative test unless you needed access to something, and people sometimes chose not to report positive tests because of the restrictions that might follow. And of course people only took the tests when they had to, or wanted to. When people had to pay for the tests, this obviously made a big difference.<br /></p><p>To compensate for these limitations, some random screening was carried out, which was designed to produce more reliable and representative datasets. However, these datasets were much smaller.</p><p> </p><p>So what can we do with this kind of data? Firstly, it tells us something about the disease - whether it is distributed evenly across the country or concentrated in certain places, how quickly it is spreading. If we can combine the test results with other information about the test subjects, we may be able to get some demographic information - for example, how is the disease affecting people of different age, gender or race, how is it affecting different job categories. And if we have information from the health service, we can estimate how many of those testing positive end up in hospital.<br /></p><p>This kind of information allows us to make predictions - for example, future demand for hospital beds, possible shortages of key workers. It also allows us to assess the effects of various protective measures - for example, to what extent does mask-wearing, social distancing and working from home reduce the rate of transmission.<br /></p><p>Besides telling us about the disease, the data should also be able to tell us something about the tests. And the accuracy of the predictions provides a feedback loop,
which may enable us to reassess either the test data or the predictive
models.</p><p> </p><p>In her book The Body Multiple, Annemarie Mol discusses the differences between two alternative tests for atherosclerosis, and describes how clinicians deal with cases where the two tests appear to provide conflicting results, as well as cases where there may be other reasons to question the test results. Instead of having a single view of the disease, she talks about its multiplicity or manyfoldedness.<br /></p><p>But questioning the test results in a particular case, or highlighting particular issues with a given test, does not mean denying the overall value of the test. Most of the time we can continue to regard a test as useful, even as we are considering ways of improving it.</p><p>If and when we introduce a new or improved test, we may then wish to translate data between tests. In other words, if test A produced result X, then we would have expected test B to produce result Y. While this kind of translation may be useful for statistical purposes, we need to be careful about its use in individual cases.</p><p>For many people, the second discourse appears to undermine the first discourse. If we can't <b>always </b>trust the data, can we <b>ever </b>trust the data? During the COVID pandemic, many rival interpretations of the data emerged; some people chose interpretations that confirmed their preconceptions, while others turned away from any kind of data-driven reasoning.</p><p> </p><p>The COVID pandemic became a politically contentious field, so what if we look at other kinds of testing? In safety engineering, components and whole products are subjected to a range of tests, which assess the risk of certain kinds of failure. 
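<p>Whatever the domain, the arithmetic of imperfect tests is worth making explicit: how much a positive result should be trusted depends not only on the test's sensitivity and specificity, but also on the prevalence of whatever is being tested for. Here is a minimal sketch of that calculation, using invented numbers that do not describe any real test:</p>

```python
# Hypothetical illustration with invented numbers - not real figures for
# lateral flow, PCR, or any other test.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition present | test positive), by Bayes' rule."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose 1% of those tested have the condition, and the test has
# 80% sensitivity and 97% specificity:
ppv = positive_predictive_value(0.01, 0.80, 0.97)
print(f"{ppv:.0%}")  # prints 21% - most positives are false positives
```

<p>With these invented numbers, roughly four out of five positive results are false positives, even though the test itself sounds accurate. This is one reason why the feedback loop mentioned above matters: comparing predictions against outcomes can prompt a reassessment of the assumed sensitivity and specificity themselves.</p>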
Obviously there are manufacturers and service providers with a commercial interest in how (and by whom) these tests are carried out, and there may be regulators and researchers looking at how these tests can be improved, or to detect various forms of cheating, but ordinary consumers don't generally spend hours on YouTube complaining about their accuracy and validity.</p><p>Meanwhile even basic corporate reporting may be subject to this kind of multiplicity, as illustrated in my recent post on <a href="https://rvsoapbox.blogspot.com/2022/07/data-estimation-stockpile-reports.html">Data Estimation</a> (July 2022).<br /></p><p>So there is a level of complexity here, which not all data users may feel comfortable with, but which data professionals may not feel comfortable about hiding. In a traditional report, these details are often pushed into footnotes, and in an online dashboard there may be symbols inviting the user to drill down for further detail. But is that good enough?</p><hr /><p>Annemarie Mol, The Body Multiple: Ontology in Medical Practice (Duke University Press 2002)<br /></p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/COVID-19_testing">COVID-19 testing</a></p><p>Related posts: <a href="https://rvsoapbox.blogspot.com/2022/04/data-driven-reasoning-covid.html">Data-Driven Reasoning - COVID</a> (April 2022), <a href="https://rvsoapbox.blogspot.com/2022/07/data-estimation-stockpile-reports.html">Data Estimation</a> (July 2022)</p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-38344132970149747552022-07-20T13:50:00.005+01:002022-09-10T22:49:55.926+01:00Boundary Objects and Artful Integration<p>I once did some data architecture and modelling for <a href="https://www.nhsbt.nhs.uk/">NHS Blood and Transplant</a>. 
This is a UK-wide agency responsible for blood, organs and other body parts, managing transfers from donors to recipients.<br /></p><p>One of the interesting challenges for this kind of organization is the need for collaboration between different specialist disciplines. Some teams are responsible for engaging with potential and regular donors, encouraging and arranging donation sessions for blood and plasma. Meanwhile there are other teams who need an extremely precise biomedical profile of each donor, to ensure safety as well as identifying people with rare blood types. While there is a conceptual boundary between these two sets of concerns, the teams need to collaborate effectively and reliably across this boundary.</p><p>So in terms of data and interoperability, we have an entity (in this case the donor) that is viewed in significantly different ways, but with a common identity. In the past, I've talked about two-faced entities or hinge entities, but the term that is generally used nowadays is Boundary Object.</p><p></p><blockquote><p><q>We define boundary object as those objects that both inhabit several communities of practice and satisfy the informational requirements of each of them.</q> <cite>Bowker and Star p 16</cite><br /></p></blockquote><p></p><blockquote><p><q>Boundary objects are the canonical forms of all objects in our built and natural environments.</q> <cite>Bowker and Star p 307</cite><br /></p><p></p></blockquote><p>In general, such boundary objects tend to be weakly structured, while being linked to much stronger structures in each separate domain. 
While boundary objects are considerably more than just data objects, they raise important questions for the data architect, who needs a critical eye for possible multiplicity or misfit that might compromise interoperability.</p><p>Simple multiplicity occurs when there are different levels of granularity each side of the boundary - one side lumping things together, the other side splitting them apart. In a library, for example, the people responsible for the catalogue may understand BOOK to refer to the title, while the people responsible for managing loans may want each physical copy to be represented as a separate instance of BOOK. And while people outside a warehouse may be happy with a notion of STOCK LOCATION that simply points to the warehouse as a single location, people inside the warehouse will want a more fine-grained notion, telling them more precisely where the stock can be found - for example, which shelf in which aisle.<br /></p><p>Misfits can occur when there are competing notions of inclusion or classification - for example, does PRODUCT include the products and services of our partners as well as our own, does it include legacy products we no longer sell, does it include products that haven't been launched yet?</p><p>And with people, it may not be clear whether we are interested in the person or the role.</p><p>One way of detecting these issues is simply to ask how many there are. If you get widely different answers, this is a pretty good indicator that people aren't talking about the same thing. But sometimes the discrepancies can be more subtle and harder to detect.</p><p>And resolving (brokering) these issues can often be a political challenge, as Kimble et al argue, not merely a technical one. Specialists may be reluctant to share even a highly simplified version of their view of an object, for fear that this information might be misunderstood and misapplied. And yet there may be some value in sharing some of this information. 
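<p>The lumping-and-splitting multiplicity described above - BOOK as catalogue title versus BOOK as physical copy - can be sketched as a minimal data model. This is purely illustrative: the class names, fields and ISBN-based identity are assumptions for the library example, not a recommended design.</p>

```python
from dataclasses import dataclass

# Illustrative sketch: one shared identity viewed at two levels of granularity.

@dataclass
class Title:            # the cataloguers' notion of BOOK: one record per work
    isbn: str
    name: str

@dataclass
class Copy:             # the lending team's notion of BOOK: one record per physical item
    isbn: str           # the shared identity linking the two views across the boundary
    copy_number: int
    on_loan: bool = False

title = Title("978-0262522953", "Sorting Things Out")
copies = [Copy(title.isbn, n) for n in range(1, 4)]

# "How many books are there?" gets a different answer on each side of the boundary.
print(len({title.isbn}), len(copies))  # 1 title, 3 copies
```

<p>Asking each side "how many books are there?" yields different but equally correct answers - which is precisely the diagnostic suggested above.</p>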
Furthermore, there may be considerable resistance to relaxing any of the strong constraints on either side of the boundary. So the data architect needs to negotiate exactly what the boundary object will be and how much it should contain.</p><p>In his earlier writings, Étienne Wenger described this as a broker role.</p><p></p><blockquote><q>The job of brokering is complex. It involves processes of translation, coordination and alignment between perspectives. It requires enough legitimacy to influence the development of a practice ... it also requires the ability to link practices by facilitating transactions between them and to cause learning by introducing into a practice, elements of another.</q> <cite>Wenger 1998, p 109</cite></blockquote><p></p><p>He now talks more generally about system convening, which combines and reinterprets several different roles, including that of broker.</p><p></p><blockquote><q>If a group needs some scaffolding and enabling, call a facilitator. A broker is ideal for helping to translate ideas from one practice to another. A weaver will join the dots, strategically connecting people into new networks. An inspiring visionary with charisma is not necessarily a systems convener. Nor is a person who convenes an event or manages systems change or multi-stakeholder processes. None of these roles in themselves are systems convening, although systems conveners often play some of them and it is quite possible that a person reinterpreting one of these roles ends up adopting a systems-convening approach.</q> <cite>Wenger-Trayner 2021, p 28</cite></blockquote>The creation of boundary objects seems to require a combination of these roles. Lucy Suchman talks about artful integration, <q>attempting to shift the frame of design practice and its objects from the figure of the heroic designer and associated next new thing, to ongoing, collective practices of sociomaterial configuration, and reconfiguration in use</q>. 
<p></p><p> </p><hr /><p> </p><p>Geoffrey Bowker and Susan Leigh Star, Sorting Things Out (MIT Press, 1999) - <a href="https://www.ics.uci.edu/~gbowker/classification/">online extract</a><br /></p><p>Mary L Darking, <a href="http://etheses.lse.ac.uk/259/1/Darking_Integrating%20on-line%20learning%20technologies%20into%20higher%20education.pdf">Integrating on-line learning technologies into higher education</a> (LSE 2004) </p><p>Chris Kimble, Corinne Grenier and Karine Goglio-Primard, <a href="https://www.researchgate.net/publication/222960258">Innovation and Knowledge Sharing Across Professional Boundaries: Political Interplay between Boundary Objects and Brokers</a> (International Journal of Information Management, October 2010)</p><p>Lucy Suchman, <a href="http://www.comp.lancs.ac.uk/sociology/papers/Suchman-Located-Accountabilities.pdf">Located Accountabilities in Technology Production</a> (Centre for Science Studies, Lancaster University, 2000-2003)<br /></p><p>Etienne Wenger, Communities of Practice: Learning, Meaning, and Identity (New York: Cambridge University Press, 1998)<br /></p><p>Etienne and Beverly Wenger-Trayner, System Convening: A crucial form of leadership for the 21st century (Social Learning Lab, 2021)<br /></p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/Boundary_object">Boundary Object</a><br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-40998691355782639212022-07-08T10:38:00.003+01:002024-03-10T18:27:58.742+00:00Data Estimation - Stockpile Reports<p>My attention has been drawn to a company called <a href="https://www.stockpilereports.com">Stockpile Reports</a>, which provides a data estimation service for piles of material. 
As I understand it, the calculations are based on visual images of a pile of material, possibly obtained using either a drone or a mobile phone app, from which a detailed 3D model of the pile is produced, allowing the volume, weight or value of the material to be estimated. <br /></p><p>I don't have any information about how good these estimates are, but they must be a lot more accurate than simply gauging the height of the pile, and I guess they are probably good enough for the organizations that use the service. In this blogpost, I want to make some observations about this service and its potential value to user organizations, as well as some general thoughts about aggregation, variety and governance.<br /></p><p>The top benefit promised by Stockpile Reports is to eliminate painful write-offs: inventory is shown as an asset in a company's accounts, and companies don't like having to make large adjustments when these asset values turn out to be grossly inaccurate.<br /></p><p>But this makes it seem as if the primary motivation for accurate inventory data is financial accounting. While I understand that many execs may be concerned about this, especially CFOs, financial accounting is a largely retrospective process, whereas operational excellence and other forms of strategic advantage have a more immediate need for accurate inventory data.<br /></p><p>And this is not just about the relative importance of financial reporting versus operational excellence etc. It is also about top-down versus bottom-up.
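<p>As an aside on the estimation step itself: once a surface model of the pile exists, the volume calculation is conceptually just a numerical integration of height over a base area. A deliberately crude sketch, assuming height samples on a regular grid and an assumed material density (the real photogrammetry pipeline is, of course, far more sophisticated):</p>

```python
# Deliberately crude sketch: volume as the sum of (height sample x cell area).
# Assumes heights (in metres) sampled on a regular square grid over a flat base;
# a real photogrammetry pipeline reconstructs the surface from imagery and must
# also correct for base topography. The density figure is an assumption.

def pile_volume(heights, cell_size):
    """heights: 2D grid of height samples (m); cell_size: grid spacing (m)."""
    cell_area = cell_size ** 2
    return sum(h * cell_area for row in heights for h in row)

def pile_tonnage(volume_m3, density_t_per_m3):
    return volume_m3 * density_t_per_m3

heights = [[0.0, 1.0, 0.0],
           [1.0, 2.5, 1.0],
           [0.0, 1.0, 0.0]]

volume = pile_volume(heights, cell_size=2.0)         # 26.0 cubic metres
tonnes = pile_tonnage(volume, density_t_per_m3=1.6)  # assumed density for dry gravel
```

<p>Every input here - the surface samples, the correction for a non-flat base, the assumed density - is a potential source of estimation error, which is one reason why the sources of variety discussed later in this post matter.</p>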
A write-off essentially means that the total figure is incorrect, and this plays into a top-down view of data quality and trustworthy data. Stockpile Reports provides a service whose primary purpose appears to be to reassure investors that the financial accounts are correct. </p><p>This leads me to talk about <b>aggregation</b>. Top-down aggregation is about calculating a number, from a given body of data, for a given purpose. For example, the purpose might be financial accounting, management accounting, or billing. (Where billing essentially means aggregating everything that has happened during the past month in order to determine how much the customer owes us.) With some exceptions (such as Activity-Based Costing), accounting is largely premised on reducing everything to a standard set of numbers. </p><p>So this implies three capabilities. </p><ol style="text-align: left;"><li>The capability of producing the correct number </li><li>The capability of inspecting the source data and calculation in order to explain or verify the number. (For example, the reason for itemized billing is to justify the number at the bottom of the invoice.) </li><li>The capability of preserving the source data and calculation, so that inspection and audit will be possible into the future. (Also mechanisms to allow external parties to trust the integrity of the numbers.) </li></ol><p>All of these capabilities are simplified if we can eliminate as much variation as possible. If Stockpile’s drones are solely assessing the size of different heaps of stuff in order to aggregate them, then the fact that the heaps are all different is simply an annoying difficulty that the technology is designed to solve. Thus the variation between the piles is smoothed over. <br /></p><p>Some degree of standardization may also be needed to support and enable Statistical Process Control. </p><p>But what if the differences between the heaps are interesting and strategically significant? 
Now we start to look at the data in a different way - no longer top-down aggregation but something else.
The Stockpile Reports claim is that estimating the size of the piles more frequently produces more accurate data. The immediate benefit is more consistent data and fewer nasty surprises. Presumably there is also a feedback loop, so that when a pile of gravel is loaded onto trucks, we can see if the quantity on the trucks matches the quantity we thought we had in the pile. So this is what might generate greater accuracy (over time), assuming other conditions remain unaltered. </p><p>Where is the variety in this business problem?</p><ul style="text-align: left;"><li>Different shapes and sizes of pile</li><li>Vegetation or other extraneous material on top of the pile</li><li>Different kinds of material - e.g. gravel versus sand</li><li>Lack of consistency in the material - e.g. the quality of the gravel may depend on the quality of the original rock, plus the operational characteristics of the extraction machinery. Even within a single pile, the gravel may not be completely homogeneous.</li><li>Weather. Presumably wet gravel is heavier than dry gravel.</li><li>Control. What difference does it make if the pile of gravel is at your customer’s or supplier’s site rather than your own? How transparent are inventory levels across organizational boundaries? Do companies really want to share this information, or might they have reasons for hiding or distorting the true state of their inventory? </li></ul><p>These are all factors that may need to be considered when estimating the pile. But there is a more fundamental question, which is about the meaning and use of the number that is produced by this estimation process, given the variety of industry operating contexts - so I wonder whether there is a universal understanding of what counts as a pile. When estimating the size of a pile of gravel, does it matter what the
gravel is going to be used for? What about the end-to-end journey of the
gravel: if a pile of gravel is moved to a new location, is it still the
same pile? </p><p>Depending on the frequency of viewing the pile, it may be possible to track changes in the shape of the pile, and this might provide information about stock movements (including unauthorized ones). For example, it may be possible to infer something about the movement vehicle (truck, wheelbarrow) from the difference made to the shape of the pile. For some materials, there may also be natural processes that change the volume of the pile over time - for example organic material may dry out or decay.<br /></p><p>But this discussion of variety brings us to an important question - variety for whom. If some degree of requisite agility is needed to provide requisite variety, where does this agility need to be situated? How much does Stockpile Reports know (need to know) about its customer’s context of use? </p><p>There
may be variation within a single customer - variation that provides economies of scale and scope to that customer, and allows the organization to operate with requisite agility in relation to a dynamic demand. Stockpile Reports might also wish to manage variation across
different customers. There may be some value in aggregating or comparing data from different
customers - for example, calibrating the estimates of similar materials,
allowing a new customer to get up to speed more quickly. </p><p>There may be some value to both Stockpile Reports and its customers from closer coordination and mutual agility. But this raises some important questions. Firstly, how this value is shared between them. Secondly what kind of mutual trust does this require. And therefore how this coordination is to be governed.</p><p><br /></p><p>See also my post on <a href="https://rvsoapbox.blogspot.com/2022/07/testing-multiples.html">Testing Multiples</a> (July 2022). Philip Boxer's analysis of the Stockpile example can be found here <a href="https://asymmetricleadership.com/2023/04/07/accelerating-demand-tempos-and-the-double-v-cycle/" rel="bookmark">Accelerating demand tempos and the double-V cycle</a> and here <a href="https://asymmetricleadership.com/2023/04/07/the-dialectics-implied-by-the-q-sectors/" rel="bookmark">The dialectics implied by the Q-sectors</a> (April 2023).<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-10885948056884343082022-07-07T14:26:00.002+01:002022-08-15T11:20:01.649+01:00Data Governance Overview<p>In many organizations there is a growing awareness of the need for data governance. This is often driven by a perception that something is lacking - perhaps related to data quality or accountability. So this leads to a solution based on data ownership, and the monitoring and control of data quality. 
But is that all there is to data governance?<br /></p><p>Data management operates at many levels, and there may be governance issues at each of these levels.</p><p> </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxTbR8VLBvp90HzrqOLAJ3PcJsW8Ai_Us06YB1KRMw8_1jJHY7rokgVtigUOR5Zi4vxdRMMG8TxebF5V4aLQQLkZhw-ztolGluiQGFqHGKEmqbolxpsYyzZGmpZ1-JTzGrrKJWD0obsoWXhQfFFIKjDD_BZqzszeKX_63_6O7QkUW_ere5vg/s1569/GovernanceLevels.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="634" data-original-width="1569" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxTbR8VLBvp90HzrqOLAJ3PcJsW8Ai_Us06YB1KRMw8_1jJHY7rokgVtigUOR5Zi4vxdRMMG8TxebF5V4aLQQLkZhw-ztolGluiQGFqHGKEmqbolxpsYyzZGmpZ1-JTzGrrKJWD0obsoWXhQfFFIKjDD_BZqzszeKX_63_6O7QkUW_ere5vg/w640-h258/GovernanceLevels.png" width="640" /></a></div> <p></p><p>In some organizations, we may be constrained to work bottom-up, starting from level 1 and 2, because this is where the perceived issues are focused, and where we can engage people to establish the correct activities and control mechanisms. Some level 3 issues may be perceived, but it may initially be difficult to find people willing to commit any effort to resolving them. Level 4 and 5 issues are "above the ceiling", at least within these forums. </p><p>There may be some clashes between the five levels, especially the higher ones. For example, at Level 5 the organization may assert a vision of a data-driven strategy, delivering strategic advantage to the business, while at Level 4 the data-related policies of an organization might be predominantly defensive, for example concerned with privacy, security and other compliance issues. If these two levels are not properly aligned, this is a matter for data governance, but one which bottom-up data governance is unlikely to reach. 
</p><p>So what would top-down data governance look like? </p><p><b>Purpose</b> (final cause) - for example, to drive the reach, richness, agility and assurance of enterprise data - see my posts on <a href="https://rvsoapbox.blogspot.com/search/label/DataStrategy">DataStrategy</a><br /></p><p><b>Approach</b> (efficient cause) - to establish policies and processes that align the enterprise to these purposes </p><p><b>Structure</b> (formal cause) - an understanding of the contradictions and conflicts that might enable or inhibit change - what kinds of collaboration and coordination might be possible - and how might these structures themselves be altered </p><p><b>Problem space</b> (material cause) - finding issues that people are willing to invest time and energy to address </p><p>Bottom-up data governance is driven by the everyday demands of case workers and decision-makers within the organization, the data fabric not being fit for purpose for fairly mundane tasks. Top-down data governance would be driven by senior management trying to push a transformation of data management and culture - always provided that they can give this topic much attention alongside all the other transformations they are trying to push. (I have some experiences from different organizations, which I plan to mash together into a generic example. Watch this space.) <br /></p><p>But what if that's not where the true demand lies? </p><p>Philip Boxer and I have been looking at alternatives to these top-down/bottom-up hierarchical forms, which we call edge-driven governance. If the traditional top-down / bottom-up can be thought of as North-South, then this is East-West. We recently got together at the Requisite Agility conference to discuss how far our thinking had developed (separately but largely in parallel) since we wrote our original papers for the Microsoft Architecture Journal. 
During this time, Phil spent a few years at the Software Engineering Institute (SEI) while I was mostly working at the architectural coal-face. There's a lot of really interesting work that has emerged recently, including further work by David Alberts, and a book on Bifurcation edited by Bernard Stiegler.<br /></p><p>So how can we make practical interventions into these data governance issues from an East-West perspective? There are some critical concepts that are in play in this world, including agility, alignment, centricity, flexibility, simplicity, variety. Everyone pays lip service to these concepts in their own way, but they turn out to be highly unstable, capable of being interpreted in quite different ways from different stakeholder positions. </p><p>So to pin these concepts down requires what John Law and Annemarie Mol call Ontological Politics. This include asking the Who-For-Whom question - agility for whom, flexibility for whom, etc. Philip has developed an abstract framework he calls Ontic Scaffolding, to support practical efforts to move “Across and Up”. <br /></p><p><i><br /></i></p><p><i>Note - videos from the Requisite Agility conference are currently in post-production and should be available soon. I'll post a link here when it is.</i><br /></p><hr /><p>
Philip Boxer, <a href="https://asymmetricleadership.com/2006/04/06/east-west-dominance/">East-West Dominance</a> (Asymmetric Leadership, April 2006)</p><p>Philip Boxer, <a href="https://www.slideshare.net/PhilipBoxer/2019-3rd-epoch-pathway-3-platform-architecture-consultation-v02-250832584">Pathways across the 3rd epoch domain</a> (Slideshare, November 2019)</p><p>Philip Boxer and Richard Veryard, <a href="http://msdn.microsoft.com/en-us/library/bb245658.aspx">Taking Governance to the Edge</a> (Microsoft Architecture Journal 6, August 2006) </p><p>Annemarie Mol, <a href="https://doi.org/10.1111/j.1467-954X.1999.tb03483.x">Ontological Politics</a> (Sociological Review 1999)<br /></p><p>Bernard Stiegler and the Internation Collective (eds), <a href="http://www.openhumanitiespress.org/books/titles/bifurcate/">Bifurcate: There Is No Alternative</a> (trans Daniel Ross, Open Humanities Press, 2021) <br /></p><p>Richard Veryard and Philip Boxer, <a href="http://msdn.microsoft.com/en-us/library/aa480051.aspx">Metropolis and SOA Governance: Towards the Agile Metropolis</a> (Microsoft Architecture Journal 5, July 2005) </p><p> </p><p>Related topics on Philip's blog: <a href="https://asymmetricleadership.com/category/agility/">Agility</a> </p><p>Related topics on Richard's blog: <a href="https://rvsoapbox.blogspot.com/search/label/EdgeStrategy">EdgeStrategy</a>, <a href="https://rvsoapbox.blogspot.com/search/label/top-down">Top-Down</a>, <a href="https://rvsoapbox.blogspot.com/search/label/RequisiteVariety">RequisiteVariety</a></p><p>NEW eBook <a href="https://leanpub.com/howtodothingswithdata">How to do things with data</a> (Leanpub 2022) <br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-50357259806265364922022-05-23T18:54:00.003+01:002022-05-27T10:27:14.283+01:00Risk Algebra<p>In this post, I want to explore some important synergies between architectural thinking and risk management. 
</p><hr /><p>The first point is that if we want to have an enterprise-wide understanding of risk, then it helps to have an enterprise-wide view of how the business is configured to deliver against its strategy. Enterprise architecture should provide a unified set of answers to the following questions.<br /></p><ul style="text-align: left;"><li>What capabilities delivering what business outcomes?</li><li>Delivering what services to what customers?</li><li>What information, process and resources to support these?</li><li>What organizations, systems, technologies and external partnerships to support these?</li><li>Who is accountable for what?</li><li>And how is all of this monitored, controlled and governed?<br /></li></ul><p>Enterprise architecture should also provide an understanding of the dependencies between these, and which ones are business-critical or time-critical. For example, there may be some components of the business that are important in the long-term, but could easily be unavailable for a few weeks before anyone really noticed. But there are other components (people, systems, processes) where any failure would have an immediate impact on the business and its customers, so major issues have to be fixed urgently to maintain business continuity. For some critical elements of the business, appropriate contingency plans and backup arrangements will need to be in place.<br /></p><p>Risk assessment can then look systematically across this landscape, reviewing the risks associated with assets of various kinds, activities of various kinds (processes, projects, etc), as well as other intangibles (motivation, brand image, reputation). 
Risk assessment can also review how risks are shared between the organization and its business partners, both officially (as embedded in contractual agreements) and in actual practice.</p><p> </p><p>Architects have an important concept, which should also be of great interest for enterprise risk management - the idea of a single point of failure (<a href="https://rvsoapbox.blogspot.com/search/label/SPOF">SPOF</a>). When this exists, it is often the result of a poor design, or an over-zealous attempt to standardize and strip out complexity. But sometimes this is the result of what I call <a href="https://rvsoapbox.blogspot.com/2011/03/creeping-business-dependency.html">Creeping Business Dependency</a> - in other words, not noticing that we have become increasingly reliant on something outside our control.<br /></p><p><br /></p><p>There are also important questions of scale and aggregation. Some years ago, I did some risk management consultancy for a large supermarket chain. One of the topics we were looking at was fridge failure. Obviously the supermarket had thousands and thousands of fridges, some in the customer-facing parts of the stores, some at the back, and some in the warehouses.</p><p>Fridges fail all the time, and there is a constant process of inspecting, maintaining and replacing fridges. So a single fridge failing is not regarded as a business risk. But if several thousand fridges were all to fail at the same time, presumably for the same reason, that would cause a significant disruption to the business.</p><p>So this raised some interesting questions. Could we define a cut-off point? How many fridges would have to fail before we found ourselves outside business-as-usual territory? 
What kind of management signals or dashboard could be put in place to get early warning of such problems, or to trigger a switch to a "safety" mode of operation?<br /></p><p>Obviously these questions aren't only relevant to fridges, but can apply to any category of resource, including people. During the pandemic, some organizations had similar issues in relation to staff absences.<br /></p><p><br /></p><p>Aggregation is also relevant when we look beyond a single firm to the whole ecosystem. Suppose we have a market or ecosystem with n players, and the risk carried by each player is R(n). Then what is the aggregate risk of the market or ecosystem as a whole? </p><p>If we assume complete independence between the risks of each player, then we may assume that there is a significant probability of a few players failing, but a very small probability of a large number of players failing - the so-called Black Swan event.
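This reasoning can be sketched as a toy Monte Carlo simulation. All the parameters here are invented for illustration - they are not drawn from any real market or fridge estate - but they show how sensitive the tail risk is to the independence assumption.

```python
# Toy Monte Carlo: probability that at least 10 of 100 components
# (fridges, market players) fail in the same period. All numbers are
# invented for illustration, not taken from any real data.
import random

random.seed(42)  # reproducible runs

def simulate(p_shock, n_players=100, p_fail=0.01, threshold=10, trials=20_000):
    """Estimate P(at least `threshold` simultaneous failures), with an
    optional common-cause shock that takes out every player at once."""
    extreme = 0
    for _ in range(trials):
        if random.random() < p_shock:
            failures = n_players  # common cause: everyone fails together
        else:
            # independent failures: each player fails with probability p_fail
            failures = sum(random.random() < p_fail for _ in range(n_players))
        if failures >= threshold:
            extreme += 1
    return extreme / trials

print(simulate(p_shock=0.0))    # independent risks: effectively zero
print(simulate(p_shock=0.002))  # a small common cause dominates the tail
```

With fully independent failures the chance of a mass failure is astronomically small; introducing even a 1-in-500 common-cause shock (a shared fridge model, a shared market exposure) makes the mass-failure probability roughly equal to the shock probability itself.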
Unfortunately, the assumption of independence can be flawed, as we have seen in financial markets, where there may be a tangled knot of interdependence between players. Regulators often think they can regulate markets by imposing rules on individual players. And while this might sometimes work, it is easy to see why it doesn’t always work. In some cases, the regulator draws a line in the sand (for example defining a minimum capital ratio) and then checks that nobody crosses the line. But then if everyone trades as close as possible to this line, how much capacity does the market as a whole have for absorbing unexpected shocks?</p><p> </p><p>Both here and in the fridge example, there is a question of standardization versus diversity. On the one hand, it's a lot simpler for the supermarket if all the fridges are the same type, with a common set of spare parts. But on the other hand, having more than one type of fridge helps to mitigate the risk of them all failing at the same time. It also gives some space for experimentation, thus addressing the longer term risk of getting stuck with an out-of-date fridge estate. The fridge example also highlights the importance of redundancy - in other words, having spare fridges. 
</p><p>So there are some important trade-offs here between pure economic optimization and a more balanced approach to enterprise risk.<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-85796406831214746452022-05-21T10:18:00.002+01:002022-10-24T09:52:03.135+01:00Uber and the Economics of Disruption<p>In a previous post on <a href="https://rvsoapbox.blogspot.com/2021/09/uber-mathematics-4.html">Uber Mathematics</a>, I asked how the magic of <q>digital</q> would allow a centralized company with international overheads (Uber or Lyft) to provide a service more cheaply and cost-effectively than local cab companies.</p><p>Uber once promised it would achieve economies of scale, thanks to <q>network effects</q>. But then, as Ali Griswold comments,</p><p></p><blockquote><q>What is scale if not a company that operates in 72 countries and more than 10,500 cities, which last year had 118 million active users every month and completed 6.3 billion rides/trips/deliveries? Uber is the definition of scale, yet it is still nowhere near consistent and reliable profitability.</q></blockquote><p></p><p>Uber appears to retain an advantage over local cab companies in terms of consumer convenience, which we can frame in terms of the economics of alignment. But as Henry Grabar points out, it remains to be seen how much consumers are willing to pay for this convenience when they are no longer being heavily subsidized by over-optimistic investors, and Uber is no longer the cheapest option.<br /></p><p>Another point Grabar makes about Uber is that it has disrupted alternative modes of transport, by a combination of predatory pricing and political influence, and the regulatory environment in many cities has been altered in Uber's favour. <br /></p><p>There are processes here that operate at different tempos. 
There is a demand tempo - getting sufficient numbers of drivers to a specific location at a specific time to satisfy peaks of demand. And an acquisition tempo - how long does it take to increase the number of available drivers, given the necessary investment in vehicles and equipment, driver training, and a whole range of safety issues.</p><p>Because there's a big difference between these two tempos, the challenge for a healthy ecosystem is in the intermediate tempo, which we call the integration tempo. What are the options for cities to restore requisite variety to local transportation?</p><p><br /></p><hr /><p>Aaron Gordon, <a href="https://jalopnik.com/uber-and-lyft-dont-have-a-right-to-exist-1837680434">Uber And Lyft Don't Have A Right To Exist</a> (Jalopnik, 30 August 2019), <a href="https://www.vice.com/en/article/m7vmpb/uber-and-lyft-are-out-of-ideas-jacking-up-prices-in-desperation-for-profit">Uber and Lyft Are Out of Ideas, Jacking Up Prices in Desperation for Profit</a> (Vice, 27 May 2022)<br /></p><p>Henry Grabar, <a href="https://slate.com/business/2022/05/uber-subsidy-lyft-cheap-rides.html">The Decade of Cheap Rides Is Over</a>
(Slate, 18 May 2022)</p><p>Ali Griswold, <a href="https://oversharing.substack.com/p/uber-wants-to-show-you-the-money">Uber wants to show you the money</a>
(Oversharing, 10 May 2022)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-87676886208906575472022-05-19T12:50:00.000+01:002022-05-21T17:41:07.567+01:00Enterprise as Noun<p>@<a href="https://www.linkedin.com/posts/tetradian_im-still-exploring-the-new-togaf-version-activity-6928075568391340032-8hkm">tetradian</a> spots a small, subtle yet very welcome sentence in the latest version of TOGAF. </p><p></p><blockquote>"An enterprise may include partners, suppliers, and customers as well as internal business units."</blockquote><p></p><p>Although similar statements can be found in earlier versions of TOGAF, and the Open Group (which produces TOGAF) has been talking about the "extended enterprise" and "boundarylessness" for over twenty years, these ideas have often seemed to take a back seat. So let's hope that the launch of TOGAF 10 will trigger a new emphasis on the "extended enterprise". </p><p>But why is this so difficult? It seems people are much more comfortable thinking about "the enterprise" or "the organization" as a fixed thing with a clearly defined perimeter. An enterprise is assumed to be a legal entity, with a dedicated workforce, one or more bank accounts and perhaps even a stock exchange listing. </p><p>Among other things, this encourages a perimeter-based approach to risk and security, where the primary focus is to control events crossing the perimeter in order to protect and manage what is inside. Companies often seek to export risk and uncertainty onto their customers or suppliers.<br /></p><p>Alternatives to the fortress model of security are known as de-perimeterization. The aptly named Jericho Forum was set up by the Open Group in 2004 to promote this idea, now merged into the Open Group's Security Forum. </p><p>Instead of enterprise as noun, what about enterprise as verb, thinking of enterprise simply as a way of being enterprising? 
This would allow us to consider transient collaborations as well as permanent legal entities. Businesses are sometimes described as "going concerns", and this phrase shifts our attention from the identity of the business to questions of activity, viability and intentionality.<br /></p><ul style="text-align: left;"><li><b>Activity </b>- What is going on?</li><li><b>Viability - </b>Is it going? Is it going to go on going?</li><li><b>Intentionality - </b>Where is it supposed to be going? Whose concern is it? <br /></li></ul><p>With this way of talking about enterprise, does it still make sense to talk about inside and outside, what belongs to the organization and what doesn't? I think it probably does, as long as we are careful not to take these terms too literally. Enterprises are assemblages of knowledge, power, responsibility, etc, and these elements are never going to be distributed uniformly. So even if we don't have a strict line dividing the inside from the outside, we may be able to detect a gradient from inwards to outwards, and there may be opportunities to alter the gradient and make some strategic improvements, which we might frame in terms of agility or requisite variety. 
Hence Power to the Edge.</p><p> <br /></p><hr /><p>Related posts: <a href="https://demandingchange.blogspot.com/2005/04/dividing-risk.html">Dividing Risk</a> (April 2005), <a href="https://rvsoftware.blogspot.com/2005/05/jericho.html">Jericho</a> (May 2005), <a href="https://rvsoapbox.blogspot.com/2005/12/power-to-edge.html">Power to the Edge</a> (December 2005), <a href="https://www.asymmetricleadership.com/2007/03/16/boundary-perimeter-edge/">Boundary, Perimeter, Edge</a> (March 2007), <a href="https://rvsoapbox.blogspot.com/2007/08/outside-in-architecture.htm">Outside-In Architecture</a> (August 2007), <a href="https://rvsoapbox.blogspot.com/2010/06/ecosystem-soa-2.html">Ecosystem SOA</a> (June 2010)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-79422824513454373042022-04-19T18:50:00.003+01:002022-07-29T13:40:22.811+01:00Data-Driven Reasoning (COVID)<p>Following the data is all very well, but how should we decide which data to follow?</p><p> </p><p>Nate Silver argues that the total data are more important than the marginal data. There is more virus transmission in restaurants than in aeroplanes.</p><p></p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">The average American spends something like 5 hours per year on a plane. The mask mandate might be good or bad at the margin, but it is very unlikely to make a major difference in the overall course of the coronavirus.</p>— Nate Silver (@NateSilver538) <a href="https://twitter.com/NateSilver538/status/1516225654018265088?ref_src=twsrc%5Etfw">April 19, 2022</a></blockquote> <p></p><blockquote class="twitter-tweet" data-conversation="none"><p dir="ltr" lang="en">The average American eats at a restaurant about 250 times per year, whereas they travel by airplane about 2.5 times per year. 
Dining out and general nightly social activity is a much, much, much bigger contributor to COVID spread than air travel.</p>— Nate Silver (@NateSilver538) <a href="https://twitter.com/NateSilver538/status/1516242301965770761?ref_src=twsrc%5Etfw">April 19, 2022</a></blockquote><p> </p><p>Several people have challenged this, including Carl Bergstrom.</p><p></p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">Imagine thinking that this is what data-driven reasoning looks like </p>— Carl T. Bergstrom (@CT_Bergstrom) <a href="https://twitter.com/CT_Bergstrom/status/1516235629939478529?ref_src=twsrc%5Etfw">April 19, 2022</a></blockquote> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script> <p><br /></p><p>The first point I want to make is about risk. When calculating the risk of transmission in aeroplanes, the level of risk somewhere else is not really relevant. As Bergstrom points out, the calculation should be based simply on the costs and benefits of wearing masks in that particular setting.</p><p></p><blockquote class="twitter-tweet" data-conversation="none"><p dir="ltr" lang="en">To explain: whether or not to take some safety precaution requires a cost-benefit analysis.<br /><br />Nate forgot the cost side of the equation.<br /><br />Wingsuit jumping be far less common than motorcycle riding, but that's hardly an argument against safety precautions for squirrel-suiters.</p>— Carl T. Bergstrom (@CT_Bergstrom) <a href="https://twitter.com/CT_Bergstrom/status/1516246331886485508?ref_src=twsrc%5Etfw">April 19, 2022</a></blockquote><p> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script> The problem here, of course, is determining the cost of wearing masks. If some people regard mask-wearing as a minor inconvenience, while others regard it as a major infringement of their human rights, what sort of data would be relevant to this calculation? 
Or if mask-wearing impedes communication to some extent, can we quantify this effect?<br /></p><p>The broader question is whether we should be looking at total costs and benefits, or marginal costs and benefits. In the case of mask-wearing, if we assume that most people possess a usable mask, then the marginal cost of wearing one might simply include the increased frequency of washing or replacing it, plus whatever psychological, physiological or social costs we can determine. <br /></p><p>But for some people, opposition to mask-wearing is much more fundamental than that. It is not about the costs and benefits of mask-wearing in a particular setting, but an overall objection to living in the kind of society where mask-wearing is mandated at all. Some people even object (violently) to the sight of other people wearing masks. So any kind of mask-wearing may cause social division, therefore incurring a sociopolitical cost, and this needs to be set against the overall benefits of controlling the transmission of disease.</p><p>Nate Silver acknowledges that a mask-wearing mandate may be useful at the margin, but questions its overall value. Presumably people aren't going to wear masks in restaurants while they are eating. So his tweet seems to be asking what is the point of making them wear masks on planes? <br /></p><p>This then brings us onto a question about policy, and the extent to which policy can be evidence-based. Is it better to have a consistent policy on mask-wearing across different settings, or to focus policy on those areas that are regarded as the highest risk? And should policies be optimized for effectiveness (what will have the greatest effect on suppressing transmission of the virus) or social acceptability (what will most people accept as reasonable)? 
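The total-versus-marginal distinction can be made concrete with some toy arithmetic. Every number below is invented for illustration - none is a real epidemiological estimate - but the structure of the calculation is the point.

```python
# Toy comparison of mask-wearing benefits in two settings.
# All parameters are invented for illustration only.

def infections_averted(exposure_hours, risk_per_hour, mask_effectiveness):
    """Expected infections averted per person per year if masks are worn."""
    return exposure_hours * risk_per_hour * mask_effectiveness

# Hypothetical annual exposure, echoing the figures in the tweets above:
# 250 restaurant visits of 1 hour each, versus 2.5 flights of 2 hours each.
restaurant = infections_averted(250 * 1.0, risk_per_hour=0.001, mask_effectiveness=0.3)
plane = infections_averted(2.5 * 2.0, risk_per_hour=0.001, mask_effectiveness=0.3)

print(restaurant)  # the total benefit is dominated by the high-exposure setting
print(plane)       # small in total terms

# But the marginal question is different: if the cost (inconvenience) of
# masking is roughly proportional to hours masked, the benefit-to-cost
# ratio is the same in both settings, and the case for a mandate turns on
# the per-hour risk and per-hour cost, not on total hours spent there.
```

On the total framing, the restaurant number dwarfs the plane number; on the marginal framing, the two settings can look identical. Which framing the data should feed is precisely the policy question.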
Nate Silver's data may be relevant to this calculation, at least to the extent that they influence people's opinions.</p><p><br /></p><p>While there are some particularly emotive features of this example, it raises some important general points about data-driven reasoning, especially in relation to cost-benefit calculations and evidence-based policy.<br /></p><p><br /></p><p><br /></p><p></p><p></p><p></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-12164669478130229392022-03-26T22:37:00.000+00:002024-02-24T00:37:41.905+00:00Information Superiority in Ukraine<p>In my work on <a href="https://rvsoapbox.blogspot.com/search/label/DataStrategy">Data Strategy</a>, I have drawn heavily on the concept of information superiority / information advantage, which was originally developed in a military / defence context. Like many other innovations, there is significant potential for peaceful / civilian use of this concept.</p><p>There are now some early hints of information advantage / disadvantage emerging from the war in Ukraine. One obvious area is Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR).</p><p></p><blockquote><q>By utilizing its intelligence and surveillance assets in Eastern Europe,
the United States was able to build a picture of Russia’s movements and
strategically release information about Russia’s plans.</q> <cite>US source quoted in Insinna</cite></blockquote><p></p><p>Another area where information is important is logistics. As already predicted before the invasion started (for example Vershinin), Russia's troops appear to have suffered significant problems in this area, at least initially. <br /></p><p>In other areas, however, predictive computer models of the conflict seem to have been way off the mark, mainly because they failed to adequately represent the human factor.<br /></p><p>John Naughton wonders why Russia appears to have abandoned the Gerasimov doctrine of new generation (nonlinear) warfare, which appears to have been executed successfully in the occupation of Crimea in 2014, and reverted to an older style of warfare. Ukraine, on the other hand, appears to have applied the Gerasimov doctrine more successfully, protecting against cyber attack while easily managing to intercept insecure Russian communications. By a strange historical irony, it is reported that one of the Russian generals killed after his geolocation was intercepted by the Ukrainians was called Vitaly Gerasimov. 
Wikipedia advises us not to confuse him with Valery Gerasimov, the author of the doctrine.<br /></p><p><br /></p><p>No doubt there will be more clues emerging from this horrible conflict.<br /></p><hr /><p> </p><p>Valerie Insinna, <a href="https://breakingdefense.com/2022/03/top-american-generals-on-three-key-lessons-learned-from-ukraine/">Top American generals on three key lessons learned from Ukraine</a> (Breaking Defense, 11 March 2022) </p><p>Martin Murphy, <a href="https://www.heritage.org/defense/report/understanding-russias-concept-total-war-europe">Understanding Russia’s Concept for Total War in Europe</a> (Heritage Foundation, 12 September 2016)<br /></p><p>John Naughton, <a href="https://www.theguardian.com/commentisfree/2022/mar/26/putin-has-a-21st-century-digital-battle-plan-so-why-is-he-fighting-like-its-1939">Putin has a 21st-century digital battle plan, so why is he fighting like it’s 1939?</a> (Guardian, 26 March 2022)<br /></p><p>Dan Sabbagh, <a href="https://www.theguardian.com/world/2022/mar/08/russia-solving-logistics-problems-and-could-attack-kyiv-within-days-experts">Russia <q>solving logistics problems</q> and could attack Kyiv within days – experts</a> (Guardian, 8 March 2022)</p><p>Alex Vershinin, <a href="https://warontherocks.com/2021/11/feeding-the-bear-a-closer-look-at-russian-army-logistics/">Feeding the Bear: A Closer Look at Russian Army Logistics and the Fait Accompli</a> (War on the Rocks, 23 November 2021)<br /></p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/Intelligence,_surveillance,_target_acquisition,_and_reconnaissance">ISTAR</a>, <a href="https://en.wikipedia.org/wiki/New_generation_warfare">New Generation Warfare</a>, <a href="https://en.wikipedia.org/wiki/Valery_Gerasimov">Valery Gerasimov</a>, <a href="https://en.wikipedia.org/wiki/Vitaly_Gerasimov">Vitaly Gerasimov</a><br /></p><p>Previous posts on Information Superiority: <a 
href="https://rvsoapbox.blogspot.com/2017/03/information-superiority-and-customer.html">Information Superiority and Customer Centricity</a> (March 2017), <a href="https://rvsoapbox.blogspot.com/2019/12/developing-data-strategy.html">Developing Data Strategy</a> (December 2019), <a href="https://rvsoapbox.blogspot.com/2020/07/information-advantage-in-air-and-space.html">Information Advantage (not necessarily) in Air and Space</a> (July 2020) <br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-84124082678417600102022-03-19T23:31:00.008+00:002022-03-29T21:24:06.353+01:00Christopher Alexander 1936-2022<p>The architectural theorist Christopher Alexander, who has died at the age of 85, had a major influence on software architecture as well as buildings.</p><p>His first book <i><b>Notes on the Synthesis of Form </b></i>(1964) was referenced by the structured methods gurus
of the 1970s (Ed Yourdon, Tom DeMarco). This was followed by <i><b>A Pattern Language </b></i>(1977), which was adopted as a core text by the Object Oriented school of software engineering. </p><p>In 1987, Alexander and a few colleagues published an account of a design experiment under the title <i><b>A
New Theory of Urban Design</b></i>, showing how order can be created without top-down planning,
by a governance process that imposes some simple structural principles on all design activity. I believe I was one of the first to apply this approach within the software world, and he remains a major influence on my thinking on SOA and enterprise architecture. See for example the following posts</p><ul style="text-align: left;"><li><a href="https://rvsoapbox.blogspot.com/2005/03/fabric.htm">SOA Fabric</a> (March 2005) <br /></li><li><a href="https://rvsoapbox.blogspot.com/2006/03/christopher-alexander.htm">Christopher Alexander</a> (March 2006)</li><li><a href="https://rvsoapbox.blogspot.com/2009/01/soa-and-holism.html">SOA and Holism</a> (January 2009)<br /></li></ul><p>I also drew on Alexander in my books <a href="http://veryard.com/cbb">Component-Based Business</a> (Springer 2001) and <a href="https://leanpub.com/NextPracticeEA">Next Practice Enterprise Architecture</a> (LeanPub 2013). </p><p>In 2004, Pat Helland, who was then a senior architect at
Microsoft, wrote an article entitled Metropolis, in which he suggested computer system
architects could learn something from city planning. Philip Boxer
and I wrote a reply, mentioning Christopher Alexander.
Microsoft subsequently flew me over to the USA for a closed workshop (<a href="https://rvsoapbox.blogspot.com/search/label/SPARK06">SPARK</a>), sending all attendees copies of
<i><b> The Timeless Way of Building</b></i> in advance. However, although Microsoft had
recognized Alexander's importance, I'm not sure all the
other attendees at the workshop did, and I don't recall much discussion of his work. I think we spent more time discussing Stewart Brand's
notion of <a href="https://rvsoapbox.blogspot.com/search/label/pace%20layering">shearing layers / pace layering</a>.<br />
</p>
My observation was that it had taken around 15 years for the
ideas in <i><b>Notes on the Synthesis of Form </b></i>to be taken up in the
software world in the late 1970s, and another 15 years from the
publication of <i><b>A Pattern Language</b></i> to the popularity of pattern
thinking for software in the early 1990s. So in the early 2000s, I
was publishing ideas taken from <i><b>A New Theory of Urban Design</b></i>, in the
hope that the software world would now be ready for them.
Unfortunately it wasn't. For many people in the software world, Alexander continued and continues to be seen as the man who invented Design Patterns. Alexander's material from this period is also credited as having been an inspiration for The Sims game.<p></p><p>Perhaps the same is true of people in the architecture world. A book was published last year called <i><b>Relational Theories of Urban Form</b></i>, in which Alexander was represented by an extract from <i><b>A Pattern Language</b></i>, plus a few pages of <i><b>Timeless Way</b></i>. <br /></p><p>Alexander initially expressed some ambivalence and suspicion about the use of his work by software engineers. Later he was persuaded to make occasional keynote speeches at software conferences and to write prefaces for software books. It is possible that some software practitioners understood his work better than most architects. However, I believe he remained concerned that software practitioners mostly only picked up fragments of his work, and ignored the holistic aspects of his thinking that he regarded as crucial. <br /></p><p>I was once privileged to observe Christopher Alexander teaching first year students at the Prince of Wales Institute of Architecture in London. An amazing experience. Click here for the full story >> <a href="https://demandingchange.blogspot.com/2004/05/christopher-alexander-as-teacher.html">Christopher Alexander as Teacher</a>.</p><p> </p><p>Most of the tributes on Twitter mention <i><b>Notes on the Synthesis of Form </b></i>and <i><b>A Pattern Language</b></i>. Some mention <i><b>The Timeless Way of Building</b></i>. But the culmination of his work was the four-volume <i><b>Nature of Order</b></i>. I'm not sure how much I understood when it first came out, but I'm now re-reading. 
More than 15 years have passed since its publication, so maybe its time has come?<br /></p><p><br /></p><hr /><p>Good sources of material on Christopher Alexander include <a href="https://patterns.architexturez.net/">Architecture's New Scientific Foundations</a> (Architexturez) and <a href="http://www.katarxis3.com/index.html">New Science, New Urbanism, New Architecture</a> (Katarxis 3, September 2004)<br /></p><p>David Brussat, <a href="https://architecturehereandthere.com/2022/03/20/christopher-alexander-r-i-p/">Christopher Alexander RIP</a> (Architecture Here and There, 20 March 2022)</p><p>Howard Davis, <a href="https://www.theguardian.com/artanddesign/2022/mar/29/christopher-alexander-obituary">Christopher Alexander Obituary</a> (The Guardian, 29 March 2022) <br /></p><p>Pat Helland, <a href="https://docs.microsoft.com/en-us/previous-versions/aa480026(v=msdn.10)">Metropolis</a> (Microsoft Architecture Journal, April 2004)</p><p>Daniel Kiss and Simon Kretz (eds), Relational Theories of Urban Form (Birkhäuser, March 2021) <a href="https://archidose.blogspot.com/2021/07/relational-theories-of-urban-form.html">Review by John Hill</a> (9 July 2021) <br /></p><p>Mae-Wan Ho, <a href="https://www.i-sis.org.uk/ArchitectofLife.php">The Architect of Life</a> (Science and Society, 4/11/2003)<br /></p><p>Bin Jiang and Nikos Salingaros (eds), <a href="https://www.mdpi.com/journal/urbansci/special_issues/new_applications_development_nature_order">New Applications and Development of Christopher Alexander’s The Nature of Order</a> (Urban Science Special Issue, 3/1, 2019)</p><p>Ben Sledge, <a href="https://www.theguardian.com/games/2020/feb/05/the-sims-millennials-home-ownership">Home comforts: how The Sims let millennials live out a distant dream</a> (The Guardian, 5 February 2020)<br /></p><p>Richard Veryard and Philip Boxer, <a href="https://docs.microsoft.com/en-us/previous-versions/aa480051(v=msdn.10)">Metropolis and SOA Governance</a> (Microsoft Architecture Journal, 
July 2005). See also Philip Boxer and Richard Veryard, <a href="https://docs.microsoft.com/en-us/previous-versions/bb245658(v=msdn.10)">Taking Governance to the Edge</a> (Microsoft Architecture Journal, August 2006)</p><p><span style="font-size: xx-small;">updated 29 March 2022</span><br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-35833264113703905742022-02-28T00:51:00.005+00:002022-05-10T12:49:38.942+01:00Does the Algorithm have the Last Word?<p>In my post on the <a href="https://rvsoapbox.blogspot.com/2021/08/on-performativity-of-data.html">performativity of data</a> (August 2021), I looked at some of the ways in which data and information can make something <b>true</b>. In this post, I want to go further. What if an algorithm can make something <b>final</b>?</p><p>I've just read a very interesting paper by the Canadian sociologist Arthur Frank, which traces the curious history of a character called Devushkin - from a story by Gogol via another story by Dostoevsky into some literary analysis by Bakhtin.</p><p>In Dostoevsky's version, Devushkin complained that Gogol's account of him was complete and final, leaving him no room for change or development, <q>hopelessly determined and finished off, as if he were already quite dead</q>. </p><p></p><blockquote><q>For Bakhtin, all that is unethical begins and ends when one human being claims to determine all that another is and can be; when one person claims that the other has not, cannot, and will not change, that she or he will die just as she or he always has been.</q> <cite>Frank</cite></blockquote><p></p><p>But that's pretty much what many algorithms do. Machine learning algorithms extrapolate from historical data, captured and coded in ways that reinforce the past, while more traditionally programmed algorithms simply betray the opinions and assumptions of their developers. 
For example, we see recruitment algorithms that select men with a certain profile, while rejecting women with equal or superior qualifications. Because that's what's happened in the past, and the algorithm has no way of escaping from this.</p><p> </p><p>
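A minimal sketch of this lock-in mechanism (synthetic data and a deliberately crude frequency model, purely to illustrate the point):

```python
# Sketch: a model trained on biased historical hiring decisions
# reproduces the bias, even for identically qualified candidates.
# All data here are synthetic; the point is the mechanism.

def train(history):
    """Learn P(hired) for each (gender, qualified) pair from past decisions."""
    counts = {}
    for gender, qualified, hired in history:
        total, yes = counts.get((gender, qualified), (0, 0))
        counts[(gender, qualified)] = (total + 1, yes + hired)
    return {key: yes / total for key, (total, yes) in counts.items()}

# Historical record: qualified men were usually hired;
# equally qualified women usually were not.
history = (
    [("M", True, 1)] * 90 + [("M", True, 0)] * 10 +
    [("F", True, 1)] * 30 + [("F", True, 0)] * 70
)

model = train(history)
print(model[("M", True)])  # → 0.9
print(model[("F", True)])  # → 0.3 - same qualifications, different score
```

Nothing in the training step can distinguish past discrimination from past merit: whatever pattern the history contains is simply projected forward as a prediction.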
</p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">I'm going to keep saying this, I think: just as we have a right to download the data any corp holds on us, we should also have the right to reset any view an algorithm has of us.<br /><br />We used to be able to redefine and evolve ourselves. Algorithmic views of us hold us back.</p>— Russell Garner (@rgarner) <a href="https://twitter.com/rgarner/status/1499840070853046272?ref_src=twsrc%5Etfw">March 4, 2022</a></blockquote> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
<br /><p>The inbuilt bias of algorithms has been widely studied. See for example Safiya Noble and Cathy O'Neil.<br /></p><p>David Beer makes two points in relation to the performativity of algorithms. Firstly through their material interventions.<br /></p><p></p><blockquote><q>Algorithms might be understood to create truths around things like riskiness, taste, choice, lifestyle, health and so on. The search for truth becomes then conflated with the perfect algorithmic design – which is to say the search for an algorithm that is seen to make the perfect material intervention.</q></blockquote><p>And secondly through what he calls discursive interventions.<br /></p><p></p><blockquote><q>The notion of the algorithm is part of a wider vocabulary, a vocabulary that we might see deployed to promote a certain rationality, a rationality based upon the virtues of calculation, competition, efficiency, objectivity and the need to be strategic. As such, the notion of the algorithm can be powerful in shaping decisions, influencing behaviour and ushering in certain approaches and ideals.</q></blockquote><p></p><p>As Massimo Airoldi argues, both of these fall under what Bourdieu calls Habitus - which means an inbuilt bias towards the status quo. And once the algorithm has decided your fate, what chance do you have of breaking free?<br /></p><p><br /></p><p></p><hr /><p></p><p>Massimo Airoldi, Machine Habitus: Towards a sociology of algorithms (Polity Press, 2022) <br /></p><p>David Beer, The social power of algorithms (Information, Communication & Society, 20:1, 2017) 1-13, DOI: 10.1080/1369118X.2016.1216147</p><p>Safiya Noble, Algorithms of Oppression (New York University Press, 2018)</p><p>Cathy O'Neil, Weapons of Math Destruction (2016)<br /></p><p>Arthur W. 
Frank, <a href="http://qhr.sagepub.com/cgi/content/abstract/15/7/964">What Is Dialogical Research, and Why Should We Do It?</a> (Qual Health Res 2005; 15; 964) DOI: 10.1177/1049732305279078</p><p>Carissa Véliz, <a href="https://www.wired.com/story/algorithmic-prophecies-undermine-free-will">If AI Is Predicting Your Future, Are You Still Free?</a> (Wired, 27 December 2021) <br /></p><p>Related posts: <a href="https://rvsoapbox.blogspot.com/2017/07/could-we-switch-algorithms-off.html">Could we switch the algorithms off?</a> (July 2017), <a href="https://demandingchange.blogspot.com/2019/07/algorithms-and-governmentality.html">Algorithms and Governmentality</a> (July 2019), <a href="https://posiwid.blogspot.com/2021/03/algorithmic-bias.html">Algorithmic Bias</a> (March 2021), <a href="https://rvsoapbox.blogspot.com/2021/08/on-performativity-of-data.html">On the performativity of data</a> (August 2021) <br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-30703310724925662732022-02-06T11:41:00.004+00:002022-02-08T22:32:38.798+00:00Network Effects and the Decline of Facebook<p>Let me start with a story. You have a friend, let's call him Paul, who used to work for a big shot consultancy. A few years ago, the consultancy was undergoing a periodic cost-cutting exercise; Paul happened to be <q>on the bench</q> and was <q>let go</q>. He didn't immediately find another full-time job, but he found a series of short-term contracts to pay the bills until something else turned up. And of course he changed his profile on Linked-In.</p><p>Three years later, he's still in the same position. You receive a notification from Linked-In, asking if you would like to <q>congratulate</q> Paul on the anniversary of his losing his job. Doesn't really seem appropriate, does it?</p><p>One of the problems of social networks is maintaining a reasonable signal-to-noise ratio. 
When most of the notifications on Linked-In are to tell you that someone you barely remember is attending some random event, it's tempting to switch off notifications altogether.</p><p>And Facebook notifications can be even more annoying. Many years ago, Bill Gates quit Facebook because of the volume of friend requests he was receiving. Taylor Buley cites this as an illustration of Beckstrom's Law. </p><p>The best-known model of network effects is Metcalfe's Law, which states that the value of a network is proportional to the square of the number of users. Beckstrom's Law is a more advanced model: it includes the net value of the transactions on the network, subtracting the aggregated costs of belonging to the network, including all those bothersome requests and notifications.<br /></p><p>This week, the Facebook share price crashed. In his analysis of this event, John Naughton suggests that the value of Facebook depends not on the absolute number of users, but on the rate of change, and this is what explains Mark Zuckerberg's obsession with growth (as shown in Ben Grosser's montage "<a href="https://bengrosser.com/projects/order-of-magnitude/">Order of Magnitude</a>"). <q>If user numbers start to decline, then the virtuous circle suddenly turns vicious, leading to a downward spiral.</q></p><p>What is also clear from this event is the extent to which Facebook's commercial success is dependent on other players in the ecosystem. Access to Facebook services is almost entirely channelled through devices running software from three other tech giants - Apple, Google and Microsoft. </p><p>And with Microsoft's recent acquisition of Activision Blizzard, we may expect more shifts in the balance of power between them. 
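Going back to the two network laws for a moment, the contrast can be made concrete with a toy calculation. All the numbers here are invented, and Beckstrom's actual formulation values each transaction separately and discounts over time; this sketch flattens that to a single net value per user.

```python
# Toy comparison of two network-value models, with invented numbers.
# Metcalfe: value grows with the square of the number of users.
# Beckstrom: value is the benefit of belonging minus the costs of
# belonging (including the nuisance of unwanted notifications).

def metcalfe_value(users, k=1.0):
    """Metcalfe's Law: network value proportional to users squared."""
    return k * users ** 2

def beckstrom_value(users, benefit_per_user, cost_per_user):
    """Beckstrom's Law (simplified): sum of net value per user."""
    return users * (benefit_per_user - cost_per_user)

# Under Metcalfe, more users always means more value.
print(metcalfe_value(1000))  # 1000000.0

# Under Beckstrom, a growing network can still lose value if the
# per-user costs (spam, noise, notification fatigue) outgrow benefits.
print(beckstrom_value(1000, benefit_per_user=50, cost_per_user=20))  # 30000
print(beckstrom_value(2000, benefit_per_user=50, cost_per_user=60))  # -20000
```

Which is exactly the Bill Gates story: for him, the aggregated cost of friend requests had outgrown whatever benefit the network delivered.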
According to Microsoft's announcement (<a href="https://news.microsoft.com/2022/01/18/microsoft-to-acquire-activision-blizzard-to-bring-the-joy-and-community-of-gaming-to-everyone-across-every-device/">18 January 2022</a>)</p><p></p><blockquote><p><q>This acquisition will accelerate the growth in Microsoft’s gaming
business across mobile, PC, console and cloud and will provide building
blocks for the metaverse.</q> <br /></p></blockquote><p> </p><p>Your move I think, Zuck.<br /></p><p> <br /></p><hr /><p>Dina Bass, <a href="https://www.bloomberg.com/news/articles/2022-01-18/microsoft-s-top-five-reasons-for-buying-activision-blizzard">Five Reasons Microsoft Is Making Activision Blizzard Its Biggest Deal Ever</a> (Bloomberg, 18 January 2022)<br /></p><p>Taylor Buley, <a href="https://www.forbes.com/2009/07/31/facebook-bill-gates-technology-security-defcon.html">How To Value Your Networks</a> (Forbes, 31 July 2009) </p><p>Michael Hiltzik, <a href="https://www.latimes.com/business/story/2022-02-04/facebook-stock-decline-zuckerberg-meta">Is this the beginning of Facebook’s downfall?</a> (LA Times, 4 February 2022)</p><p>Natasha Lomas (@<a href="https://twitter.com/riptari">riptari</a>), <a href="https://techcrunch.com/2022/02/04/on-metas-regulatory-headwinds-and-adtechs-privacy-reckoning/">On Meta’s <q>regulatory headwinds</q> and adtech’s privacy reckoning</a> (TechCrunch, 4 February 2022)<br /></p><p>John Naughton, <a href="https://www.theguardian.com/commentisfree/2022/feb/06/first-time-history-facebook-decline-has-tech-giant-begun-crumble">For the first time in its history, Facebook is in decline. Has the tech giant begun to crumble?</a> (The
Guardian, 6 February 2022)<br /></p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/Beckstrom%27s_law">Beckstrom's Law</a>, <a href="https://en.wikipedia.org/wiki/Metcalfe%27s_law">Metcalfe's Law</a><br /></p><p>Related posts: <a href="https://rvsoapbox.blogspot.com/2012/05/what-makes-platform-open-or-closed.html">What makes a platform open or closed?</a> (May 2012), <a href="https://rvsoapbox.blogspot.com/2018/03/making-world-more-open-and-connected.html">Making the World more Open and Connected</a> (March 2018), <a href="https://posiwid.blogspot.com/2021/08/metrication-and-demetrication.html">Metrication and Demetrication</a> (August 2021)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-57313521327696604982021-11-10T19:18:00.022+00:002021-11-10T19:18:00.162+00:00Uber Mathematics 5<p><i>Continuing a series of posts on Uber's business model. Much of this material also applies to Lyft and other similar operations. For previous posts, see <a href="https://rvsoapbox.blogspot.com/search/label/Uber">https://rvsoapbox.blogspot.com/search/label/Uber</a></i><br /></p><hr /><p>Another thing about Uber is that it operates as a <b>two-sided platform</b>, matching passengers with drivers. Two-sided platforms can only operate successfully if there is sufficient demand on both sides - people looking for rides, drivers looking for work.</p><p>As I noted in an earlier post, Uber might initially have been able to recruit more drivers than it needed, in order to provide a high quality of service to its customers. Many people may have signed up as drivers based on unrealistic estimates of the likely earnings, costs and overheads, but obviously this is not sustainable for long.<br /></p><p>So if Uber's pricing model can't pay the drivers enough to make the job worth doing, who would be surprised that Uber is now facing a shortage of drivers? 
Meanwhile, Uber customers experience increasing prices and worsening service levels. </p><p>So after years of what they regarded as unfair competition, traditional taxi services are now returning to popularity in large cities such as London (black) and New York (yellow).<br /></p><p>But this has also resulted in an excess of demand over supply. In recent years, traditional cab driving has been hit by a triple whammy - competition from Uber et al, environmental regulations forcing older polluting vehicles off the road, and of course the COVID pandemic forcing many potential passengers to stay at home. So there is now a shortage of drivers and vehicles across the industry.<br /></p><p><br /></p><hr /><p><br /></p><p>Will Dunn, <a href="https://www.newstatesman.com/the-explainer/2021/11/why-is-there-an-uber-shortage">Why is there an Uber shortage?</a> (New Statesman, 8 November 2021)</p><p>Caroline Tanner, <a href="https://www.msn.com/en-us/travel/tips/why-yellow-cabs-are-again-your-best-bet-in-new-york-city/ar-AAQjRZf">Why yellow cabs are (again) your best bet in New York City</a> (MSN, November 2021)<br /></p><p>James Tapper, <a href="https://www.theguardian.com/cities/2021/oct/30/black-cabs-roar-back-into-favour-as-app-firms-put-up-their-prices">Black cabs roar back into favour as app firms put up their prices</a> (Guardian, 30 October 2021)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-20535634472013868802021-10-29T18:11:00.006+01:002021-10-29T19:34:30.379+01:00Interaction and Impedance<p>In the early 1990s, I was on a research and development project called the Enterprise Computing Project, within an area that was then known as Open Distributed Processing (ODP) and subsequently evolved into Service Oriented Architecture (SOA). One of the concepts we introduced was that of <b>Interaction Distance</b>. 
This was explained in a couple of papers that Ian Macdonald and I wrote in 1994-5, and mentioned briefly in my 2001 book.</p><p>Interaction distance is not measured primarily in terms of physical distance, but in terms of such dimensions as cost, risk, speed and convenience. It is related to notions of <b>commodity </b>and <b>availability</b>.</p><p></p><blockquote><q>Goods that are available to us enrich our lives and, if they are technologically
available, they do so without imposing
burdens on us. Something is available in this sense if it has been
rendered instantaneous, ubiquitous, safe, and easy.</q> <cite>Borgmann p 41<br /></cite></blockquote><p></p><blockquote><q>In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control.</q> <cite>Lovitt</cite></blockquote><p>One of the key principles of ODP was distribution transparency - you can
consume a service without knowing where anything is located. The
service interface provides convenient access to something that might be
physically located anywhere. Or even something that doesn't have a fixed
existence, but is assembled dynamically from multiple sources to
satisfy your request.</p><p>As we noted at the time, this affects the <b>relational economy</b> in several ways. It may introduce new intermediary roles into the ecosystem, or alternatively it may allow some previously dominant intermediaries to be bypassed. Meanwhile, new value-adding services may become viable. Over the past twenty years there have been various standardization initiatives of this kind, often prefixed by the word Open. For example, Open Banking. <br /></p><p></p><p>The example we used in our 1995 paper was video on demand (VoD). At that time there were three main methods for watching films: cinema, scheduled television or cable broadcast, and video rental. Video rental generally involved borrowing (and then rewinding and returning) VHS cassettes. DVDs were not introduced until 1996, and Netflix was founded in 1997.</p><p>Our analysis of VoD identified a delivery subsystem and a control subsystem, and sketched how these roles might be taken by some kind of collaboration between existing players (cable companies, phone companies). We also noted the organizational and commercial difficulties of implementing such a collaboration. As we now know, in the VoD case these difficulties were bypassed by the emergence of streaming services that were able to combine control and delivery into a single platform, and the technical configuration we outlined now looks horribly complicated, but the organizational and commercial issues are still relevant for other potential collaborative innovations.</p><p>And our analysis of interaction distance in relation to this example is still valid. In particular, we showed how VoD (in whatever technological form this might take) could significantly reduce the interaction distance between the film distributor and the consumer. </p>People often talk about <b>digital transformation</b>, and want to use this label for all kinds of random innovations. 
As I see it, the digital transformation in the video industry was largely on the production side. While the switch from VHS to DVD brought some minor benefits for the consumer, the real difference for the consumer came from the switch from rental to streaming, reducing interaction distance and bringing availability closer in space and time to the consumer. So I think a more meaningful label for this kind of innovation is <b>service transformation</b>.<br />
<p><br /></p>
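One hypothetical way of making interaction distance concrete - not a formula from the original papers, and with dimensions, weights and scores all invented here - is to treat it as a weighted score across the dimensions mentioned above, so that a lower distance means the service is more available in Borgmann's sense.

```python
# Purely illustrative: "interaction distance" as a weighted combination
# of dimensions such as cost, risk, speed and convenience. The weights
# and scores are invented; the concept itself is qualitative.

from dataclasses import dataclass

@dataclass
class Interaction:
    cost: float           # 0 = free, 1 = prohibitively expensive
    risk: float           # 0 = safe, 1 = very risky
    delay: float          # 0 = instantaneous, 1 = very slow
    inconvenience: float  # 0 = effortless, 1 = burdensome

def interaction_distance(i: Interaction, weights=(0.25, 0.25, 0.25, 0.25)):
    """Lower distance = more 'available': instantaneous, safe, easy."""
    dims = (i.cost, i.risk, i.delay, i.inconvenience)
    return sum(w * d for w, d in zip(weights, dims))

# Renting a VHS cassette (travel, queues, rewinding, returning)
# versus streaming the same film on demand.
vhs = Interaction(cost=0.4, risk=0.2, delay=0.8, inconvenience=0.7)
streaming = Interaction(cost=0.3, risk=0.1, delay=0.1, inconvenience=0.1)

print(interaction_distance(vhs))
print(interaction_distance(streaming))
```

On any plausible weighting, streaming scores a much shorter distance than the rental model, which is the sense in which the real transformation was a service transformation rather than a merely digital one.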
<hr /><p>Albert Borgmann, Technology and the Character of Contemporary Life (University of Chicago Press, 1984)</p><p>William Lovitt's introduction to Heidegger, The Question Concerning Technology</p><p>Christian Licoppe, <a href="https://www.researchgate.net/publication/240505224">‘Connected’ Presence: The Emergence of a New Repertoire for Managing Social Relationships in a Changing Communication Technoscape</a> (Environment and Planning D Society and Space, February 2004)
<br /></p><p>Ian G Macdonald and Richard Veryard, <a href="https://www.researchgate.net/publication/302374080_Modelling_Business_Relationships_in_a_Non-Centralised_Systems_Environment">Modelling Business Relationships in a Non-Centralised Systems Environment</a>. In A. Sölvberg et al. (eds.) Information Systems Development for Decentralized Organizations
(Springer 1995)</p><p>Richard Veryard, Information Coordination (Prentice-Hall 1994) </p><p>Richard Veryard, <a href="http://www.users.globalnet.co.uk/~rxv/CBDmain/odp.htm">The Future of Distributed Services</a> (December 2000) <br /></p><p>Richard Veryard, Component-Based Business: Plug and Play (Springer 2001)<br /></p><p>Richard Veryard and Ian G. Macdonald, EMM/ODP: A Methodology for Federated and
Distributed Systems, in Verrijn-Stuart, A.A., and Olle, T.W. (eds) Methods and Associated Tools for the Information Systems Life
Cycle (IFIP Transactions North-Holland 1994)</p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/RM-ODP">ODP Reference Model</a><br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-87299265566386296222021-10-16T10:12:00.007+01:002021-10-20T10:51:59.924+01:00Walking Wounded<p>Let us suppose we can divide the world into those who trust service companies to treat their customers fairly, and those who assume that service companies will be looking to exploit any customer weakness or lapse of attention.<br /></p><p><span>For example, some loyal customers renew</span> without question, even though the cost creeps up from one year to the next. (This is known as price walking.) While other customers switch service providers frequently to chase the best deal. (This is known as churn. B2C businesses generally regard this as a Bad Thing when their own customers do it, not so bad when they can steal their competitors' customers.)<br /></p><p>Price walking is a particular concern for the insurance business. The UK Financial Conduct Authority (FCA) has recently issued new measures to protect customers from price walking.</p><p>Duncan Minty, an insurance industry insider who blogs on Ethics and Insurance, believes that claims optimization (which he calls Settlement Walking) raises similar ethical issues. This is where the insurance company tries to get away with a lower claim settlement, especially with those customers who are most likely to accept and least likely to complain. He cites a Bank of England report on machine learning, which refers among other things to propensity modelling. In other words, adjusting how you treat a customer according to how you calculate they will respond. </p><p>My work on data-driven personalization includes ethics as well as practical considerations. However, there is always the potential for asymmetry between service providers and consumers. 
And as Tim Harford points out, this kind of exploitation long predates the emergence of algorithms and machine learning.</p><p> </p><p><b>Update</b><br /></p><p>In the few days since I posted this, I've seen a couple of news items about autorenewals. <span data-offset-key="535dp-0-0"><span data-text="true">There seems to be a trend of increasing vigilance by various regulators in different countries to protect consumers.</span></span></p><p>Firstly, the UK's Competition and Markets Authority (CMA) has unveiled <a href="https://www.gov.uk/government/publications/compliance-principles-for-anti-virus-software-firms" rel="nofollow" target="_blank">compliance principles</a> to curb locally some of the sharper auto-renewal practices of antivirus software firms. (via <a href="https://www.theregister.com/2021/10/19/cma_antivirus/">The Register</a>).</p><p>Secondly, new banking rules in India for repeating payments. Among other things, this creates challenges for free trials and introductory offers. 
(via <a href="https://techcrunch.com/2021/09/30/india-recurring-payments-rbi/">Tech Crunch</a>)<br /></p><hr /><p><a href="https://www.bankofengland.co.uk/-/media/boe/files/report/2019/machine-learning-in-uk-financial-services.pdf">Machine Learning in UK financial services</a> (Bank of England / FCA, October 2019)<br /></p><p><a href="https://www.fca.org.uk/news/press-releases/fca-confirms-measures-protect-customers-loyalty-penalty-home-motor-insurance-markets">FCA confirms measures to protect customers from the loyalty penalty in home and motor insurance markets</a> (FCA, 28 May 2021)</p><p>Tim Harford, <a href="http://timharford.com/2018/11/exploitative-algorithms-are-using-tricks-as-old-as-haggling-at-the-bazaar/">Exploitative algorithms are using tricks as old as haggling at the bazaar</a> (2 November 2018)</p><p><a href="https://twitter.com/Joi/status/1092791671727767553">Joi Ito</a>, <a href="https://www.wired.com/story/ideas-joi-ito-insurance-algorithms/">Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination</a> (Wired Magazine, 5 February 2019) <br /></p><p>Duncan Minty, <a href="https://ethicsandinsurance.info/2021/03/18/settlement-walking/">Is settlement walking now part of UK insurance?</a> (18 March 2021), <a href="https://ethicsandinsurance.info/2021/09/07/competitiveness-of-premiums/">Why personalisation will erode the competitiveness of premiums</a> (7 September 2021)</p><p>Manish Singh, <a href="https://techcrunch.com/2021/09/30/india-recurring-payments-rbi/">Tech giants brace for impact in India as new payments rule goes into effect</a> (TechCrunch, 1 October 2021)</p><p>Richard Speed, <a href="https://www.theregister.com/2021/10/19/cma_antivirus/">UK competition watchdog unveils principles to make a kinder antivirus business</a> (The Register, 19 October 2021)<br /></p><p>Related posts: <a href="https://rvsoapbox.blogspot.com/2005/01/support-economy.htm">The Support Economy</a> (January 2005), <a 
href="https://rvsoapbox.blogspot.com/2017/05/the-price-of-everything.html">The Price of Everything</a> (May 2017), <a href="https://posiwid.blogspot.com/2019/02/insurance-and-veil-of-ignorance.html">Insurance and the Veil of Ignorance</a> (February 2019)<br /></p><p>Related presentations: <a href="https://www.slideshare.net/RichardVeryard/customer-engagement-open-group-oct-2015">Boundaryless Customer Engagement</a> (October 2015), <a href="https://www.slideshare.net/RichardVeryard/realtime-personalization">Real-Time Personalization</a> (December 2015)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-6106782.post-30291734397362243702021-09-30T08:26:00.002+01:002021-10-17T22:29:47.256+01:00Uber Mathematics 4<p>As I have previously noted, drawing on research by Izabella Kaminski and others, there is something seriously problematic about the current business model of rideshare platforms such as Uber and Lyft. It seems that the only route to profitability is via some disruptive event in favour of their business. Although investors have poured eye-watering amounts of cash into these companies, this only makes sense as a bet on such an event occurring before the cash (or investor patience) runs out.
</p><p>
How does the magic of <q>digital</q> allow a centralized company with international overheads (Uber or Lyft) to provide a service more cheaply and cost-effectively than a distributed network of local cab companies, many of which are run by one rather harassed guy in a small booth next to the train station? Well, it doesn't. Even screwing the drivers can only go so far in stemming the losses.
</p><p>
So what is this disruptive event that these companies and their investors are eagerly waiting for? One theory was that the rideshare platforms could only be profitable once they had eliminated all competition, by a combination of undercutting rivals and doing deals with local authorities - for example, offering to run other elements of the local transport network. Once a monopoly had been established, they could start to push the prices up.
</p><p>
Obviously this stratagem only works if the consumers accept the price rises, and the guy in the booth doesn't find a way to take back his business.
</p><p>
Another theory was that they were just waiting for self-driving cars. This would also explain why they weren't looking after their drivers properly, because they didn't expect to need them for the longer term. However, not everyone was convinced by this. Aaron Benanav thought that they would have to wait rather longer than they originally reckoned, while Sameepa Shetty noted that Uber's ambitions in this regard were currently focused on small favourable pockets rather than being spread across the whole operation, and quoted an analyst who worried that investors were throwing good money after bad. </p><p>At the end of 2020, we learned that Uber itself had come to the same conclusion, selling off the driverless car division <q>to focus on profits</q>.<br /></p><p>So remind me, where are these profits coming from?</p><p><br /></p><hr /><p><br /></p><p><a href="https://www.bbc.co.uk/news/business-55224462">Uber sells self-driving cars to focus on profits</a> (BBC News, 7 December 2020)<br /></p><p>Aaron Benanav, <a href="https://www.theguardian.com/commentisfree/2020/aug/24/gig-economy-uber-lyft-insecurity-crisis">Why Uber's business model is doomed</a> (Guardian, 24 August 2020)</p><p>Julia Kollewe, <a href="https://www.theguardian.com/technology/2020/dec/08/uber-self-driving-car-aurora">Uber ditches effort to develop own self-driving car</a> (Guardian, 8 December 2020)</p><p>Elaine Moore and Dave Lee, <a href="https://www.ft.com/content/aed64100-917f-455d-9569-2220c53204bf">Does Uber deserve its $91bn valuation?</a> (FT, 17 October 2021)<br /></p><p>Sameepa Shetty, <a href="https://www.cnbc.com/2020/01/28/ubers-self-driving-cars-are-a-key-to-its-path-to-profitability.html">Uber’s self-driving cars are a key to its path to profitability</a> (CNBC, 28 January 2020)</p><p>Gwyn Topham, <a href="https://www.theguardian.com/technology/2021/jan/03/peak-hype-driverless-car-revolution-uber-robotaxis-autonomous-vehicle"><q>Peak hype</q>: why the driverless car revolution has 
stalled</a> (Guardian, 3 January 2021)</p><p><br /></p><p>Related Posts <a href="https://rvsoapbox.blogspot.co.uk/2016/11/uber-mathematics.html">Uber Mathematics</a> (Nov 2016), <a href="https://rvsoapbox.blogspot.co.uk/2016/12/uber-mathematics-2.html">Uber Mathematics 2</a> (December 2016), <a href="https://rvsoapbox.blogspot.co.uk/2016/12/uber-mathematics-3.html">Uber Mathematics 3</a> (Dec 2016)
</p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0