Saturday, March 10, 2018

Fail Fast - Why did the Chicken cross the road?

A commonly accepted principle of architecture and engineering is to avoid a single point of failure (SPOF). A single depot for a chain of over 850 fast food restaurants could be risky, as KFC was warned when it announced that it was switching its logistics from Bidvest to a partnership with DHL and QSL, to be served out of a single depot in Rugby. We may imagine that the primary motivation for KFC was cost-saving, although the announcement was dressed up in management speak - "re-writing the rule book" and "setting a new benchmark".

The new system went live on 14th February 2018. The changeover did not go well: by the weekend, over three quarters of the stores were closed. Rugby is a great location for a warehouse - except when there is a major incident on a nearby motorway. (Who knew that could happen?)

After a couple of weeks of disruption, as well as engaging warehouse-as-a-service startup Stowga for non-food items, KFC announced that it was resuming its relationship with Bidvest. According to some reports, Burger King also flirted with DHL some years ago before returning to Bidvest. History repeating itself.

However, the problems faced by KFC cannot be attributed solely to the decision to supply the whole UK mainland from Rugby. A just-in-time supply chain needs contingency planning - covering business continuity and disaster recovery. (Good analysis by Richard Priday, who tweets as @InsomniacSteel.)

KFC revolutionizes UK foodservice supply chain with DHL and QSL appointment (DHL Press Release, 11 Oct 2017)

Andrew Don, KFC admits chicken waste as cost of DHL failure grows (The Grocer, 23 Feb 2018)

Andrea Felsted, Supply chains: Look for the single point of failure (FT, 2 May 2011)

Adam Leyland, KFC supply chain fiasco is Heathrow's Terminal 5 all over again (The Grocer, 23 Feb 2018)

Charlie Pool (CEO of Stowga), Warehousing on-demand saves KFC (Retail Technology, 26 February 2018)

Richard Priday, The inside story of the great KFC chicken shortage of 2018 (Wired, 21 February 2018); How KFC ended the great chicken crisis by taking care of its mops (Wired, 2 March 2018); The KFC chicken crisis is finally over: it's (sort of) ditched DHL (Wired, 8 March 2018)

Carol Ryan, Stuffed KFC only has itself to blame (Reuters, 20 February 2018)

Su-San Sit, KFC was 'warned DHL would fail' (Supply Management, 20 February 2018)

Matthew Weaver, Most KFCs in UK remain closed because of chicken shortage (Guardian, 19 Feb 2018); KFC was warned about switching UK delivery contractor, union says (Guardian, 20 Feb 2018)

Zoe Wood, KFC returns to original supplier after chicken shortage fiasco (Guardian, 8 March 2018)

Wikipedia: Single Point of Failure

Related posts: Fail Fast - Burger Robotics (March 2018)

Sunday, March 04, 2018

The Exception That Proves the Rule

My thin clean-shaven friend @futureidentity is reassured by messages that appear to be misdirected.

But when I read his latest tweet, I thought of the exception that proves the rule. Fowler defines five uses of this phrase; I'm going to use two of them.

Firstly, when an advert is exceptionally badly targeted, we notice it precisely because it is an outlier - an exception to the normal pattern or rule. This reinforces our belief in the normal pattern - the idea that many if not most messages nowadays are moderately well targeted. This is what Fowler calls the "loose rhetorical sense" of the phrase.

Secondly, adverts aren't necessarily misdirected by accident. Conjurers and politicians use misdirection as a form of deception, to distract the audience's attention from what they are really doing. (Some commentators regard the 45th US President as a master of misdirection.)

This is how Target does it, so the pregnant customer doesn't feel she's being stalked.
"Then we started mixing in all these ads for things we knew pregnant women would never buy, so the baby ads looked random. We’d put an ad for a lawn mower next to diapers. We’d put a coupon for wineglasses next to infant clothes. That way, it looked like all the products were chosen by chance." (Forbes)

So just because a marketing message appears to be a random error, that doesn't mean it is. Further investigation might reveal it to be carefully designed to foster exactly that illusion in a specific recipient. And if it turns out to be targeted after all, this would be what Fowler calls "the secondary rather complicated scientific sense" of the phrase.

Related posts

85 million faces (Oct 2016)


Charles Duhigg, How companies learn your secrets (New York Times, 16 Feb 2012)

Kashmir Hill, How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did (Forbes, 16 Feb 2012)

Wikipedia: Exception that proves the rule, Misdirection (magic)

Sunday, February 04, 2018

The Hungry Tapeworm

This week, three American companies announced a joint venture to sort out healthcare for their own employees. Ambitious, huh?

This is not the first time large American companies have tried to challenge the market power of healthcare providers. According to Warren Buffett, "the ballooning costs of healthcare act as a hungry tapeworm on the American economy". Intel and Walmart are among those that have previously ventured into this area. In 2016, 20 companies including Coca Cola, American Express, IBM and Macy’s joined the Health Transformation Alliance (HTA). So why should anyone take this latest attempt seriously? Only because the three companies are Amazon, Berkshire Hathaway and JPMorgan Chase. And Amazon (need I remind you?) eats everyone's lunch.

John Naughton sees this as a typical play for a data hungry tech giant, based on two hypotheses.
  • Transactional data will lead to transactional efficiencies. The joint venture starts with the three companies experimenting on their own employees, who will "tell Amazon and its algorithms what works and doesn’t work". 
  • "Mastery of big data might yield clinical benefit".
As Pressman and Lashinsky note, the experiment is based on a pretty good sample of Americans: "a diverse workforce spanning low-wage normal folk to the most elite of our society".

Amazon is obviously a major player in the data and analytics world, but so is IBM, which is playing an important role in the HTA. Not only is IBM a corporate member, but IBM Watson Health will do the data and analytics. According to Pharmaceutical Commerce, it will "aggregate participating HTA member companies' data, enabling insights both into outcomes of medical interventions, as well as wellness initiatives to improve employees’ health".

And what about Google? Google Health was discontinued in 2011, following a lack of widespread adoption. Perhaps data isn't the whole story.

But Amazon is not just about data. In an article published before this announcement, Zack Kanter attributes Amazon's strategic dominance to SOA. "Each piece of Amazon is being built with a service-oriented architecture, and Amazon is using that architecture to successively turn every single piece of the company into a separate platform — and thus opening each piece to outside competition."

Moazed and Johnson discuss the platform implications of the healthcare announcement. They argue that "platforms thrive with fragmentation, not consolidation", and that "the new platform needs to offer enough potential scale to outweigh those risks, otherwise manufacturers may be too afraid to join". Sarah Buhr sees this as an opportunity for smaller players, such as Collective Health.

Three employers, even large ones, probably won’t have enough muscle to negotiate fair prices for healthcare and pharma. But if Bezos can create the right expectations, and provide a flexible platform for smaller players ...

Health Transformation Alliance sets its 2017 agenda (Pharmaceutical Commerce, 9 March 2017)

Amazon alliance takes on ‘hungry tapeworm’ of healthcare costs (Pharmaceutical Technology, 1 February 2018)

Sarah Buhr, Collective Health Wants To Replace The Health Insurance Industry With A Software Program (TechCrunch, 11 Aug 2014)

Sarah Buhr, Amazon’s new healthcare company could give smaller healthtech players a boost (TechCrunch, 30 Jan 2018)

Paul Demko, Amazon's new health care business could shake up industry after others have failed (Politico, 30 January 2018)

Zack Kanter, Why Amazon is eating the world (TechCrunch, 14 May 2017)

Paul Martyn, Healthcare Consumerism: Taming The Hungry Tapeworm (Forbes, 30 January 2018)

Alex Moazed and Nicholas L Johnson, Amazon's Long-Awaited Health Care Platform (Inc, 30 January 2018)

John Naughton, Healthcare is a huge industry – no wonder Amazon is muscling in (Observer, 4 February 2018)

Aaron Pressman and Adam Lashinsky, Data Sheet—Why Jeff Bezos Just Might Crack the Health Care Challenge (Fortune, 31 January 2018)

Jordan Weissmann, Can Amazon, Berkshire Hathaway, and JPMorgan Revolutionize Health Care? (Slate, 30 Jan 2018)

Wikipedia: Google Health

Updated 5 February 2018

Sunday, January 21, 2018

Another 20 million faces

Just over a year ago, Microsoft launched some software that would guess how old you were. Millions of people were persuaded to donate a selfie to Microsoft in return for playing this game. See my post 85 Million Faces (October 2016).

Google's latest face-collecting gimmick is to find a painting that looks like you. Although the Arts and Culture app was originally launched in 2015, the face-matching feature was only added last month. This weekend the app shot to the number one slot in the downloads chart, and 20 million selfies (and counting) have already been donated to Google.

As @ArwaM comments, facial recognition technology allows Google to find the artwork you most resemble – but it also supports the rise of the surveillance state.

And yet Google cannot (yet) compete with old-fashioned serendipity. Before Museum-Doppelgänger-Hunt was an app, it was a viral meme, featuring (among others) @fleezee.

But there have been other Doppelgänger-Hunts before, using Face Recognition software. For example, the TwinStrangers project. So which is the egg and which the chicken?
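
Whoever runs the hunt, the matching step is much the same under the hood: reduce each face to an embedding vector and look for the nearest neighbour. Here is a minimal sketch, with random vectors standing in for the embeddings a FaceNet-style model would produce - Google hasn't published the details of how its app does it:

```python
import numpy as np

def best_match(selfie_embedding, painting_embeddings):
    """Return the index of the painting face closest to the selfie, by cosine similarity."""
    selfie = selfie_embedding / np.linalg.norm(selfie_embedding)
    gallery = painting_embeddings / np.linalg.norm(painting_embeddings, axis=1, keepdims=True)
    similarities = gallery @ selfie               # cosine similarity against every painting
    best = int(np.argmax(similarities))
    return best, float(similarities[best])

# Toy example: random vectors standing in for real face embeddings
rng = np.random.default_rng(0)
paintings = rng.normal(size=(20_000, 128))        # one 128-dimensional embedding per painting
selfie = rng.normal(size=128)
index, score = best_match(selfie, paintings)
print(f"closest painting: #{index} (cosine similarity {score:.3f})")
```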

Rebecca Fleenor, I'm on the front page of Reddit. This is how it feels (CNET, 13 September 2017)

Christine Hauser, Meet your art twin: a 400-year-old with an oily complexion (New York Times, 17 Jan 2018)

Arwa Mahdawi, Finding your museum doppelganger is fun – but the science behind it is scary (Guardian, 16 January 2018)

Rosie Spinks, Why the Art Museum Doppelgänger meme is so profoundly addictive (Quartzy, 2 January 2018)

Der fremde Zwilling (Spiegel, 15 April 2015), in German

Monday, January 15, 2018

Bus Safety Announcement

Transport for London (TfL) reckons around 3000 people are injured every year by slips, trips and falls on London buses. So it is running trials of an automated system that announces the departure of the bus from the stop.
"Please hold on, the bus is about to move"

or as Bon Jovi might say
"We've gotta hold on ready or not."

The problem is that these alerts often come after the bus is already halfway down the road.
"Whoa, we're half-way there."

As BBC News explains, the timing of the alert is based on the average amount of time a bus would spend at a bus stop, and is often hopelessly inaccurate. Passengers have taken to social media in droves to complain or mock. Many have wondered whether it was such a problem in the first place, and whether an alert would do much to alleviate it. Others have pointed out the potential value of such an alert for certain categories of passenger - such as the elderly or visually impaired - but of course this only works if the alert comes at the right time.

I haven't spoken to anyone at TfL about this, but I can imagine what happened. In order to get a trial up and running quickly, they didn't have time (or permission) to link the alert with any of the systems on board the bus that could have sent a more accurate event signal. So we have a stand-alone system, knocked up quickly, as an experimental solution to a problem that most people hadn't previously recognized. In the trimodal scheme, this is a classic Pioneer project.
"For love we'll give it a shot."

So if the trial isn't laughed into touch, then maybe the Settlers can take over and do the alert properly.
"Take my hand, we'll make it. I swear."

And the Town Planners can come up with a joined-up long-term vision for passenger comfort and safety. Altogether now ...
"Whoa, livin' on a prayer."


The wording of the announcement has changed, but the timing hasn't. It now says:
"Please hold on while the bus is moving"

What can I say to that?
"Standing on the ledge, I show the wind how to fly. When the world gets in my face, I say, have a nice day."

Londoners hit out at 'mistimed' bus safety alerts (BBC News, 14 January 2018)

Nadia Khomami, Please hold on: TfL urged to get a grip over annoying bus warnings (Guardian, 15 January 2018)

Eleanor Rose, TfL anger London commuters again with replacement bus announcement that is 'still annoying as hell' (Standard, 26 January 2018)

Londoners baffled by 'bonkers' bus safety announcements warning them 'the bus is about to move' (Evening Standard, 15 January 2018)

For more on Trimodal IT, see my post Beyond Bimodal (May 2016)

Wednesday, December 27, 2017

Automated Tetris

Following complaints that Amazon sometimes uses excessively large boxes for packing small items, the following claim appeared on Reddit.

"Amazon uses a complicated software system to determine the box size that should be used based on what else is going in the same truck and the exact size of the cargo bay. It is playing automated Tetris with the packages. Sometimes it will select a larger box because there is nothing else that needs to go out on that specific truck, and by making it bigger, it is using up the remaining space so items don't slide around and break. This actually minimizes waste and is on the whole a greener system. Even if for some individual item it looks weird. It's optimizing for the whole, not the individual." [source: Reddit via @alexsavinme]

Attached to the claim is a link to @willknight's 2015 article about Amazon's robotic warehouses. The article mentions the packing problem but doesn't mention the variation of box sizes.
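
Whether or not Amazon actually does this, the logic being claimed is easy to sketch. The following toy illustration (my own, not Amazon's algorithm) picks the smallest box that fits an item, unless the truck would otherwise be left with slack, in which case it deliberately pads up to a bigger box:

```python
# Toy illustration of the logic claimed in the Reddit post - not Amazon's actual algorithm.
BOX_SIZES = [2, 4, 8, 16, 32]            # available box volumes, arbitrary units

def choose_box(item_volume, truck_space_left, other_items_volume):
    """Pick the smallest box that fits, unless the truck would otherwise have slack,
    in which case pick a bigger box to use up the space so nothing slides around."""
    fitting = [box for box in BOX_SIZES if box >= item_volume]
    if not fitting:
        raise ValueError("no box is big enough")
    smallest = fitting[0]
    slack = truck_space_left - other_items_volume - smallest   # space left over on this truck
    for box in fitting:                   # fitting is sorted smallest-first
        if box - smallest >= slack:       # this box soaks up the leftover space
            return box
    return fitting[-1]                    # can't soak it all up; take the biggest that fits

print(choose_box(item_volume=3, truck_space_left=40, other_items_volume=10))  # roomy truck -> 32
print(choose_box(item_volume=3, truck_space_left=16, other_items_volume=10))  # tight truck -> 8
```

Run against a nearly empty truck it returns a deliberately oversized box; run against a full one it returns the snug fit - which is all the Reddit claim amounts to.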

The claim quickly led to vigorous debate, both on Reddit and on Twitter. Here is a selection of the arguments and counter-arguments.

  • Suggesting that the Reddit claim was based on a misreading of the MIT article.
  • Asserting that people working in warehouses (Amazon and others) were unaware of such an algorithm. (As if this were relevant evidence.)
  • Evidence that equally sophisticated algorithms are in use at other retailers and logistics companies. (Together with an assumption that if others have them, Amazon must definitely have them.)
  • Evidence that some operational inefficiencies exist at Amazon and elsewhere. (What, isn't Amazon perfectly optimized yet?)
  • Providing evidence that computer systems would not always recommend the smallest possible box. For example, this comment: "At Target the systems would suggest a size but we could literally use whatever we wanted to. I constantly put stuff in smaller boxes because it just made so much more sense." (Which also shows that humans are able to frustrate the intentions of the software.)
  • Suggesting that errors in box sizes are sometimes caused by a mix-up of units - one item going in a box large enough for a dozen.
  • Pointing out that the solution described above would only work for transport between warehouses (where the vehicle is full for the whole trip) but wouldn't work for "last mile" delivery runs (where the vehicle becomes progressively more empty during the trip).
  • Pointing out that the "last mile" is the most inefficient part of the journey. (But this doesn't stop retailers looking for efficiency savings earlier in the journey.)
  • Pointing out that there were more efficient solutions for preventing packages shifting in transit - for example, inflatable bags.
  • Pointing out that an overlarge box merely displaces the problem - the item can be damaged by sliding around inside the box.
  • Complaining about the ethics, employment policies and environmental awareness of Amazon.
  • Denigrating the intelligence and diligence of the workers in the Amazon warehouse. (Lazy? Really?)

Some people have complained that as the claim is evidently false, it counts as fake news and should be deleted. But it is certainly true that retailers and logistics companies are constantly thinking about ways of reducing packaging and waste, and there are several interesting contributions to the debate, even if some of the details may not quite work.

It's also worth noting that the claim is written in a highly plausible style - that's just how people in that world would talk. So maybe someone has come across a proposal or pilot or patent application along these lines, even if this exact solution was never fully implemented.

Some may doubt that such a solution would be "greener on the whole". But any solution architect should get the principle of "optimizing for the whole, not the individual". (Not always so easy in practice, though.)

Will Knight, Inside Amazon’s Warehouse, Human-Robot Symbiosis (MIT Technology Review, 7 July 2015)

Wikipedia: Packing Problems

Thursday, December 14, 2017

Expert Systems

Is there a fundamental flaw in AI implementation, as @jrossCISR suggests in her latest article for Sloan Management Review? She and her colleagues have been studying how companies insert value-adding AI algorithms into their processes. A critical success factor for the effective use of AI algorithms (or what we used to call expert systems) is the ability to partner smart machines with smart people, and this calls for changes in working practices and human skills.

As an illustration of helping people to use probabilistic output to guide business actions, Ross uses the example of smart recruitment.
But what’s the next step when a recruiter learns from an AI application that a job candidate has a 50% likelihood of being a good fit for a particular opening?

Let's unpack this. The AI application indicates that at this point in the process, given the information we currently have about the candidate, we have a low confidence in predicting the performance of this candidate on the job. Unless we just toss a coin and hope for the best, obviously the next step is to try and obtain more information and insight about the candidate.

But which information is most relevant? An AI application (guided by expert recruiters) should be able to identify the most efficient path to reaching the desired level of confidence. What are the main reasons for our uncertainty about this candidate, and what extra information would make the most difference?

Simplistic decision support assumes you only have one shot at making a decision. The expert system makes a prognostication, and then the human accepts or overrules its advice.

But in the real world, decision-making is often a more extended process. So the recruiter should be able to ask the AI application some follow-up questions. What if we bring the candidate in for another interview? What if we run some aptitude tests? How much difference would each of these options make to our confidence level?
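
Here is a toy illustration of the kind of what-if arithmetic involved - the two-outcome model and all the numbers are invented for the sake of the example, not taken from Ross's article. Treat each follow-up action as a test with assumed hit rates, and estimate how far it could be expected to move that 50% figure.

```python
# Invented numbers, purely for illustration: given a 50% prior that the candidate
# is a good fit, how much is each follow-up action expected to move us off the fence?

PRIOR_GOOD_FIT = 0.5

# For each action: P(positive result | good fit) and P(positive result | not a good fit)
ACTIONS = {
    "second interview": (0.70, 0.40),
    "aptitude test":    (0.85, 0.25),
}

def posterior(prior, p_pos_good, p_pos_bad, positive):
    """Bayes' rule: updated probability of 'good fit' after seeing the result."""
    p_good = p_pos_good if positive else 1 - p_pos_good
    p_bad = p_pos_bad if positive else 1 - p_pos_bad
    return prior * p_good / (prior * p_good + (1 - prior) * p_bad)

for action, (p_pos_good, p_pos_bad) in ACTIONS.items():
    p_positive = PRIOR_GOOD_FIT * p_pos_good + (1 - PRIOR_GOOD_FIT) * p_pos_bad
    expected_shift = sum(
        p * abs(posterior(PRIOR_GOOD_FIT, p_pos_good, p_pos_bad, positive) - 0.5)
        for positive, p in ((True, p_positive), (False, 1 - p_positive))
    )
    print(f"{action}: expected to move confidence {expected_shift:.0%} away from 50/50")
```

On these made-up numbers, the aptitude test is expected to shift the assessment about twice as far as another interview - which is exactly the kind of comparison the recruiter should be able to ask the application for.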

When recruiting people for a given job, it is not just that the recruiters don't know enough about the candidate; they may also not have much detail about the requirements of the job. Exactly what challenges will the successful candidate face, and how will they interact with the rest of the team? So instead of shortlisting the candidates that score most highly on a given set of measures, it may be more helpful to shortlist candidates with a range of different strengths and weaknesses, as this will allow interviewers to creatively imagine how each will perform. So there are a lot more probabilistic calculations we could get the algorithms to perform, if we can feed enough historical data into the machine learning hopper.

Ross sees the true value of machine learning applications to be augmenting intelligence - helping people accomplish something. This means an effective collaboration between one or more people and one or more algorithms. Or what I call organizational intelligence.

Postscript (18 December 2017)

In his comment on Twitter, @AidanWard3 extends the analysis to multiple stakeholders.
This broader view brings some of the ethical issues into focus, including asymmetric information and algorithmic transparency.

Jeanne Ross, The Fundamental Flaw in AI Implementation (Sloan Management Review, 14 July 2017)

Saturday, December 02, 2017

The Smell of Data

Retailers have long used fragrances to affect the customer in-store experience. See for example Air/Aroma.

So perhaps we can use smell to alert consumers to dodgy websites? An artist and graphic designer, Leanne Wijnsma, has built what is basically an air-defreshener: a hexagonal resin block with a perfume reservoir inside, which connects over Wi-Fi to your computer. When it notices a possible data leak (like the user connecting to an unsecured Wi-Fi network, or browsing a webpage over an insecure connection) — puff! It releases the smell of data.

James Vincent, What does a data leak smell like? This little device lets you find out (Verge, 31 Aug 2017)
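
The Verge piece doesn't go into the device's internals, but the detection side is easy to imagine: watch what the computer is doing and trigger the scent on anything unencrypted. A rough sketch, with a hypothetical release_scent() standing in for the hardware:

```python
from urllib.parse import urlparse

def release_scent():
    # Hypothetical stand-in for the hardware that opens the perfume reservoir
    print("psst ... the smell of data")

def check_browsing(url, wifi_is_open):
    """Trigger the scent for the obvious risks: an open Wi-Fi network,
    or a page served over plain HTTP rather than HTTPS."""
    if wifi_is_open or urlparse(url).scheme == "http":
        release_scent()

check_browsing("http://example.com/login", wifi_is_open=False)   # unencrypted page: puff
check_browsing("https://example.com/login", wifi_is_open=True)   # open Wi-Fi: puff
check_browsing("https://example.com/login", wifi_is_open=False)  # nothing to smell
```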

That's all very well, but it only sniffs out the most obvious risks. If you want to smell the actual data leak, you'd need a device that released a data leak fragrance when (or perhaps I should say whenever) your employer or favourite online retailer is hacked. Or maybe a device that sniffed around a corporate website looking for vulnerabilities ...

I'm sure my regular readers don't need me to spell out the flaws in that idea.

Related posts

Pax Technica - On Risk and Security (November 2017)
UK Retail Data Breaches (December 2017)

UK Retail Data Breaches

Some people talk as if data protection and security must be fixed before May 2018 because of GDPR. Wrong. Data protection and security must be fixed now.

Morrisons (2014)

The High Court has just found Morrisons to be liable for a leak of employee data by a disaffected employee in 2014. (The perpetrator got eight years in jail.)

Sports Direct (2016)

A hacker obtained employee details in September 2016, but Sports Direct failed to communicate the breach to the affected employees.

CEX (2017)

Second-hand gadget and video games retailer Cex has said up to two million customers have had their data stolen in an online breach.

Zomato (2017)

Up to 17 million users were affected by a data breach at restaurant search platform Zomato.

Tesco Bank (2016)

Cyber thieves stole £2.5m from customer accounts.

Related posts

The Smell of Data (December 2017)

Monday, September 25, 2017

Regulating Platforms

On Friday, Transport for London (TfL) declared that Uber was not fit and proper to hold a private hire operator licence. Uber's current licence expires next week. However, Uber can continue to operate in London until any appeal processes have been exhausted. (TfL Press Release, 22 September 2017)

By Saturday afternoon, a petition in Uber's favour had raised half a million signatures. Uber seems to put more energy into campaigning against evil regulators than into operating within the regulations, and was evidently already prepared for this fight. (You don't send out messages to millions of customers at the drop of a hat without a bit of forward planning.) As Emine Saner writes,
"Calling for better legislation certainly is not as exciting as a glossy app, or whipped-up social media reaction, but it may make your trip home safer – and would be a better use of online petitions."

The protests follow a number of well-worn arguments:
  • Many users of the Uber service (especially young women) have become dependent on a cheap, convenient and supposedly safer alternative to public transport and expensive taxis.
  • Many drivers have borrowed heavily to invest in the Uber business model, and fear being thrown into penury.
  • This is an anti-competitive and technologically backward move, prompted by entrenched interests. And as TfL is itself a transport operator, it is not appropriate that TfL should regulate its competitors.

None of these arguments can be taken completely at face value.

  • It is true that many women believe the Uber model is safer than the alternatives; however, some women have been raped, and other women have had extremely scary experiences. Uber is accused of failing to carry out proper checks, and failing to report serious incidents.
  • The Uber service is cheap not only because it cuts costs and exploits its drivers, but also because it is subsidized by Uber investors. This looks suspiciously like predatory pricing rather than fair competition. Analysts such as Izabella Kaminska argue that Uber will only become profitable when it has driven its competitors out of business, at which point it will be able to increase its prices. Like much of Silicon Valley, it appears to operate according to the Peter Thiel anti-competition playbook. Even Steve Bannon has been heard arguing for closer regulation of what are effectively monopoly platforms.
  • Technology companies such as Uber sometimes describe themselves as "disruptive". While it is true that disruptions sometimes yield socioeconomic benefits, the belief that disruption is always good for competition is based on ideology rather than evidence. Regulation is generally opposed to disruption.
  • And as Stephen Bush points out, it's not as a digital start-up company that Uber has fallen foul of regulations, but as an old-fashioned minicab operator. (As John Bull explains, Uber London is just a minicab company; the app is operated by Uber BV in the Netherlands. This corporate separation helps Uber to finesse both regulation and tax.) Persuading politicians and economists to see Uber as a shining example of technological progress is just "a very, very clever marketing trick".

I'm quoting Steve Bannon because I'm just amazed to find something I agree with him about. Regulating platforms is not the same as regulating regular companies, and the general art of regulation needs a kick up the proverbial. However, that is no reason to diss the current regulations or regulators, who are doing the best they can with insufficient regulatory mechanisms and resources. Experience from other cities shows that if Uber can't get its act together, there are plenty of others that can.

John Bull, Understanding Uber: It’s Not About The App (Reconnections, 25 September 2017)

Stephen Bush, The right are defending Uber, because they don't really understand it (New Statesman, 22 September 2017)

Martin Farrer, Nadia Khomami et al, More than 500,000 sign petition to save Uber as firm fights London ban (Guardian, 23 September 2017)

Ryan Grim, Steve Bannon Wants Facebook and Google Regulated Like Utilities (The Intercept, 27 July 2017)

Hubert Horan, Will the Growth of Uber Increase Economic Welfare? (14 September 2017)

Izabella Kaminska: for references, see earlier post Uber Mathematics 2 (December 2016)

Sam Levine, 'There is life after Uber': what happens when cities ban the service? (Guardian, 23 September 2017)

Jason Murugesu, Night bus or black cab - what will save stranded Londoners post-Uber? (New Statesman, 22 September 2017)

Andrew Orlowski, Why Uber isn't the poster child for capitalism you wanted (The Register, 26 September 2017)

Emine Saner, Will the end of Uber in London make women more or less safe? (Guardian, 25 September 2017)

Related posts (with further references): Platform, Regulation, Uber