Friday, July 27, 2018

Standardizing Processes Worldwide

September 2015
Lidl is looking to press ahead with standardizing processes worldwide and chose SAP ERP Retail powered by SAP HANA to do the job (PressBox, 2 September 2015)

November 2016
Lidl rolls out SAP for Retail powered by SAP HANA with KPS (Retail Times, 9 November 2016)

July 2018
Lidl stops million-dollar SAP project for inventory management (CIO, in German, 18 July 2018)

Lidl cancels SAP introduction after spending 500M Euro and seven years (An Oracle Executive, via LinkedIn, 20 July 2018)
Lidl software disaster another example of Germany’s digital failure (Handelsblatt Global, 30 July 2018)

I don't have any inside information about this project, but I have seen other large programmes fail because of the challenges of process standardization. When you are spending so much money on the technology, people across the organization may start to think of this as primarily a technology project. Sometimes it is as if the knowledge of how to run the business is no longer grounded in the organization and its culture but (by some form of transference) is located in the software. To be clear, I don't know if this is what happened in this case.

Also to be clear, some organizations have been very successful at process standardization. This is probably more to do with management style and organizational culture than technology choices alone.

Writing in Handelsblatt Global, Florian Kolf and Christof Kerkmann suggest that Lidl's core mentality was "but this is how we always do it". Alexander Posselt refers to Schicksalsgemeinschaften (literally "communities of fate"), which in this context suggests a kind of collective wilful blindness. Kolf and Kerkmann also make a point related to the notion of shearing layers.
Altering existing software is like changing a prefab house, IT experts say — you can put the kitchen cupboards in a different place, but when you start moving the walls, there’s no stability.
But at least with a prefab house, it is reasonably clear what counts as Cupboard and what counts as Wall. Whereas with COTS software, people may have widely different perceptions about which elements are flexible and which elements need to be stable. So the IT experts may imagine it's cheaper to change the business process than the software, while the business imagines it's easier and quicker to change the software than the business process.

What will Lidl do now? Apparently it plans to fall back on its old ERP system, at least in the short term. It's hard to imagine that Lidl is going to be in a hurry to burn that amount of cash on another solution straightaway. (Sorry Oracle!) But the frustrations with the old system are surely going to get greater over time, and Lidl can't afford to spend another seven years tinkering around the edges. So what's the answer? Organic planning perhaps?


Thanks to @EnterprisingA for drawing this story to my attention.

Slideshare: Organic Planning (September 2008), Next Generation Enterprise Architecture (September 2011)

Related Posts: SOA and Holism (January 2009), Differentiation and Integration (May 2010), EA Effectiveness and Process Standardization (August 2012), Agile and Wilful Blindness (April 2015).


Updated 31 August 2018

Tuesday, July 24, 2018

Evidence-Based Planning

Everybody's favourite internet-book-retailer-cum-cloud-computing-giant is planning for a wide range of outcomes after Brexit.
"Like any business, we consider a wide range of scenarios in planning discussions so that we’re prepared to continue serving customers and small businesses who count on Amazon, even if those scenarios are very unlikely," a spokesperson said.

However, a Government spokesperson dismissed speculation about civil unrest, saying
"Where is the evidence to suggest that would happen?"

To which one might counter

"Where is the evidence to suggest that wouldn't happen?"



There is a methodological gulf between these two positions. One is planning for things you can't prove won't happen. The other is NOT planning for things you can't prove WILL happen.

The political problem with planning for things that might not happen is that people may criticize you for wasting time and money on something that didn't happen. Whereas if you fail to plan for something that is unlikely to happen, and then it does happen, you can appeal to bad luck. Or the wrong kind of snow.

As with other modes of decision-making, planning simply to avoid censure is not necessarily conducive to good outcomes.


Gareth Corfield, I predict a riot: Amazon UK chief foresees 'civil unrest' for no-deal Brexit (The Register, 23 July 2018)

Rob Davies, No-deal Brexit risks 'civil unrest', warns Amazon's UK boss (The Guardian, 23 July 2018)

Related Post: Decision-Making Models (March 2017)

Friday, June 08, 2018

Data and Intelligence Principles From Major Players

The purpose of this blogpost is to enumerate the declared ethical positions of major players in the data world. This is a work in progress.




Google

In June 2018, Sundar Pichai (Google CEO) announced a set of AI principles for Google: seven principles, four application areas that Google will avoid (including weapons), references to international law and human rights, and a commitment to a long-term sustainable perspective.

https://www.blog.google/topics/ai/ai-principles/


Also worth noting the statement on AI ethics and social impact published by DeepMind last year. (DeepMind was acquired by Google in 2014 and is now a subsidiary of Google parent Alphabet.)

https://deepmind.com/applied/deepmind-ethics-society/research/



IBM

In January 2017, Ginni Rometty (IBM CEO) announced a set of Principles for the Cognitive Era.

https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/

This was followed up in October 2017, with a more detailed ethics statement for data and intelligence, entitled Data Responsibility @IBM.

https://www.ibm.com/blogs/policy/dataresponsibility-at-ibm/



Microsoft

In January 2018, Brad Smith (Microsoft President and Chief Legal Officer) announced a book called The Future Computed: Artificial Intelligence and its Role in Society, to which he had contributed a foreword.

https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/



Twitter


@Jack Dorsey (Twitter CEO) asked the Twitterverse whether Google's AI principles were something the tech industry as a whole could get behind (via The Register, 9 June 2018).



Selected comments

These comments are mostly directed at the Google principles, because these are the most recent. However, many of them apply equally to the others. Commentators have also remarked on the absence of ethical declarations from Amazon.


Many commentators have welcomed Google's position on military AI, and congratulated those Google employees who lobbied for discontinuing its work with the US Department of Defense analysing drone footage, known as Project Maven. See @kateconger, Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program (Gizmodo, 1 June 2018) and Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance (Gizmodo, 7 June 2018).

Interesting thread from former Googler @tbreisacher on the new principles (HT @kateconger)

@EricNewcomer talks about What Google's AI Principles Left Out (Bloomberg 8 June 2018). He reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises", complains that the Google principles are "peppered with lawyerly hedging and vague commitments", and asks about governance - "who decides if Google has fulfilled its commitments".

@katecrawford (Twitter, 8 June 2018) also asks about governance. "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?" And @mer__edith (Twitter, 8 June 2018) calls for "strong governance, independent external oversight and clarity".

Andrew McStay (Twitter 8 June 2018) asks about Google's business model. "Please tell me if you spot any reference to advertising, or how Google actually makes money. Also, I’d be interested in knowing if Government “work” dents reliance on ads."

Earlier, in relation to DeepMind's ethics and social impact statement, @riptari (Natasha Lomas) suggested that "it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts" (TechCrunch October 2017). See also my post on Conflict of Interest (March 2018).

@rachelcoldicutt asserts that "ethical declarations like these need to have subjects. ... If they are to be useful, and can be taken seriously, we need to know both who they will be good for and who they will harm." She complains that the Google principles fail on these counts. (Tech ethics, who are they good for? Medium 8 June 2018)


Updated 11 June 2018

Tuesday, June 05, 2018

Responsibility by Design

Over the past twelve months or so, we have seen a big shift in the public attitude towards new technology. More people are becoming aware of the potential abuses of data and other cool stuff. Scandals involving Facebook and other companies have been headline news.

Security professionals have been pushing the idea of security by design for ages, and the push to comply with GDPR has made a lot of people aware of privacy by design. Responsibility by design (RbD) represents a logical extension of these ideas to include a range of ethical issues around new technology.

Here are some examples of the technologies that might be covered by this.

Technologies such as | Benefits such as | Dangers such as | Principles such as
Big Data | Personalization | Invasion of Privacy | Consent
Algorithms | Optimization | Algorithmic Bias | Fairness
Automation | Productivity | Fragmentation of Work | Human-Centred Design
Internet of Things | Cool Devices | Weak Security | Ecosystem Resilience
User Experience | Convenience | Dark Patterns, Manipulation | Accessibility, Transparency


Ethics is not just a question of bad intentions, it includes bad outcomes through misguided action. Here are some of the things we need to look at.
  • Unintended outcomes - including longer-term or social consequences. For example, platforms like Facebook and YouTube are designed to maximize engagement. The effect of this is to push people into progressively more extreme content in order to keep them on the platform for longer.
  • Excluded users - this may be either deliberate (we don't have time to include everyone, so let's get something out that works for most people) or unwitting (well it works for people like me, so what's the problem)
  • Neglected stakeholders - people or communities that may be indirectly disadvantaged - for example, a healthy politics that may be undermined by the extremism promoted by platforms such as Facebook and YouTube.
  • Outdated assumptions - we used to think that data was scarce, so we grabbed as much as we could and kept it for ever. We now recognize that data is a liability as well as an asset, and we now prefer data minimization - only collect and store data for a specific and valid purpose. A similar consideration applies to connectivity. We are starting to see the dangers of a proliferation of "always on" devices, especially given the weak security of the IoT world. So perhaps we need to replace a connectivity-maximization assumption with a connectivity minimization principle. There are doubtless other similar assumptions that need to be surfaced and challenged.
  • Responsibility break - potential for systems being taken over and controlled by less responsible stakeholders, or the chain of accountability being broken. This occurs when the original controls are not robust enough.
  • Irreversible change - systems that cannot be switched off when they are no longer providing the benefits and safeguards originally conceived.


Wikipedia: Algorithmic Bias (2017), Dark Pattern (2017), Privacy by Design (2011), Secure by Design (2005), Weapons of Math Destruction (2017). The date after each page shows when it first appeared on Wikipedia.

Ted Talks: Cathy O'Neil, Zeynep Tufekci, Sherry Turkle

Related Posts: Pax Technica (November 2017), Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018)



Updated 12 June 2018

Monday, June 04, 2018

Outdated Assumptions - Connectivity Hunger

Behaviours developed in a state of scarcity may cease to be appropriate in a state of abundance. Our stone age ancestors struggled to get enough energy-rich food, so they acquired a taste for food with a strong energy hit. We inherited a greed for sweet and fatty foods, and can now stuff our faces on delicacies our stone age ancestors never knew, such as ice-cream and cheesecake.

***

So let's talk about data. Once upon a time, data processing systems struggled to get enough data, and long-term data storage was expensive, so we were told to regard data as an asset. People learned to grab as much data as they could, and keep it until the data storage was full. But the greed for data was always moderated by the cost of collection, storage and retrieval, as well as the limited choice of data that was available in the first place.

Take away the assumption of data scarcity and cost, and our greed for data becomes problematic. We now recognize that data (especially personal data) can be a liability as much as an asset, and have become wedded to the principle of data minimization - only collecting the data you need, and only keeping it as long as you need.
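To make the principle concrete, data minimization can be sketched in a few lines of code. This is purely illustrative: the purposes, field names and retention periods are all invented, not taken from any real system.

```python
from datetime import datetime, timedelta

# Illustrative sketch of data minimization (all names invented).
# Only collect the fields justified by a stated purpose, and attach
# an expiry so records are not kept forever.

PURPOSE_FIELDS = {
    "order_fulfilment": {"name", "address", "order_id"},
    "fraud_check": {"order_id", "payment_hash"},
}
RETENTION = {
    "order_fulfilment": timedelta(days=90),
    "fraud_check": timedelta(days=30),
}

def collect(raw_record: dict, purpose: str) -> dict:
    """Keep only the fields justified by the purpose, with an expiry."""
    allowed = PURPOSE_FIELDS[purpose]
    minimal = {k: v for k, v in raw_record.items() if k in allowed}
    minimal["_expires"] = datetime.utcnow() + RETENTION[purpose]
    return minimal

def purge(store: list) -> list:
    """Drop records past their retention period."""
    now = datetime.utcnow()
    return [r for r in store if r["_expires"] > now]
```

The point is the shape, not the code: collection is gated by purpose, and deletion is the default once the purpose expires, reversing the old grab-and-keep habit.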

***

But data scarcity is not the only outdated assumption that still influences our behaviour. Let's also talk about connectivity. Once upon a time, connectivity was intermittent, slow, unreliable. Hungry for greater connectivity, computer scientists dreamed of a world where everything was always on. More recently, Facebook has argued that Connectivity is a Human Right. (But you can only read this document if you have a Facebook account!)

But as with an overabundance of data, we may experience an overabundance of connectivity. Thus we are starting to realise the downside of the "always on", not just in the highly insecure world of the Internet of Things (Rainie and Anderson) but also in corporate computing (Ben-Meir, Hill).

Increasingly, products and services are being designed for "always on" operation. Ben-Meir notes Apple’s assertion that constant connectivity is essential for features such as AirDrop and AirPlay, and only today a colleague was grumbling to me about the downgrading of offline functionality in Microsoft Outlook.

Perhaps therefore, similar to the data minimization principle, there needs to be a network minimization principle. The wider the network, the larger the scope of responsibility. Or as Bruce Schneier puts it, "the more we network things together, the more vulnerabilities on one thing will affect other things". So don’t just connect because you can. Connect for a reason, disconnect by default, support offline functionality and disruption-tolerance, prefer secure hubs to insecure peer-to-peer.
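What might "disconnect by default" look like in practice? Here is a rough sketch; the class and the transport interface are invented for illustration, not drawn from any real library.

```python
import queue

# Rough sketch of "disconnect by default" (invented names, not a real API).
# Work is queued locally, so the system tolerates disruption; the network
# link is opened only when there is a reason to connect, and closed
# immediately afterwards.

class DisconnectedByDefaultClient:
    def __init__(self, transport):
        self.transport = transport   # any object with open()/send()/close()
        self.outbox = queue.Queue()  # offline buffer: disruption-tolerant

    def submit(self, message):
        """Always works offline; nothing touches the network here."""
        self.outbox.put(message)

    def flush(self):
        """Connect for a reason, send, then disconnect again."""
        if self.outbox.empty():
            return 0                 # no reason to connect
        self.transport.open()
        sent = 0
        try:
            while not self.outbox.empty():
                self.transport.send(self.outbox.get())
                sent += 1
        finally:
            self.transport.close()   # back to the default: disconnected
        return sent
```

Offline functionality is the normal case here, and connectivity is the exception, which is exactly the inversion of the "always on" assumption.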

Bruce Schneier again: "We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized. If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much."



Elad Ben-Meir, How an 'Always-On' Culture Compromises Corporate Security (Info Security, 2 November 2017)

Paul Hill, Always-on Access Brings Always-Threatening Security Risks (System Experts, 25 June 2015)

Lee Rainie and Janna Anderson, The Internet of Things Connectivity Binge: What Are the Implications? (Pew Research Centre, 6 June 2017)

Bruce Schneier, Click Here to Kill Everyone (New York Magazine, 27 January 2017)

Maeve Shearlaw, Mark Zuckerberg says connectivity is a basic human right – do you agree? (Guardian 3 Jan 2014)


Thanks to @futureidentity for useful discussion

Saturday, April 28, 2018

Be the Change

Anyone fancy a job as Head of Infrastructure? Here is the job description, posted to LinkedIn earlier this week.

We're responsible for "IT Change", including the end to end architecture, deployment and maintenance of IT infrastructure technologies across [organization]. We’re the first technical point of contact for people in [organization] who want to speak to the CIO function. We take business requirements and architect solutions, then work with [group IT] to input the solution into our data centres.
We provide direction, thought leadership, guidance and subject matter expertise on our IT estate to make sure we get the maximum value from our investment in our IT. We do this by defining our IT strategy and aligning it with Group IT, producing technology roadmaps and identifying and recommending IT solution opportunities, supporting business initiatives and ideas, and documenting and managing our architecture assets.
The Head of Infrastructure is a key leadership role in the CIO and critical to the delivery of both customer and partner facing technology. Working closely with our technology supplier, group IT, CISO and Service Management teams, this leader will be accountable for the end to ends analysis, design, build, test and implementation of; Platforms and Middleware, Network and Communications, Cloud Services, Data Warehouse and End User Services.
https://www.linkedin.com/jobs/view/633114556/

The job description contains a number of key words and phrases that architects should be comfortable with - direction, strategy, alignment, thought leadership, roadmaps, architecture assets.

But perhaps the first clue that there may be something amiss with this position is the fact that "IT Change" is in quotes. (As if to say that in IT, nothing really changes.)

The Register has contacted the person who is (according to LinkedIn) currently holding this position. Is he moving on, moving up? Could this vacancy be connected in any way with recent IT difficulties facing the organization? (No answer reported. Curious.)

The recent IT difficulties facing this particular organization have come to the attention of politicians and the media. After the chair of the Treasury Select Committee described the situation as having "all the hallmarks of an IT meltdown", the word "meltdown" is now the descriptor of choice for journalists covering the story.

But help is at hand: IBM has kindly volunteered to help sort out the mess. So we can guess what "working closely with our technology supplier" might look like.




Karl Flinders, TSB IT meltdown has the makings of an epic (Computer Weekly, 25 April 2018)

Samuel Gibbs, Warning signs for TSB's IT meltdown were clear a year ago – insider (The Guardian, 28 April 2018)

Kat Hall, Newsworthy Brit bank TSB is looking for a head of infrastructure (The Register, 27 April 2018)

Stuart Sumner, TSB brings in IBM in attempt to resolve IT crisis (Computing, 26 April 2018)

Tuesday, March 20, 2018

Making the World More Open and Connected

Last year, Facebook changed its mission statement, from "Making The World More Open And Connected" to "Bringing The World Closer Together".

As I said in September 2005, interoperability is not just a technical question but a sociotechnical question (involving people, processes and organizations). (Some of us were writing about "open and connected" before Facebook existed.) But geeks often start with the technical interface, or what is sometimes called an API.

For many years, Facebook had an API that allowed developers to snoop on friends' data: this was shut down in April 2015. As Constine reported at the time, this was not just because the API was "kind of shady" but also to "deny developers the ability to build apps ... that could compete with Facebook’s own products". Sandy Parakilas (himself a former Facebook executive) made a similar point (as reported by Paul Lewis): Facebook executives were nervous about the commercial value of data being passed to other companies, and worried that the large app developers could be building their own social graphs.

In other words, the decision was not motivated by concern for user privacy but by the preservation of Facebook's hegemony.

When Tim Berners-Lee first talked about the Giant Global Graph in 2007, it seemed such a good idea. When Facebook launched the Open Graph in 2010, this was billed as "a taste of the future where everything can be more personalized". Like!




Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal, August 2006)

Josh Constine, Facebook Is Shutting Down Its API For Giving Your Friends’ Data To Apps (TechCrunch, 28 April 2015)

Josh Constine and Frederic Lardinois, Everything Facebook Launched At f8 And Why (TechCrunch, 2 May 2014)

John Lanchester, You Are the Product (London Review of Books, 17 August 2017)

Paul Lewis, 'Utterly horrifying': ex-Facebook insider says covert data harvesting was routine (Guardian, 20 March 2018)

Caroline McCarthy, Facebook F8: One graph to rule them all (CNet, 21 April 2010)

Sandy Parakilas, We Can’t Trust Facebook to Regulate Itself (New York Times, 19 November 2017)

Wikipedia: Giant Global Graph, Open API


Related Posts: SOA Stupidity (September 2005), Social Networking as Reuse (November 2007), Security is Downstream from Strategy (March 2018), Connectivity Hunger (June 2018)

Saturday, March 10, 2018

Fail Fast - Why did the Chicken cross the road?

A commonly accepted principle of architecture and engineering is to avoid a single point of failure (SPOF). A single depot for a chain of over 850 fast food restaurants could be risky, as KFC was warned when it announced that it was switching its logistics from Bidvest to a partnership with DHL and QSL, to be served out of a single depot in Rugby. We may imagine that the primary motivation for KFC was cost-saving, although the announcement was dressed up in management speak - "re-writing the rule book" and "setting a new benchmark".

The new system went live on 14th February 2018. The changeover did not go well: by the weekend, over three quarters of the stores were closed. Rugby is a great location for a warehouse - except when there is a major incident on a nearby motorway. (Who knew that could happen?)

After a couple of weeks of disruption, as well as engaging warehouse-as-a-service startup Stowga for non-food items, KFC announced that it was resuming its relationship with Bidvest. According to some reports, Burger King also flirted with DHL some years ago before returning to Bidvest. History repeating itself.

However, the problems faced by KFC cannot be attributed solely to the decision to supply the whole UK mainland from Rugby. A just-in-time supply chain needs contingency planning - covering business continuity and disaster recovery. (Good analysis by Richard Priday, who tweets as @InsomniacSteel.)
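The SPOF principle itself is simple enough to sketch in code. The depot functions below are hypothetical stand-ins, not anything from KFC's actual supply chain: the idea is just that no single depot failure should take down the whole operation.

```python
# Illustrative failover sketch (depot names invented): avoid a single
# point of failure by trying suppliers in priority order until one can
# fulfil the order, failing only when every option is exhausted.

class DepotUnavailable(Exception):
    pass

def fulfil(order, depots):
    """Try each depot (a callable) in priority order."""
    errors = []
    for depot in depots:
        try:
            return depot(order)
        except DepotUnavailable as e:
            errors.append(e)  # note the failure, try the next depot
    raise DepotUnavailable(f"all {len(depots)} depots failed: {errors}")
```

Of course, redundancy in code is the easy part; the expensive part is maintaining genuinely independent second depots, contracts and routes so that the fallback is actually available when the motorway is closed.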



KFC revolutionizes UK foodservice supply chain with DHL and QSL appointment (DHL Press Release, 11 Oct 2017)

Andrew Don, KFC admits chicken waste as cost of DHL failure grows (The Grocer, 23 Feb 2018)

Andrea Felsted, Supply chains: Look for the single point of failure (FT 2 May 2011)

Adam Leyland, KFC supply chain fiasco is Heathrow's Terminal 5 all over again (The Grocer, 23 Feb 2018)

Charlie Pool (CEO of Stowga), Warehousing on-demand saves KFC (Retail Technology 26 February 2018)

Richard Priday, The inside story of the great KFC chicken shortage of 2018 (Wired 21 February 2018) How KFC ended the great chicken crisis by taking care of its mops (Wired 2 March 2018) The KFC chicken crisis is finally over: it's (sort of) ditched DHL (Wired 8 March 2018)

Carol Ryan, Stuffed KFC only has itself to blame (Reuters, 20 February 2018)

Su-San Sit, KFC was 'warned DHL would fail' (Supply Management, 20 February 2018)

Matthew Weaver, Most KFCs in UK remain closed because of chicken shortage (Guardian 19 Feb 2018) KFC was warned about switching UK delivery contractor, union says (Guardian 20 Feb 2018)

Zoe Wood, KFC returns to original supplier after chicken shortage fiasco (Guardian 8 March 2018)

Wikipedia: Single Point of Failure

Related posts: Fail Fast - Burger Robotics (March 2018)

Sunday, March 04, 2018

The Exception That Proves the Rule

My thin clean-shaven friend @futureidentity is reassured by messages that appear to be misdirected.



But when I read his latest tweet, I thought of the exception that proves the rule. Fowler defines five uses of this phrase; I'm going to use two of them.

Firstly, when an advert is exceptionally badly targeted, we notice it precisely because it is an outlier - an exception to the normal pattern or rule. Thus reinforcing our belief in the normal pattern - the idea that many if not most messages nowadays are moderately well targeted. This is what Fowler calls the "loose rhetorical sense" of the phrase.

Secondly, adverts aren't necessarily misdirected by accident. Conjurers and politicians use misdirection as a form of deception, to distract the audience's attention from what they are really doing. (Some commentators regard the 45th US President as a master of misdirection.)

This is how Target does it, so the pregnant customer doesn't feel she's being stalked.
"Then we started mixing in all these ads for things we knew pregnant women would never buy, so the baby ads looked random. We’d put an ad for a lawn mower next to diapers. We’d put a coupon for wineglasses next to infant clothes. That way, it looked like all the products were chosen by chance." (Forbes)

So just because a marketing message appears to be a random error, that doesn't mean it is. Further investigation might reveal it to be carefully designed to foster exactly that illusion in a specific recipient. And if it turns out to be targeted after all, this would be what Fowler calls "the secondary rather complicated scientific sense" of the phrase.




Related posts

85 million faces (Oct 2016)


Sources

Charles Duhigg, How companies learn your secrets (New York Times, 16 Feb 2012)

Kashmir Hill, How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did (Forbes, 16 Feb 2012)

Wikipedia: Exception that proves the rule, Misdirection (magic)

Sunday, February 04, 2018

The Hungry Tapeworm

This week, three American companies announced a joint venture to sort out healthcare for their own employees. Ambitious, huh?

This is not the first time large American companies have tried to challenge the market power of healthcare providers. According to Warren Buffett, "the ballooning costs of healthcare act as a hungry tapeworm on the American economy". Intel and Walmart are among those that have previously ventured into this area. In 2016, 20 companies including Coca Cola, American Express, IBM and Macy’s joined the Health Transformation Alliance (HTA). So why should anyone take this latest attempt seriously? Only because the three companies are Amazon, Berkshire Hathaway and JPMorgan Chase. And Amazon (need I remind you?) eats everyone's lunch.


John Naughton sees this as a typical play for a data hungry tech giant, based on two hypotheses.
  • Transactional data will lead to transactional efficiencies. The joint venture starts with the three companies experimenting on their own employees, who will "tell Amazon and its algorithms what works and doesn’t work". 
  • "Mastery of big data might yield clinical benefit".
As Pressman and Lashinsky note, the experiment is based on a pretty good sample of Americans: "a diverse workforce spanning low-wage normal folk to the most elite of our society".


Amazon is obviously a major player in the data and analytics world, but so is IBM, which is playing an important role in the HTA. Not only is IBM a corporate member, but IBM Watson Health will do the data and analytics. According to Pharmaceutical Commerce, it will "aggregate participating HTA member companies' data, enabling insights both into outcomes of medical interventions, as well as wellness initiatives to improve employees’ health".

And what about Google? Google Health was discontinued in 2011, following a lack of widespread adoption. Perhaps data isn't the whole story.


But Amazon is not just about data. In an article published before this announcement, Zack Kanter attributes Amazon's strategic dominance to SOA. "Each piece of Amazon is being built with a service-oriented architecture, and Amazon is using that architecture to successively turn every single piece of the company into a separate platform — and thus opening each piece to outside competition."

Moazed and Johnson discuss the platform implications of the healthcare announcement. They argue that "platforms thrive with fragmentation, not consolidation", and that "the new platform needs to offer enough potential scale to outweigh those risks, otherwise manufacturers may be too afraid to join". Sarah Buhr sees this as an opportunity for smaller players, such as Collective Health.

Three employers, even large ones, probably won’t have enough muscle to negotiate fair prices for healthcare and pharma. But if Bezos can create the right expectations, and provide a flexible platform for smaller players ...






Health Transformation Alliance sets its 2017 agenda (Pharmaceutical Commerce, 9 March 2017)

Amazon alliance takes on ‘hungry tapeworm’ of healthcare costs (Pharmaceutical Technology, 1 February 2018)

Sarah Buhr, Collective Health Wants To Replace The Health Insurance Industry With A Software Program (TechCrunch, 11 Aug 2014)

Sarah Buhr, Amazon’s new healthcare company could give smaller healthtech players a boost (TechCrunch, 30 Jan 2018)

Paul Demko, Amazon's new health care business could shake up industry after others have failed (Politico, 30 January 2018)

Zack Kanter, Why Amazon is eating the world (TechCrunch, 14 May 2017)

Paul Martyn, Healthcare Consumerism: Taming The Hungry Tapeworm (Forbes, 30 January 2018)

Alex Moazed and Nicholas L Johnson, Amazon's Long-Awaited Health Care Platform (Inc, 30 January 2018)

John Naughton, Healthcare is a huge industry – no wonder Amazon is muscling in (Observer, 4 February 2018)

Aaron Pressman and Adam Lashinsky, Data Sheet—Why Jeff Bezos Just Might Crack the Health Care Challenge (Fortune, 31 January 2018)

Jordan Weissmann, Can Amazon, Berkshire Hathaway, and JPMorgan Revolutionize Health Care? (Slate, 30 Jan 2018)

Wikipedia: Google Health


Updated 5 February 2018