Tuesday, November 06, 2018

Big Data and Organizational Intelligence

Ten years ago, the editor of Wired Magazine published an article proclaiming the end of theory: "With enough data, the numbers speak for themselves."

The idea that data (or facts) speak for themselves, with no need for interpretation or analysis, is a common trope. It is sometimes associated with a legal doctrine known as Res Ipsa Loquitur - the thing speaks for itself. However, this legal doctrine isn't about truth but about responsibility: if a surgeon leaves a scalpel inside the patient, this fact alone is enough to establish the surgeon's negligence.

Or even that the world speaks for itself. The world, someone once asserted, is all that is the case, the totality of facts, not of things. Paradoxically, big data often means very large quantities of very small (atomic) data.

But data, however big, does not provide a reliable source of objective truth. This is one of the six myths of big data identified by Kate Crawford, who points out, "data and data sets are not objective; they are creations of human design". In other words, we don't just build models from data, we also use models to obtain data. This is linked to Piaget's account of how children learn to make sense of the world in terms of assimilation and accommodation. (Piaget called this Genetic Epistemology.)

Data also cannot provide explanation or understanding. Data can reveal correlation but not causation. Which is one of the reasons why we need models. As Kate Crawford also observes, "we get a much richer sense of the world when we ask people the why and the how not just the how many".
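To see why, consider a minimal sketch (the variables and numbers are invented for illustration): two quantities can be strongly correlated simply because both are driven by a third factor that isn't in the dataset, so the correlation by itself tells us nothing about what would happen if we intervened on either one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: temperature drives both ice cream sales and
# swimming accidents; neither causes the other.
temperature = rng.normal(20, 5, size=1000)               # the hidden common cause
ice_cream_sales = 50 + 3.0 * temperature + rng.normal(0, 5, size=1000)
swimming_accidents = 2 + 0.5 * temperature + rng.normal(0, 2, size=1000)

# The data alone show a strong correlation between sales and accidents...
r = np.corrcoef(ice_cream_sales, swimming_accidents)[0, 1]
print(f"correlation: {r:.2f}")   # roughly 0.75 with these settings

# ...but no amount of such data can tell us that banning ice cream
# would reduce accidents. Answering the "why" needs a causal model.
```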

In the traditional world of data management, there is much emphasis on the single source of truth. Michael Brodie (who knows a thing or two about databases), while acknowledging the importance of this doctrine for transaction systems such as banking, argues that it is not appropriate everywhere. "In science, as in life, understanding of a phenomenon may be enriched by observing the phenomenon from multiple perspectives (models). ... Database products do not support multiple models, i.e., the reality of science and life in general." One approach Brodie discusses for addressing this difficulty is ensemble modelling: running several different analytical models and comparing or aggregating the results. (I referred to this idea in my post on the Shelf-Life of Algorithms).
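For readers who want a concrete picture of ensemble modelling, here is a minimal sketch (using scikit-learn and a synthetic dataset, not anything from Brodie's work): several models embodying different assumptions about the data are fitted, their individual results are compared, and their predictions are aggregated.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Synthetic data standing in for whatever phenomenon is being analysed
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Three models with quite different assumptions (perspectives) about the data
models = [
    ("linear", LinearRegression()),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=1)),
    ("knn", KNeighborsRegressor(n_neighbors=5)),
]

# Compare the perspectives individually...
for name, model in models:
    print(name, model.fit(X_train, y_train).score(X_test, y_test))

# ...then aggregate them into a single ensemble prediction
ensemble = VotingRegressor(models).fit(X_train, y_train)
print("ensemble", ensemble.score(X_test, y_test))
```

The organizational point carries over: each model, like each perspective in an organization, sees something the others miss, and the comparison is as informative as the aggregate.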

Along with the illusion that what the data tells you is true, we can identify two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important. These are not just illusions of big data of course - any monitoring system or dashboard can foster them. The panopticon affects not only the watched but also the watcher.

From the perspective of organizational intelligence, the important point is that data collection, sensemaking, decision-making, learning and memory form a recursive loop - each inextricably based on the others. An organization only perceives what it wants to perceive, and this depends on the conceptual models it already has - whether these are explicitly articulated or unconsciously embedded in the culture. Which is why real diversity - in other words, genuine difference of perspective, not just bureaucratic profiling - is so important, because it provides the organizational counterpart to the ensemble modelling mentioned above.





Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired, 23 June 2008)

Michael L Brodie, Why understanding of truth is important in Data Science? (KD Nuggets, January 2018)

Kate Crawford, The Hidden Biases in Big Data (HBR, 1 April 2013)

Kate Crawford, The Anxiety of Big Data (New Inquiry, 30 May 2014)

Bruno Gransche, The Oracle of Big Data – Prophecies without Prophets (International Review of Information Ethics, Vol. 24, May 2016)

Thomas McMullan, What does the panopticon mean in the age of digital surveillance? (Guardian, 23 July 2015)

Evelyn Ruppert, Engin Isin and Didier Bigo, Data politics (Big Data and Society, July–December 2017: 1–7)

Ian Steadman, Big Data and the Death of the Theorist (Wired, 25 January 2013)

Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922)


Related posts

Information Algebra (March 2008)
How Dashboards Work (November 2009)
Co-Production of Data and Knowledge (November 2012)
Real Criticism - The Subject Supposed to Know (January 2013)
The Purpose of Diversity (December 2014)
The Shelf-Life of Algorithms (October 2016)
The Transparency of Algorithms (October 2016)


Wikipedia: Ensemble Learning, Genetic Epistemology, Panopticism, Res Ipsa Loquitur (the thing speaks for itself)

Stanford Encyclopedia of Philosophy: Kant and Hume on Causality


For more on Organizational Intelligence, please read my eBook.
https://leanpub.com/orgintelligence/

Sunday, November 04, 2018

On Repurposing AI

With great power, as they say, comes great responsibility.


Speaking at Microsoft's Future Decoded event in London this week, Satya Nadella asserted that taking an AI trained for one purpose and using it for another was "an unethical use", according to reporter @richard_speed of @TheRegister.

If Microsoft really believes this, it would certainly be a radical move. In April this year Mark Russinovich, Azure CTO, gave a presentation at the RSA Conference on Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud Defense.

Repurposing data and intelligence - using AI for a purpose other than the one originally intended - may certainly have ethical consequences. This doesn't necessarily mean it's wrong, simply that the ethics must be reexamined. Responsibility by design (like privacy by design, from which it inherits some critical ideas) considers a design project in relation to a specific purpose and use-context. So if the purpose and context change, it is necessary to reiterate the responsibility-by-design process.

A good analogy would be the off-label use of medical drugs. There is considerable discussion on the ethical implications of this very common practice. For example, Furey and Wilkins argue that off-label prescribing imposes additional responsibilities on a medical practitioner, including weighing the available evidence and proper disclosure to the patient.

There are often strong arguments in favour of off-label prescribing (in medicine) or transfer learning (in AI). Where a technology provides some benefit to some group of people, there may be good reasons for extending these benefits. For example, Rachel Silver argues that transfer learning has democratized machine learning, lowering the barriers to entry and thus promoting innovation. Interestingly, there seem to be some good examples of transfer learning in AI for medical purposes.
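To make the mechanics concrete: transfer learning typically means taking a model trained on one large dataset and reusing most of it for a new task, retraining only a small part. The sketch below is illustrative only (it uses PyTorch and torchvision; the class count and the new-domain framing are placeholder assumptions, not a description of any of the systems cited here).

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network trained for one purpose (ImageNet classification)...
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# ...freeze everything it has already learned...
for param in backbone.parameters():
    param.requires_grad = False

# ...and repurpose it by swapping the final layer for a new task.
# NUM_CLASSES is a placeholder for the new domain (e.g. kinds of scan).
NUM_CLASSES = 4
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

# Only the new layer's parameters are trained on the new (smaller) dataset.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...training loop over the new dataset would go here...
```

The analogy with off-label prescribing is visible in the code: almost everything the model "knows" was learned for the original purpose, which is exactly why the ethics of the new use need to be examined afresh.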

However, transfer learning in AI raises some ethical concerns of its own: not only the potential consequences for people affected by the repurposed algorithms, but also potential sources of error. For example, Wang and others identify a potential vulnerability to misclassification attacks.

There are also some questions of knowledge ownership and privacy that were relevant to older modes of knowledge transfer (see for example Baskerville and Dulipovici).



By the way, if you thought the opening quote was a reference to Spiderman, Quote Investigator has traced a version of it to the French Revolution. Other versions have been attributed to various statesmen, including Churchill and Roosevelt.


Richard Baskerville and Alina Dulipovici, The Ethics of Knowledge Transfers and Conversions: Property or Privacy Rights? (HICSS'06: Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006)

Katrina Furey and Kirsten Wilkins, Prescribing “Off-Label”: What Should a Physician Disclose? (AMA Journal of Ethics, June 2016)

Marian McHugh, Microsoft makes things personal at this year's Future Decoded (Channel Web, 2 November 2018)

Rachel Silver, The Secret Behind the New AI Spring: Transfer Learning (TDWI, 24 August 2018)

Richard Speed, 'Privacy is a human right': Big cheese Sat-Nad lays out Microsoft's stall at Future Decoded (The Register, 1 November 2018)

Bolun Wang et al, With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning (Proceedings of the 27th USENIX Security Symposium, August 2018)


See also Off-Label (March 2005)

Thursday, October 18, 2018

Why Responsibility by Design now?

Excellent article by @riptari, providing some context for Gartner's current position on ethics and privacy.

Gartner has been talking about digital ethics for a while now - for example, it got a brief mention on the Gartner website last year. But now digital ethics and privacy has been elevated to the Top Ten Strategic Trends, along with (surprise, surprise) Blockchain.

Progress of a sort, says @riptari, as people are increasingly concerned about privacy.

The key point is really the strategic obfuscation of issues that people do in fact care an awful lot about, via the selective and non-transparent application of various behind-the-scenes technologies up to now — as engineers have gone about collecting and using people’s data without telling them how, why and what they’re actually doing with it. 
Therefore, the key issue is about the abuse of trust that has been an inherent and seemingly foundational principle of the application of far too much cutting edge technology up to now. Especially, of course, in the adtech sphere. 
And which, as Gartner now notes, is coming home to roost for the industry — via people’s “growing concern” about what’s being done to them via their data. (For “individuals, organisations and governments” you can really just substitute ‘society’ in general.) 
Technology development done in a vacuum with little or no consideration for societal impacts is therefore itself the catalyst for the accelerated concern about digital ethics and privacy that Gartner is here identifying rising into strategic view.

Over the past year or two, some of the major players have declared ethics policies for data and intelligence, including IBM (January 2017), Microsoft (January 2018) and Google (June 2018). @EricNewcomer reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises".

According to the Magic Sorting Hat, high-minded vision can get organizations into the Ravenclaw or Slytherin quadrants (depending on the sincerity of the intention behind the vision). But to get into the Hufflepuff or Gryffindor quadrants, organizations need the ability to execute. So it's not enough for Gartner simply to lecture organizations on the importance of building trust.

Here we go round the prickly pear
Prickly pear prickly pear
Here we go round the prickly pear
At five o'clock in the morning.




Natasha Lomas (@riptari), Gartner picks digital ethics and privacy as a strategic trend for 2019 (TechCrunch, 16 October 2018)

Sony Shetty, Getting Digital Ethics Right (Gartner, 6 June 2017)


Related posts (with further links)

Data and Intelligence Principles from Major Players (June 2018)
Practical Ethics (June 2018)
Responsibility by Design (June 2018)
What is Responsibility by Design (October 2018)

What is Responsibility by Design

Responsibility by design (RbD) represents a logical extension of Security by Design and Privacy by Design, as I stated in my previous post. But what does that actually mean?

X by design is essentially a form of governance that addresses a specific concern or set of concerns - security, privacy, responsibility or whatever.

  • What. A set of concerns that we want to pay attention to, supported by principles, guidelines, best practices, patterns and anti-patterns.
  • Why. A set of positive outcomes that we want to attain and/or a set of negative outcomes that we want to avoid.
  • When. What triggers this governance activity? Does it occur at a fixed point in a standard process or only when specific concerns are raised? Is it embedded in a standard operational or delivery model?
  • For Whom. How are the interests of stakeholders and expert opinions properly considered? To whom should this governance process be visible?
  • Who. Does this governance require specialist input or independent review, or can it usually be done by the designers themselves?
  • How. Does this governance include some degree of formal verification, independent audit or external certification, or is an informal review acceptable? How much documentation is needed?
  • How Much. Design typically involves a trade-off between different requirements, so this is about the weight given to X relative to anything else.


#JustAnEngineer
Check out @katecrawford talking at the Royal Society in London this summer. Just an Engineer.



Related posts

Practical Ethics (June 2018), Responsibility by Design (June 2018)

Friday, July 27, 2018

Standardizing Processes Worldwide

September 2015
Lidl is looking to press ahead with standardizing processes worldwide and chose SAP ERP Retail powered by SAP HANA to do the job (PressBox 2, September 2015)

November 2016
Lidl rolls out SAP for Retail powered by SAP HANA with KPS (Retail Times, 9 November 2016)

July 2018
Lidl stops million-dollar SAP project for inventory management (CIO, in German, 18 July 2018)

Lidl cancels SAP introduction after spending 500M Euro and seven years (An Oracle Executive, via Linked-In, 20 July 2018) 
Lidl software disaster another example of Germany’s digital failure (Handelsblatt Global, 30 July 2018)

I don't have any inside information about this project, but I have seen other large programmes fail because of the challenges of process standardization. When you are spending so much money on the technology, people across the organization may start to think of this as primarily a technology project. Sometimes it is as if the knowledge of how to run the business is no longer grounded in the organization and its culture but (by some form of transference) is located in the software. To be clear, I don't know if this is what happened in this case.

Also to be clear, some organizations have been very successful at process standardization. This is probably more to do with management style and organizational culture than technology choices alone.

Writing in Handelsblatt Global, Florian Kolf and Christof Kerkmann suggest that Lidl's core mentality was "but this is how we always do it". Alexander Posselt refers to Schicksalsgemeinschaften, which can be roughly translated as collective wilful blindness. Kolf and Kerkmann also make a point related to the notion of shearing layers.
Altering existing software is like changing a prefab house, IT experts say — you can put the kitchen cupboards in a different place, but when you start moving the walls, there’s no stability.
But at least with a prefab house, it is reasonably clear what counts as Cupboard and what counts as Wall. Whereas with COTS software, people may have widely different perceptions about which elements are flexible and which elements need to be stable. So the IT experts may imagine it's cheaper to change the business process than the software, while the business imagines it's easier and quicker to change the software than the business process.

What will Lidl do now? Apparently it plans to fall back on its old ERP system, at least in the short term. It's hard to imagine that Lidl is going to be in a hurry to burn that amount of cash on another solution straightaway. (Sorry Oracle!) But the frustrations with the old system are surely going to get greater over time, and Lidl can't afford to spend another seven years tinkering around the edges. So what's the answer? Organic planning perhaps?


Thanks to @EnterprisingA for drawing this story to my attention.

Slideshare: Organic Planning (September 2008), Next Generation Enterprise Architecture (September 2011)

Related Posts: SOA and Holism (January 2009), Differentiation and Integration (May 2010), EA Effectiveness and Process Standardization (August 2012), Agile and Wilful Blindness (April 2015).


Updated 31 August 2018

Tuesday, July 24, 2018

Evidence-Based Planning

Everybody's favourite internet-book-retailer-cum-cloud-computing-giant is planning for a wide range of outcomes after Brexit.
"Like any business, we consider a wide range of scenarios in planning discussions so that we’re prepared to continue serving customers and small businesses who count on Amazon, even if those scenarios are very unlikely," a spokesperson said.

However, a Government spokesperson dismissed speculation about civil unrest, saying
"Where is the evidence to suggest that would happen?"

To which one might counter

"Where is the evidence to suggest that wouldn't happen?"



There is a methodological gulf between these two positions. One is planning for things you can't prove won't happen. The other is NOT planning for things you can't prove WILL happen.

The political problem with planning for things that might not happen is that people may criticize you for wasting time and money on something that didn't happen. Whereas if you fail to plan for something that is unlikely to happen, and then it does happen, you can appeal to bad luck. Or the wrong kind of snow.

As with other modes of decision-making, planning simply to avoid censure is not necessarily conducive to good outcomes.


Gareth Corfield, I predict a riot: Amazon UK chief foresees 'civil unrest' for no-deal Brexit (The Register, 23 July 2018)

Rob Davies, No-deal Brexit risks 'civil unrest', warns Amazon's UK boss (The Guardian, 23 July 2018)

Related Post: Decision-Making Models (March 2017)

Friday, June 08, 2018

Data and Intelligence Principles From Major Players

The purpose of this blogpost is to enumerate the declared ethical positions of major players in the data world. This is a work in progress.




Google

In June 2018, Sundar Pichai (Google CEO) announced a set of AI principles for Google. The statement includes seven principles, four application areas that Google will avoid (including weapons), references to international law and human rights, and a commitment to a long-term sustainable perspective.

https://www.blog.google/topics/ai/ai-principles/


Also worth noting the statement on AI ethics and social impact published by DeepMind last year. (DeepMind was acquired by Google in 2014 and is now a subsidiary of Google parent Alphabet.)

https://deepmind.com/applied/deepmind-ethics-society/research/



IBM

In January 2017, Ginni Rometty (IBM CEO) announced a set of Principles for the Cognitive Era.

https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/

This was followed up in October 2017, with a more detailed ethics statement for data and intelligence, entitled Data Responsibility @IBM.

https://www.ibm.com/blogs/policy/dataresponsibility-at-ibm/



Microsoft

In January 2018, Brad Smith (Microsoft President and Chief Legal Officer) announced a book called The Future Computed: Artificial Intelligence and its Role in Society, to which he had contributed a foreword.

https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/



Twitter


Jack Dorsey (Twitter CEO) asked the Twitterverse whether Google's AI principles were something the tech industry as a whole could rally around (via The Register, 9 June 2018).



Selected comments

These comments are mostly directed at the Google principles, because these are the most recent. However, many of them apply equally to the others. Commentators have also remarked on the absence of ethical declarations from Amazon.


Many commentators have welcomed Google's position on military AI, and congratulated those Google employees who lobbied for discontinuing its work with the US Department of Defense analysing drone footage, known as Project Maven. See @kateconger, Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program (Gizmodo, 1 June 2018) and Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance (Gizmodo, 7 June 2018).

Interesting thread from former Googler @tbreisacher on the new principles (HT @kateconger)

@EricNewcomer talks about What Google's AI Principles Left Out (Bloomberg 8 June 2018). He reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises", complains that the Google principles are "peppered with lawyerly hedging and vague commitments", and asks about governance - "who decides if Google has fulfilled its commitments".

@katecrawford (Twitter 8 June 2018) also asks about governance. "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?" And @mer__edith (Twitter 8 June 2018) calls for "strong governance, independent external oversight and clarity".

Andrew McStay (Twitter 8 June 2018) asks about Google's business model. "Please tell me if you spot any reference to advertising, or how Google actually makes money. Also, I’d be interested in knowing if Government “work” dents reliance on ads."

Earlier, in relation to DeepMind's ethics and social impact statement, @riptari (Natasha Lomas) suggested that "it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts" (TechCrunch October 2017). See also my post on Conflict of Interest (March 2018).

@rachelcoldicutt asserts that "ethical declarations like these need to have subjects. ... If they are to be useful, and can be taken seriously, we need to know both who they will be good for and who they will harm." She complains that the Google principles fail on these counts. (Tech ethics, who are they good for? Medium 8 June 2018)


Related posts

Why Responsibility by Design Now? (October 2018)


Updated 11 June 2018. Link to later post added 18 October 2018.

Tuesday, June 05, 2018

Responsibility by Design

Over the past twelve months or so, we have seen a big shift in the public attitude towards new technology. More people are becoming aware of the potential abuses of data and other cool stuff. Scandals involving Facebook and other companies have been headline news.

Security professionals have been pushing the idea of security by design for ages, and the push to comply with GDPR has made a lot of people aware of privacy by design. Responsibility by design (RbD) represents a logical extension of these ideas to include a range of ethical issues around new technology.

Here are some examples of the technologies that might be covered by this.

  • Big Data - benefits such as Personalization; dangers such as Invasion of Privacy; principles such as Consent
  • Algorithms - benefits such as Optimization; dangers such as Algorithmic Bias; principles such as Fairness
  • Automation - benefits such as Productivity; dangers such as Fragmentation of Work; principles such as Human-Centred Design
  • Internet of Things - benefits such as Cool Devices; dangers such as Weak Security; principles such as Ecosystem Resilience
  • User Experience - benefits such as Convenience; dangers such as Dark Patterns and Manipulation; principles such as Accessibility and Transparency


Ethics is not just a question of bad intentions, it includes bad outcomes through misguided action. Here are some of the things we need to look at.
  • Unintended outcomes - including longer-term or social consequences. For example, platforms like Facebook and YouTube are designed to maximize engagement. The effect of this is to push people into progressively more extreme content in order to keep them on the platform for longer.
  • Excluded users - this may be either deliberate (we don't have time to include everyone, so let's get something out that works for most people) or unwitting (well it works for people like me, so what's the problem)
  • Neglected stakeholders - people or communities that may be indirectly disadvantaged - for example, a healthy politics that may be undermined by the extremism promoted by platforms such as Facebook and YouTube.
  • Outdated assumptions - we used to think that data was scarce, so we grabbed as much as we could and kept it for ever. We now recognize that data is a liability as well as an asset, and we now prefer data minimization - only collect and store data for a specific and valid purpose. A similar consideration applies to connectivity. We are starting to see the dangers of a proliferation of "always on" devices, especially given the weak security of the IoT world. So perhaps we need to replace a connectivity-maximization assumption with a connectivity minimization principle. There are doubtless other similar assumptions that need to be surfaced and challenged.
  • Responsibility break - potential for systems being taken over and controlled by less responsible stakeholders, or the chain of accountability being broken. This occurs when the original controls are not robust enough.
  • Irreversible change - systems that cannot be switched off when they are no longer providing the benefits and safeguards originally conceived.


Wikipedia: Algorithmic Bias (2017), Dark Pattern (2017), Privacy by Design (2011), Secure by Design (2005), Weapons of Math Destruction (2017). The date after each page shows when it first appeared on Wikipedia.

Ted Talks: Cathy O'Neil, Zeynep Tufekci, Sherry Turkle

Related Posts: Pax Technica (November 2017), Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018)



Updated 12 June 2018

Monday, June 04, 2018

Outdated Assumptions - Connectivity Hunger

Behaviours developed in a state of scarcity may cease to be appropriate in a state of abundance. Our stone age ancestors struggled to get enough energy-rich food, so they acquired a taste for food with a strong energy hit. We inherited a greed for sweet and fatty foods, and can now stuff our faces on delicacies our stone age ancestors never knew, such as ice-cream and cheesecake.

***

So let's talk about data. Once upon a time, data processing systems struggled to get enough data, and long-term data storage was expensive, so we were told to regard data as an asset. People learned to grab as much data as they could, and keep it until the data storage was full. But the greed for data was always moderated by the cost of collection, storage and retrieval, as well as the limited choice of data that was available in the first place.

Take away the assumption of data scarcity and cost, and our greed for data becomes problematic. We now recognize that data (especially personal data) can be a liability as much as an asset, and have become wedded to the principle of data minimization - only collecting the data you need, and only keeping it as long as you need.
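As a simplified sketch of what data minimization can look like in practice (the purposes, field names and retention periods below are invented for illustration): collection is restricted to fields justified by a declared purpose, and records are discarded once that purpose no longer applies.

```python
from datetime import datetime, timedelta, timezone

# Each declared purpose lists the fields it justifies and how long to keep them
PURPOSES = {
    "order_fulfilment": {"fields": {"name", "address", "order_id"},
                         "retention": timedelta(days=90)},
    "fraud_detection":  {"fields": {"order_id", "payment_hash"},
                         "retention": timedelta(days=30)},
}

def collect(raw_record: dict, purpose: str) -> dict:
    """Keep only the fields justified by the stated purpose; drop the rest."""
    allowed = PURPOSES[purpose]["fields"]
    return {k: v for k, v in raw_record.items() if k in allowed}

def expired(stored_at: datetime, purpose: str) -> bool:
    """True once a record has outlived the retention period for its purpose."""
    return datetime.now(timezone.utc) - stored_at > PURPOSES[purpose]["retention"]

# A form that also happened to capture date of birth and browsing history
raw = {"name": "A", "address": "B", "order_id": "C",
       "date_of_birth": "1970-01-01", "browsing_history": ["..."]}
print(collect(raw, "order_fulfilment"))   # only name, address and order_id survive
```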

***

But data scarcity is not the only outdated assumption that still influences our behaviour. Let's also talk about connectivity. Once upon a time, connectivity was intermittent, slow, unreliable. Hungry for greater connectivity, computer scientists dreamed of a world where everything was always on. More recently, Facebook has argued that Connectivity is a Human Right. (But you can only read this document if you have a Facebook account!)

But as with an overabundance of data, we may experience an overabundance of connectivity. Thus we are starting to realise the downside of the "always on", not just in the highly insecure world of the Internet of Things (Rainie and Anderson) but also in corporate computing (Ben-Meir, Hill).

Increasingly, products and services are being designed for "always on" operation. Ben-Meir notes Apple’s assertion that constant connectivity is essential for features such as AirDrop and AirPlay, and only today a colleague was grumbling to me about the downgrading of offline functionality in Microsoft Outlook.

Perhaps therefore, similar to the data minimization principle, there needs to be a network minimization principle. The wider the network, the larger the scope of responsibility. Or as Bruce Schneier puts it, "the more we network things together, the more vulnerabilities on one thing will affect other things". So don’t just connect because you can. Connect for a reason, disconnect by default, support offline functionality and disruption-tolerance, prefer secure hubs to insecure peer-to-peer.
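By way of illustration (a sketch only; the policy fields and reasons are invented, not taken from any standard), a network minimization principle might be expressed as a default-off connectivity policy in a device or service configuration:

```python
from dataclasses import dataclass, field

@dataclass
class ConnectivityPolicy:
    """Disconnect by default: every connection needs a declared reason."""
    always_on: bool = False                    # no standing connection
    offline_functionality: bool = True         # still works when disconnected
    allowed_reasons: set = field(
        default_factory=lambda: {"firmware_update", "telemetry_upload"})

    def may_connect(self, reason: str) -> bool:
        return reason in self.allowed_reasons

policy = ConnectivityPolicy()
print(policy.may_connect("firmware_update"))   # True: connect for a reason
print(policy.may_connect("ad_tracking"))       # False: not a declared purpose
```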

Bruce Schneier again: "We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized. If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much."



Elad Ben-Meir, How an 'Always-On' Culture Compromises Corporate Security (Info Security, 2 November 2017)

Paul Hill, Always-on Access Brings Always-Threatening Security Risks (System Experts, 25 June 2015)

Lee Rainie and Janna Anderson, The Internet of Things Connectivity Binge: What Are the Implications? (Pew Research Centre, 6 June 2017)

Bruce Schneier, Click Here to Kill Everyone (New York Magazine, 27 January 2017)

Maeve Shearlaw, Mark Zuckerberg says connectivity is a basic human right – do you agree? (Guardian 3 Jan 2014)


Thanks to @futureidentity for useful discussion

Saturday, April 28, 2018

Be the Change

Anyone fancy a job as Head of Infrastructure? Here is the job description, posted to Linked-In earlier this week.

We're responsible for "IT Change", including the end to end architecture, deployment and maintenance of IT infrastructure technologies across [organization]. We’re the first technical point of contact for people in [organization] who want to speak to the CIO function. We take business requirements and architect solutions, then work with [group IT] to input the solution into our data centres.
We provide direction, thought leadership, guidance and subject matter expertise on our IT estate to make sure we get the maximum value from our investment in our IT. We do this by defining our IT strategy and aligning it with Group IT, producing technology roadmaps and identifying and recommending IT solution opportunities, supporting business initiatives and ideas, and documenting and managing our architecture assets.
The Head of Infrastructure is a key leadership role in the CIO and critical to the delivery of both customer and partner facing technology. Working closely with our technology supplier, group IT, CISO and Service Management teams, this leader will be accountable for the end to ends analysis, design, build, test and implementation of; Platforms and Middleware, Network and Communications, Cloud Services, Data Warehouse and End User Services.
https://www.linkedin.com/jobs/view/633114556/

The job description contains a number of key words and phrases that architects should be comfortable with - direction, strategy, alignment, thought leadership, roadmaps, architecture assets.

But perhaps the first clue that there may be something amiss with this position is the fact that "IT Change" is in quotes. (As if to say that in IT, nothing really changes.)

The Register has contacted the person who is (according to Linked-In) currently holding this position. Is he moving on, moving up? Could this vacancy be connected in any way with recent IT difficulties facing the organization? (No answer reported. Curious.)

The recent IT difficulties facing this particular organization have come to the attention of politicians and the media. After the chair of the Treasury Select Committee described the situation as having "all the hallmarks of an IT meltdown", the word "meltdown" is now the descriptor of choice for journalists covering the story.

But help is at hand: IBM has kindly volunteered to help sort out the mess. So we can guess what "working closely with our technology supplier" might look like.




Karl Flinders, TSB IT meltdown has the makings of an epic (Computer Weekly, 25 April 2018)

Samuel Gibbs, Warning signs for TSB's IT meltdown were clear a year ago – insider (The Guardian, 28 April 2018)

Kat Hall, Newsworthy Brit bank TSB is looking for a head of infrastructure (The Register, 27 April 2018)

Stuart Sumner, TSB brings in IBM in attempt to resolve IT crisis (Computing, 26 April 2018)