Thursday, December 14, 2017

Expert Systems

Is there a fundamental flaw in AI implementation, as @jrossCISR suggests in her latest article for Sloan Management Review? She and her colleagues have been studying how companies insert value-adding AI algorithms into their processes. A critical success factor for the effective use of AI algorithms (or what we used to call expert systems) is the ability to partner smart machines with smart people, and this calls for changes in working practices and human skills.

To illustrate how people can use probabilistic output to guide business actions, Ross turns to smart recruitment.
But what’s the next step when a recruiter learns from an AI application that a job candidate has a 50% likelihood of being a good fit for a particular opening?

Let's unpack this. The AI application indicates that at this point in the process, given the information we currently have about the candidate, we have low confidence in predicting this candidate's performance on the job. Unless we just toss a coin and hope for the best, the obvious next step is to try to obtain more information and insight about the candidate.

But which information is most relevant? An AI application (guided by expert recruiters) should be able to identify the most efficient path to reaching the desired level of confidence. What are the main reasons for our uncertainty about this candidate, and what extra information would make the most difference?

Simplistic decision support assumes you only have one shot at making a decision. The expert system makes a prognostication, and then the human accepts or overrules its advice.

But in the real world, decision-making is often a more extended process. So the recruiter should be able to ask the AI application some follow-up questions. What if we bring the candidate in for another interview? What if we run some aptitude tests? How much difference would each of these options make to our confidence level?
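This kind of follow-up question has a standard formulation: expected information gain. The sketch below is purely illustrative (the sensitivity and specificity figures for a second interview or an aptitude test are invented for the example); it treats "good fit" as a Bernoulli belief and asks which hypothetical follow-up action would, on average, shrink our uncertainty the most.

```python
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli 'good fit' belief."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def posterior(prior, sensitivity, specificity, positive):
    """Bayesian update of P(good fit) after a test result."""
    if positive:
        num = sensitivity * prior
        den = num + (1 - specificity) * (1 - prior)
    else:
        num = (1 - sensitivity) * prior
        den = num + specificity * (1 - prior)
    return num / den

def expected_information_gain(prior, sensitivity, specificity):
    """Expected drop in uncertainty (bits) from running one test,
    averaged over its possible outcomes."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    h_after = (p_pos * entropy(posterior(prior, sensitivity, specificity, True))
               + (1 - p_pos) * entropy(posterior(prior, sensitivity, specificity, False)))
    return entropy(prior) - h_after

# Hypothetical accuracies for each follow-up option (illustrative only).
options = {
    "second interview": (0.70, 0.70),
    "aptitude test":    (0.85, 0.60),
}
prior = 0.5  # the 50% likelihood from the AI application
for name, (sens, spec) in options.items():
    print(name, round(expected_information_gain(prior, sens, spec), 3))
```

With these made-up numbers, the aptitude test buys more expected certainty than the second interview, even at the same cost in effort; an AI application guided by expert recruiters could rank real options the same way.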

When recruiting people for a given job, it is not just that the recruiters don't know enough about the candidate; they also may not have much detail about the requirements of the job. Exactly what challenges will the successful candidate face, and how will they interact with the rest of the team? So instead of shortlisting the candidates that score most highly on a given set of measures, it may be more helpful to shortlist candidates with a range of different strengths and weaknesses, as this allows interviewers to creatively imagine how each will perform. There are many more probabilistic calculations we could get the algorithms to perform, if we can feed enough historical data into the machine learning hopper.
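One simple way to operationalise "a range of different strengths and weaknesses" is greedy max-min selection: start with the top scorer, then repeatedly add the candidate whose strength profile is farthest from everyone already shortlisted. The candidate names, scores, and profile vectors below are invented for illustration.

```python
# Greedy max-min shortlisting: favour candidates whose strength profiles
# differ most from those already chosen, not just the top scorers.

def distance(a, b):
    """Euclidean distance between two strength profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def diverse_shortlist(candidates, k):
    """Pick k candidates: best overall score first, then greedily add
    the candidate farthest (max-min distance) from the shortlist so far."""
    remaining = sorted(candidates, key=lambda c: c["score"], reverse=True)
    shortlist = [remaining.pop(0)]
    while len(shortlist) < k and remaining:
        best = max(remaining,
                   key=lambda c: min(distance(c["profile"], s["profile"])
                                     for s in shortlist))
        remaining.remove(best)
        shortlist.append(best)
    return shortlist

candidates = [
    {"name": "A", "score": 0.82, "profile": (0.9, 0.3, 0.4)},  # technical depth
    {"name": "B", "score": 0.80, "profile": (0.8, 0.4, 0.4)},  # similar to A
    {"name": "C", "score": 0.74, "profile": (0.3, 0.9, 0.5)},  # communication
    {"name": "D", "score": 0.70, "profile": (0.4, 0.4, 0.9)},  # domain knowledge
]
print([c["name"] for c in diverse_shortlist(candidates, 3)])  # A, C, D
```

A pure top-3 by score would return A, B and C, with A and B nearly interchangeable; the max-min version swaps B for D and gives the interviewers three genuinely different profiles to reason about.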

Ross sees the true value of machine learning applications to be augmenting intelligence - helping people accomplish something. This means an effective collaboration between one or more people and one or more algorithms. Or what I call organizational intelligence.


Postscript (18 December 2017)

In his comment on Twitter, @AidanWard3 extends the analysis to multiple stakeholders.
This broader view brings some of the ethical issues into focus, including asymmetric information and algorithmic transparency.


Jeanne Ross, The Fundamental Flaw in AI Implementation (Sloan Management Review, 14 July 2017)
