
Monday, February 28, 2022

Does the Algorithm have the Last Word?

In my post on the performativity of data (August 2021), I looked at some of the ways in which data and information can make something true. In this post, I want to go further. What if an algorithm can make something final?

I've just read a very interesting paper by the Canadian sociologist Arthur Frank, which traces the curious history of a character called Devushkin - from a story by Gogol via another story by Dostoevsky into some literary analysis by Bakhtin.

In Dostoevsky's version, Devushkin complained that Gogol's account of him was complete and final, leaving him no room for change or development, hopelessly determined and finished off, as if he were already quite dead.

For Bakhtin, all that is unethical begins and ends when one human being claims to determine all that another is and can be; when one person claims that the other has not, cannot, and will not change, that she or he will die just as she or he always has been. (Frank)

But that's pretty much what many algorithms do. Machine learning algorithms extrapolate from historical data, captured and coded in ways that reinforce the past, while more traditionally programmed algorithms simply betray the opinions and assumptions of their developers. For example, we see recruitment algorithms that select men with a certain profile while rejecting women with equal or superior qualifications - because that's what happened in the past, and the algorithm has no way of escaping from it.
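To make this concrete, here is a minimal sketch in Python, with invented data and a deliberately crude scoring rule, of how a model trained on biased hiring decisions reproduces them. Nothing here is taken from any real recruitment system.

from collections import defaultdict

# Historical hiring records: (gender, qualification score, hired?)
# Past decisions favoured men regardless of qualifications.
history = [
    ("M", 60, True), ("M", 55, True), ("M", 70, True), ("M", 50, False),
    ("F", 85, True), ("F", 80, False), ("F", 75, False), ("F", 65, False),
]

# "Training" here just memorises the historical hire rate per gender -
# a crude stand-in for what a real model extracts from such features.
counts = defaultdict(lambda: [0, 0])
for gender, _, hired in history:
    counts[gender][0] += hired
    counts[gender][1] += 1

def predict_hire(gender, score):
    # The learned rule: group hire rate dominates, qualifications barely count.
    hire_rate = counts[gender][0] / counts[gender][1]
    return hire_rate + 0.001 * score > 0.5

print(predict_hire("M", 55))   # True  - a modestly qualified man
print(predict_hire("F", 90))   # False - a highly qualified woman

The arithmetic is trivial, but the structure is the point: the strongest signal the model can learn is the past decision pattern, so the past decision pattern is what it reproduces.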

 


The inbuilt bias of algorithms has been widely studied. See for example Safiya Noble and Cathy O'Neil.

David Beer makes two points in relation to the performativity of algorithms. Firstly through their material interventions.

Algorithms might be understood to create truths around things like riskiness, taste, choice, lifestyle, health and so on. The search for truth becomes then conflated with the perfect algorithmic design – which is to say the search for an algorithm that is seen to make the perfect material intervention.

And secondly through what he calls discursive interventions.

The notion of the algorithm is part of a wider vocabulary, a vocabulary that we might see deployed to promote a certain rationality, a rationality based upon the virtues of calculation, competition, efficiency, objectivity and the need to be strategic. As such, the notion of the algorithm can be powerful in shaping decisions, influencing behaviour and ushering in certain approaches and ideals.

As Massimo Airoldi argues, both of these fall under what Bourdieu calls habitus - a set of ingrained dispositions that reproduces the status quo. And once the algorithm has decided your fate, what chance do you have of breaking free?



Massimo Airoldi, Machine Habitus: Towards a sociology of algorithms (Polity Press, 2022)

David Beer, The social power of algorithms (Information, Communication & Society, 20:1, 2017) 1-13, DOI: 10.1080/1369118X.2016.1216147

Safiya Noble, Algorithms of Oppression (New York University Press, 2018)

Cathy O'Neil, Weapons of Math Destruction (Crown, 2016)

Arthur W. Frank, What Is Dialogical Research, and Why Should We Do It? (Qualitative Health Research 15:7, 2005) 964-974, DOI: 10.1177/1049732305279078

Carissa Véliz, If AI Is Predicting Your Future, Are You Still Free? (Wired, 27 December 2021)

Related posts: Could we switch the algorithms off? (July 2017), Algorithms and Governmentality (July 2019), Algorithmic Bias (March 2021), On the performativity of data (August 2021)

Thursday, August 12, 2021

On the performativity of data

The philosopher J.L. Austin observed that words sometimes don't merely describe reality but enact something. A commonly cited example: when a suitably authorized person pronounces a couple married, it is the speaking of these words that makes the marriage real. Austin called this a performative utterance; later writers usually refer to this as performativity.

In this post, I want to explore some ways in which data and information may be performative. 

 

In my previous post on Data as Pictures, I mentioned the self-fulfilling power of labels. For example, when a person is labelled and treated as a potential criminal, this may make it more difficult for them to live as a law-abiding citizen, and they are therefore steered towards a life of crime. Thus the original truth of the data becomes almost irrelevant, because the data creates its own truth. Or as Bowker and Star put it, "classifications ... have material force in the world" (p39).

Many years ago, I gave a talk at King's College London which included some half-formed thoughts on the philosophy of information. I included some examples where it might seem rational to use information even if you don't believe it.

Keynes attributed the waves of optimism and pessimism that sweep through a market to something he called animal spirits. Where there is little real information, even false information may be worth acting upon. So imagine that a Wall Street astrologer publishes a daily star chart of the US president, and this regularly affects the stock market - not because many people actually believe in astrology, but because many people want to be one step ahead of the few people who do. Even if nobody takes astrology seriously, as long as everyone thinks that other people might, they will collectively act as if they do. Fiction functioning as truth.

(There was an astrologer in the White House during the Reagan administration, so this example didn't seem so far-fetched at that time. And I have now found a paper that suggests a correlation between astrology and stock markets.)
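For what it's worth, the second-order logic can be sketched in a few lines of Python. The threshold and the starting rumour below are invented; the point is only that behaviour driven by expectations of other people's beliefs converges on the same outcome as belief itself.

def fraction_selling(expected_sellers, threshold=0.3):
    # A trader sells when they expect enough OTHER traders to sell on the
    # astrologer's bearish chart - not because they believe the chart.
    return 1.0 if expected_sellers > threshold else 0.0

expectation = 0.4   # rumour: "some people trade on astrology"
for _ in range(5):  # expectations update on observed behaviour
    expectation = fraction_selling(expectation)

print(expectation)  # 1.0 - everyone sells, though nobody believes astrology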

For my second example, I imagined the head of a sugar corporation going on television to warn the public about a possible shortage of sugar. Consumers typically respond to this kind of warning by stockpiling, leaving the supermarket shelves empty of sugar. So this is another example of a self-fulfilling prophecy - a speech act that created its own truth.

I then went on to imagine the converse. Suppose the head of the sugar corporation went on television to reassure the public that there was no possibility of a sugar shortage. A significant number of consumers could reason either that the statement is false, or that even if the statement is true many consumers won't believe it. So to be on the safe side, better buy a few extra bags of sugar. Result - sugar shortage.

So here we have a case where two opposite statements appear to produce exactly the same result.
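A toy simulation makes the symmetry visible. All the probabilities and quantities below are invented, and real consumer behaviour is obviously messier; the sketch just shows both announcements pushing demand past supply.

import random

random.seed(42)

def simulate(announcement, consumers=10_000, stock=15_000):
    demand = 0
    for _ in range(consumers):
        if announcement == "shortage ahead":
            # Most consumers take the warning at face value and stockpile.
            stockpiles = random.random() < 0.6
        else:  # "no shortage"
            # Some disbelieve the reassurance; others believe it but expect
            # their neighbours not to - either way, they buy extra.
            stockpiles = random.random() < 0.4
        demand += 5 if stockpiles else 1   # bags of sugar bought
    return "shortage" if demand > stock else "fully stocked"

print(simulate("shortage ahead"))   # shortage
print(simulate("no shortage"))      # shortage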


Back in the 1980s I was talking about opinions, from a person with a known status or reputation, published or broadcast in what we now call traditional media. So what happens when these opinions are disconnected from the person and embedded in dashboards and algorithms? 

It's not difficult to find examples where data produces its own reality. If a recommendation algorithm identifies a new item as a potential best-seller, this item will be recommended to a lot of people and - not surprisingly - it becomes a best-seller. Obviously this doesn't work all the time, but it is hard to deny that these algorithms contribute significantly to the outcomes that they appear to predict. Meanwhile YouTube identifies people who may be interested in extreme political content, some of whom then become interested in extreme political content. And then there's Facebook's project to "connect the world". There are real-world effects here, generated by patterns of data.
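Here is a sketch of that feedback loop, with invented parameters: three near-identical items, an algorithm that gives its predicted hit 80% of the exposure, and shoppers who all have the same flat chance of buying whatever they are shown.

import random

random.seed(0)

sales = {"A": 10, "B": 9, "C": 8}          # near-identical starting points
predicted_hit = max(sales, key=sales.get)  # the algorithm backs "A"

for _ in range(10_000):                    # shopping sessions
    # The predicted hit gets most of the exposure...
    shown = predicted_hit if random.random() < 0.8 else random.choice(list(sales))
    # ...and every shopper has the same flat chance of buying what they see.
    if random.random() < 0.1:
        sales[shown] += 1

print(sales)  # "A" runs away with it

The purchase probability is identical for all three items; the only asymmetry is exposure. The prediction creates the very sales figures that later appear to vindicate it.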

 

Another topic to consider is the effects produced by measurement and targets. On the one hand, there is a view that measuring performance helps to motivate improvements, which is why you often see performance dashboards prominently displayed in offices. On the other hand, there is a widespread concern that excessive focus on narrowly defined targets ("target culture") distorts or misdirects performance - for example, teachers teaching to the test. Hannah Fry's article contains several examples of this effect, which is sometimes known as Goodhart's Law. Either way, there is an expectation that measuring something has a real-world effect, whether positive or negative.
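Goodhart's Law is easy to caricature in code. In the sketch below, all the coefficients are invented: shifting effort from teaching to test preparation raises the measured score while lowering the thing the score was supposed to measure.

def outcomes(test_prep_share):
    teaching = 1 - test_prep_share
    true_learning = 10 * teaching                       # only teaching builds learning
    test_score = 10 * teaching + 15 * test_prep_share   # but prep games the test
    return test_score, true_learning

for prep in (0.0, 0.5, 1.0):
    score, learning = outcomes(prep)
    print(f"prep {prep:.0%}: test score {score:.1f}, true learning {learning:.1f}")

# prep 0%: test score 10.0, true learning 10.0
# prep 50%: test score 12.5, true learning 5.0
# prep 100%: test score 15.0, true learning 0.0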

If you can think of any other examples of the performativity of data, please comment below. 



Geoffrey Bowker and Susan Leigh Star, Sorting Things Out: Classification and Its Consequences (MIT Press, 1999)

Hannah Fry, What Data Can't Do (New Yorker, 22 March 2021)

Wilfred M. McClay, Performative: How the Meaning of a Word Became Corrupted (Hedgehog Review 23:2, Summer 2021)

Aurora Murgea, Mercury Retrograde Effect in Capital Markets: Truth or Illusion? (Timisoara Journal of Economics and Business, 13 October 2016) 

Richard Veryard, Speculation and Information: The Epistemology of Stock Market Fluctuations (Invited presentation, King's College London, 16 November 1988). Warning - the theory needs a complete overhaul, but the examples are interesting.

Wikipedia: Animal Spirits, Goodhart's Law, Performativity, Target Culture

Stanford Encyclopedia of Philosophy: J.L. Austin, Speech Acts

Related posts: Target Setting: What You Measure Is What You Get (April 2005), Ethical Communication in a Digital Age (November 2018), Algorithms and Governmentality (July 2019), Data as Pictures (August 2021), Can Predictions Create Their Own Reality? (August 2021), Does the algorithm have the last word? (February 2022). Rob Barratt of Bodmin kindly contributed a poem on target culture in the comments below my Target Setting post.

Links added 27 August 2021, astrology link added 3 April 2022