The tyranny of the number-crunchers: technology isn’t value-neutral, so why should algorithms be?

How many times do we hear the phrase "the numbers don't lie"?

A thoughtful piece this morning from Seth Godin talks about the way people point to algorithms and shrug helplessly whenever they're faced with an unpleasant result. Hey, what can you do? The numbers were crunched, and this is what the data tell us. So we're not putting a branch in this neighbourhood because we're not happy with the demographics, and people with certain characteristics get singled out for different treatment because they meet the profile, and, well, you know where this goes.

But that's not the end of the argument. Some years ago I was part of a discussion about technological innovation; a very smart and funny guy had just written a book crediting pornography, fast food, and militarism with many of the technological advances finding their way into our daily lives.

It was and still is a great read, but at the time, I found one of its basic assumptions problematic: the idea that technology is value-neutral. It’s neither good nor bad, the argument goes: it’s how we use it that matters. As I argued, this

… ignores the fact that like any other human endeavour — politics, literature, economics, religion, art — it reflects and embodies the values of the society from which it springs. Surveillance can be tied to the impulse to control; the flip side of efficiency can be read as exploitation. All these are reflections of a centralized, hierarchical and stratified approach. Who’s to say a more egalitarian society wouldn’t exhibit very different forms of technology?

The same argument applies to algorithms, or data formulae, or spreadsheets, or mathematical models. Because the laws of mathematics don't change — two and two will always equal four — there's a temptation to assume that any argument grounded in data, numbers, or mathematics can be divorced from emotion, sentiment, politics, or any other subjective value judgment. And because of this, it's too easy to think of them as immutable forces of nature, over which we have no control.

[Image: Richard Burton as O'Brien in 1984. "Two and two equal four? Not necessarily."]

But this can be misleading. Like any other human invention, mathematics and statistics are subject to the values, preferences, and assumptions of the people who design and use them, and the social and political conditions from which they arise. Context matters.

So when we're looking at mathematical models, risk assessments, balance sheets, or algorithms, we need to ask ourselves: what influenced their design? Why are these elements valued, but not those? Why is so much importance placed on this factor, but not that? At bottom, more often than not, we'll find subjective value judgments. Once again, this is why critical thinking matters: those algorithms and models are just as dependent on underlying assumptions, framing, and discourse as anything else.
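To make that concrete, here's a minimal sketch in Python of a toy credit-risk score. Everything in it is hypothetical: the feature names, the weights, and the applicant are invented for illustration. The arithmetic is objective; the choice of inputs and weights is not.

```python
# A toy "risk score". Every feature and weight here is hypothetical,
# invented for illustration. The value judgments live in which features
# were chosen and how they were weighted, not in the arithmetic.

def risk_score(applicant: dict) -> float:
    # Someone decided that postcode matters, and someone chose 0.5
    # rather than 0.1. Postcode often acts as a proxy for race or class,
    # so that single design choice smuggles in a demographic judgment.
    weights = {
        "postcode_default_rate": 0.5,   # why weight neighbourhood at all?
        "years_at_current_job": -0.3,   # quietly penalizes career changers
        "prior_defaults": 0.8,
    }
    return sum(weights[k] * applicant[k] for k in weights)

applicant = {
    "postcode_default_rate": 0.12,
    "years_at_current_job": 4,
    "prior_defaults": 0,
}

print(round(risk_score(applicant), 2))  # the sum is exact; the model is opinion
```

Change one weight and the same applicant gets a different verdict. That's precisely the sense in which, as we'll see below, "models are opinions embedded in mathematics."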

Especially worthwhile here: Cathy O'Neil's book Weapons of Math Destruction. A recent piece in the Guardian cites her work in breaking down the ostensible impartiality of "the numbers":

Contrary to popular opinion that algorithms are purely objective, O’Neil explains in her book that “models are opinions embedded in mathematics”. Think Trump is a hopeless candidate? That will affect your calculations. Think black American men are all criminal thugs? That affects the models being used in the criminal justice system, too.

Ultimately algorithms, according to O’Neil, reinforce discrimination and widen inequality, “using people’s fear and trust of mathematics to prevent them from asking questions”.

Back to Godin:

Google’s search results and news are the work of human beings. Businesses thrive or suffer because of active choices made by programmers and the people they work for. They have conflicting goals, but the lack of transparency leads to hiding behind the algorithm.

It's not just businesses: it's also individual human beings who match certain criteria, and entire demographically quantifiable communities. Algorithms, equations, and mathematical formulae are all crafted with certain assumptions built in, so remember that the next time you hear the argument "it's just what the data show."

It’s worth lingering on this, what with all the current controversy about the orange-haired guy and “fake news.” We’ve written previously about Facebook’s role, and its responsibility for the content it serves up; the image at the top of this post is a screencap from my feed just a few minutes ago. I’ve already posted it with the caption

Hey, algorithm boffins: why is Breitbart, the preferred safe space for Nazis, surfacing in my Trending items?

I’m not a mathematician or a statistics expert or an algorithm writer, but I can’t understand how anyone could profile me as the sort of person likely to click on that.

Godin again:

The priority of which Facebook news comes up is the work of a team of people. The defense of "the results just follow the algorithm" is a weak one, because there's more than one valid algorithm, more than one set of choices that can be made, more than one set of priorities.

The culture (our politics, our standards, our awareness of what’s happening around us) is being aggressively rewired by what we see, and what we see comes via an algorithm, one that was written by people.

There’s the takeaway. Algorithms and spreadsheets and quantitative analyses may operate according to the laws of mathematics, but they’re designed by people. And people are fallible. People have biases. People operate according to value choices and cultural norms and assumptions that they don’t always acknowledge or even recognize.
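Godin's "more than one valid algorithm" is easy to demonstrate. Here's another hypothetical sketch: two equally defensible rankings of the same invented stories (titles, click counts, and reliability scores are all made up). Neither is mathematically wrong; they simply encode different priorities.

```python
# Two defensible rankings of the same invented stories. Neither is
# "wrong" mathematically; they encode different editorial priorities.

stories = [
    {"title": "Outrage bait",     "clicks": 9000, "reliability": 0.2},
    {"title": "Budget analysis",  "clicks": 1200, "reliability": 0.9},
    {"title": "Celebrity gossip", "clicks": 7500, "reliability": 0.4},
]

# Priority #1: raw engagement. "The numbers" reward outrage.
by_engagement = sorted(stories, key=lambda s: s["clicks"], reverse=True)

# Priority #2: reliability-weighted engagement. Same data, different values.
by_trust = sorted(
    stories, key=lambda s: s["clicks"] * s["reliability"], reverse=True
)

print([s["title"] for s in by_engagement])
# ['Outrage bait', 'Celebrity gossip', 'Budget analysis']
print([s["title"] for s in by_trust])
# ['Celebrity gossip', 'Outrage bait', 'Budget analysis']
```

Whoever picks the sort key picks the front page.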

It’s never just about the numbers.
