We are your friends
I recently attended an "e-Government Question Time" session, organised in connection with this conference. There were some good points made: one speaker stressed the importance of engaging with the narratives which people build rather than assuming that the important facts can be read off from an accumulation of data; one questioner called the whole concept of 'e-government' into question, pointing out that the stress seemed to be entirely on using the Web/email/digital TV/texting/etc. as a mechanism for delivering services rather than as a medium for democratic exchanges. Much more typical, though, was the spin which the moderator put on this question as he passed it on to the panel:
That's a very good question - what about democracy? And conversely, if it's all democracy where does that leave leadership?
The evening was much less about democracy than it was about leadership - or rather, management. This starting-point produced some strikingly fallacious arguments, particularly in the field of privacy. The following statements were all made by one panellist; I won't single him out, as they were all endorsed by other panellists - and, in some cases, members of the audience. (And no, identifying him as male doesn't narrow it down a great deal. The people in the hall were 3:1 male to female (approximately), the people on stage 6:1 (precisely).)
I like to protect my own privacy, but I'm in the privileged position of having assets to protect. When you're looking at people who have got nothing, and in many cases aren't claiming benefits to which they're entitled, I don't think safeguarding their privacy should be our main concern.
At first blush this argument echoes the classic Marxist critique of the bourgeois definition of human rights - if we have the right to privacy, what about the right to a living wage? But instead of going from universalism to a broader (and hence more genuine) universalism, we've ended up with the opposite of universalism: you and I can worry about privacy, but it doesn't apply to them. Superficially radical, or at least populist - you can just hear David Blunkett coming out with something similar - this is actually a deeply reactionary argument: it treats the managed as a different breed from the people who manage them (I like to protect my own privacy, but...). Management Fallacy 1: 'they're not like us'.
We're talking about improving people's life chances. We need to make personal information more accessible - to put more access to personal information in the hands of the people who can change people's lives for the better.
Management Fallacy 2: 'we mean well'. If every intervention by a public servant were motivated by the best interests of the citizens, safeguards against improper intervention would not be required. And if police officers never stepped out of line, there'd be no need for a Police Complaints Commission. In reality, good intentions cannot be assumed: partly because the possibility of a corrupt or malicious individual getting at your data cannot be ruled out; partly because government agencies have other functions as well as safeguarding the citizen's interests, and those priorities may occasionally come into conflict; and partly because a government agency's idea of your best interests may not be the same as yours (see Fallacy 3). All of which means that the problem needs to be addressed at the other end, by protecting your data from people who don't have a specific reason to use it - however well-intentioned those people may be. One questioner spoke wistfully of the Data Protection Act getting in the way of creative, innovative uses of data. It's true that data mining technology now makes it possible to join the dots in some very creative and innovative ways. But if it's data about me, I don't think prior consent is too much to ask - and I don't think other people are all that different (see Fallacy 1).
I've got no objection to surrendering some of my civil liberties, so-called
Have to stop you there. Management Fallacy 3: 'it looks all right to me'. The speaker was a local government employee, but here he was speaking as a private individual - and his policy for handling his own private data doesn't concern me. But I would hope that, before he came to apply that policy more generally, he would reflect on how the people who would be affected might feel about surrendering their civil liberties, so-called. (Perhaps he could consult them, even.)
Carry on:
I've got no objection to surrendering some of my civil liberties, so-called, if it's going to prevent another Victoria Climbie case.
Management Fallacy 4: 'numbers don't lie'. (Or: 'Everything is measurable and what can be measured can be managed'.) This argument rests on a common statistical error, which can be illustrated with a hypothetical HIV test. Let's say that you've got a test which is 95% accurate - that is, out of every 100 people with HIV it will correctly identify 95 and mis-identify 5, and similarly for people who do not carry the virus. And let's say that you believe, from other sources, that 1,000 people in a town of 100,000 carry the virus. You administer the test to the entire town. If your initial assumption is correct, how many positive results will you get? And how confident can you be, in percentage terms, that someone who tests positive is actually HIV-positive?
The answers are 5900 and 16.1%. The test would identify 950 of the 1000 people with the virus, but it would also misidentify 4950 people who did not have it: consequently, anyone receiving a positive test result would have a five in six chance of actually being HIV-negative. What this points to is a fundamental problem with any attempt to identify rare phenomena in large volumes of data. If the frequency of the phenomenon you're looking for is, in effect, lower than the predictable rate of error, any positive result is more likely to be an error than not.
Contra McKinsey, I would argue that not everything can or should be measured, let alone managed on the basis of measurement. (If the data-driven approach to preventing another Climbie case sounds bad, imagine it with the addition of performance targets.) Some phenomena - particularly social phenomena - are not amenable to being captured through the collection of quantitative data, and shouldn't be treated as if they were.
What all these fallacies have in common is a self-enclosed, almost solipsistic conception of the task of management. With few exceptions, the speakers (and the questioners) talked in terms of meeting people's needs by delivering a pre-defined service with pre-defined goals, pre-defined techniques, pre-defined identities (me service provider, you service recipient). There were only occasional references to the exploratory, dialogic approach of asking people what their needs were and how they would like them to be met - despite the possibilities for work in this area which new technologies have created. But then, management is not dialogue.
Social software may start with connecting data, but what it's really about is connecting people - and connecting them in dialogue, on a basis of equality. If this goal gets lost, joining the dots may do more harm than good.