Cloud Street

Thursday, September 29, 2005

Know what I mean

Back here, I wrote:
Tagging, I'm suggesting, isn't there to tell us about stuff: it's there to tell us about what people say about stuff. As such, it performs rather poorly when you're asking "where is X?" or "what is X?", and it comes into its own when you're asking "what are people saying about X?"
This relates back to my earlier argument that all knowledge is cloud-shaped, and that tagging is simply giving us a live demonstration of how the social mind works. In other words, all there is is "what people are saying about X" - but some conversations have been going on longer than others. Some conversations, in fact, have developed assumptions, artefacts, structures and systems within and around which the conversation has to take place. The conversation carried on in the medium of tagging isn't at that stage yet, perhaps, but it will be - the interesting question is about the nature of those artefacts and structures.
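To make that a bit more concrete: under the hood, a tagging system is just a pile of statements of the form "this user says this item is about this". Here's a toy sketch in Python (my own made-up data and function names, not any real folksonomy API) of why "what are people saying about X?" is the one question such a pile answers naturally:

    from collections import defaultdict

    # Every tagging event is a statement about a statement:
    # who said what about which item. (Made-up data for illustration.)
    taggings = [
        ("alice", "http://example.org/paper", "folksonomy"),
        ("bob",   "http://example.org/paper", "hype"),
        ("carol", "http://example.org/paper", "folksonomy"),
    ]

    def what_are_people_saying_about(item):
        """Tally the tags applied to an item - the one query a tag store answers well."""
        counts = defaultdict(int)
        for user, tagged_item, tag in taggings:
            if tagged_item == item:
                counts[tag] += 1
        return dict(counts)

    # "What is X?" gets no authoritative answer here, only a tally of assertions:
    print(what_are_people_saying_about("http://example.org/paper"))
    # -> {'folksonomy': 2, 'hype': 1}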

Now (with thanks to Anne Galloway) over to Dan Sperber.
When, say, vervet monkeys communicate among themselves, one vervet monkey might spot a leopard and emit an alarm cry that indicates to the other monkeys in his group that there's a leopard around. The other vervet monkeys are informed by this alarm cry of the presence of a leopard, but they're not particularly informed of the mental state of the communicator, and they don't give a damn about it. The signal puts them in a cognitive state of knowledge about the presence of a leopard, similar to that of the communicating monkey — here you really have a smooth coding-decoding system.

In the case of humans, when we speak we're not interested per se in the meaning of the words; we register what the word means as a way to find out what the speaker means. Speaker’s meaning is what's involved. Speaker’s meaning is a mental state of the speaker, an intention he or she has to share with us some content. Human communication is based on the ability we have to attribute mental states to others, to want to change the mental states of others, and to accept that others change ours.

When I communicate with you I am trying to change your mind. I am trying to act on your mental state. I'm not just putting out a kind of signal for you to decode. And I do that by providing you with evidence of the mental state in which I want to put you and evidence of my intention to do so. The role of what is often known in cognitive science as "theory of mind," that is the uniquely human ability to attribute complex mental states to others, is as much a basis of human communication as is language itself.

I am full of admiration for the mathematical theory of information and communication, the work of Shannon, Weaver, and others, and it does give a kind of very general conceptual framework which we might take advantage of. But if you apply it directly to human communication, what you get is a mistaken picture, because the general model of communication you find is a coding-decoding model of communication, as opposed to this more constructive and inferential form of communication which involves inferring the mental state of others, and that's really characteristic of humans.
[...]
For Dawkins, you can take the Darwinian model of selection and apply it almost as is to culture. Why? Because the basic idea is that, just as genes are replicators, bits of culture that Dawkins called “memes” are replicators too. If you take the case of population genetics, the causal mechanisms involved split into two subsets. You have the genes, which are extremely reliable mechanisms of replication. On the other hand, you have a great variety of environmental factors — including organisms which are both expressions of genes and part of their environment — environmental factors that affect the relative reproductive success of the genes. You have then on one side this extremely robust replication mechanism, and on the other side a huge variety of other factors that make these competing replication devices more or less successful. Translate this into the cultural domain, and you'll view memes, bits of culture, as again very strong replication devices, and all the other factors, historical, ecological, and so on, as contributing to the relative success of the memes.

What I'm denying, and I've mentioned this before, is that there is a basis for a strong replication mechanism either in cognition or in communication. It's much weaker than that. As I said, preservative processes are always partly constructive processes. When they don’t replicate, this does not mean that they make an error of copying. Their goal is not to copy. There are transformations in the process of transmission all the time, and also in the process of remembering and retrieving past, stored information, and these transformations are part of the efficient working of these mechanisms. In the case of cultural evolution, this yields a kind of paradox. On the one hand, of course, we have macro cultural stability — we do see the same dish being cooked, the same ideologies being adopted, the same words being used, the same song being sung. Without some relatively high degree of cultural stability — which was even exaggerated in classical anthropology — the very notion of culture wouldn't make sense.

How then do we reconcile this relative macro stability at the cultural level, with a lack of fidelity at the micro level? ... The answer, I believe, is linked precisely to the fact that in humans, transmission is achieved not just by replication, but also by construction. ... Although indeed when things get transmitted they tend to vary with each episode of transmission, these variations tend to gravitate around what I call "cultural attractors", which are, if you look at the dynamics of cultural transmission, points or regions in the space of possibilities, towards which transformations tend to go. The stability of cultural phenomena is not provided by a robust mechanism of replication. It's given in part, yes, by a mechanism of preservation which is not very robust, not very faithful (and it's not its goal to be so). And it’s given in part by a strong tendency for the construction — in every mind at every moment — of new ideas, new uses of words, new artifacts, new behaviors, to go not in a random direction, but towards attractors. And, by the way, these cultural attractors themselves have a history.

There's more - much more - but what I've quoted brings out two key points. Firstly, communication is not replication: in conversation, there is no smooth transmission of information from speaker to listener, but a continuing collaborative effort to present, construct, re-present and reconstruct shared mental models. The overlap between this and the 'knowledge cloud' model is evident. Secondly, construction has a context: the process of model-building (or 'thinking' as we scientists sometimes call it) is always creative, always innovative, and always framed by pre-existing cultural 'attractors'. And these cultural attractors themselves have a history - you could say that people make their own mental history, but they do not do so in circumstances of their own choosing...

This is tremendously powerful stuff - from my (admittedly idiosyncratic) philosophical standpoint it suggests a bridge between Schutz, Merleau-Ponty and Bourdieu (and I've been looking for one of those for ages). My only reservation relates to Sperber's stress on speaker's meaning ... a mental state of the speaker. I think it would enhance Sperber's model, rather than marring it, to focus on mental models as they are constructed within communication rather than as they exist within the speaker's skull - in other words, to bracket the existence of mental states external to communicative social experience. On this point Schutz converges, oddly, with Wittgenstein.

Sperber's argument tends to underpin my intuition on tagging and knowledge clouds: if all communication is constructive - if there is no simple transmission or replication of information - then conversation really is where knowledge develops, or more precisely where knowledge resides. Sperber also helps explain the process by which some conversations become better-established than others; we can see this as a feedback process, involving the development of a domain-specific set of 'attractors'. These would perhaps serve as a version of Rorty's 'final vocabulary': a shared and unquestionable set of assumptions, a domain-specific backdrop without which the conversation would make no sense.
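As a back-of-the-envelope illustration of that feedback (a toy model of my own, not anything Sperber offers): treat a version of an idea as a single number, pass it down a chain of retellings with plenty of noise, and compare a chain with no attractor against one where each retelling also nudges the version towards a conventional form. Unfaithful copying on its own wanders; unfaithful copying plus attraction stays put - macro stability without micro fidelity:

    import random

    def transmit(generations=200, attractor=None, pull=0.3, noise=0.5, seed=1):
        """Pass a 'version' of an idea (a number) down a chain of retellings.
        Each retelling copies with noise; if an attractor is given, each
        retelling also reconstructs the version part-way towards it."""
        random.seed(seed)
        version = 0.0
        for _ in range(generations):
            version += random.gauss(0, noise)            # low-fidelity copying
            if attractor is not None:
                version += pull * (attractor - version)  # reconstruction towards the attractor
        return round(version, 2)

    print(transmit())                # no attractor: copying errors accumulate and the idea drifts
    print(transmit(attractor=10.0))  # attractor at 10: still noisy, but every version stays near 10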

One final thought from Sperber:
The idea of God isn't a supernatural idea. If the idea of God were supernatural, then religion would be true.
Well, I liked it.
