Monday, October 22, 2018

emergent morality


There's old work from Kropotkin, based on observing animal behaviour and seeing both cooperation and sacrifice across repeated encounters, that suggested (without invoking any magic/superstition) that, while the gene may be selfish, that isn't all there is to society.

A simple one-shot game theoretic approach doesn't deal with this, so people moved on to iterated games, and most famously (at least from my reading) showed that the prisoner's dilemma is not a dilemma at all when you consider multiple iterations (repeat offenders learn to "trust" one another).
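To make that concrete, here's a minimal sketch (the payoff numbers and strategies are my own toy choices, not from any particular paper). In a single round, defecting against a cooperator pays best; over a hundred rounds, a pair of tit-for-tat players leaves the defector far behind.

    # Toy iterated prisoner's dilemma with the usual payoff ordering T > R > P > S.
    PAYOFF = {('C', 'C'): (3, 3),   # mutual cooperation (R)
              ('C', 'D'): (0, 5),   # sucker / temptation (S, T)
              ('D', 'C'): (5, 0),
              ('D', 'D'): (1, 1)}   # mutual defection (P)

    def always_defect(my_history, their_history):
        return 'D'

    def tit_for_tat(my_history, their_history):
        # Cooperate first, then copy whatever the other side did last time.
        return their_history[-1] if their_history else 'C'

    def play(strategy_a, strategy_b, rounds):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(always_defect, tit_for_tat, 1))     # (5, 0): one shot, defection pays
    print(play(always_defect, tit_for_tat, 100))   # (104, 99): exploit once, get punished ever after
    print(play(tit_for_tat, tit_for_tat, 100))     # (300, 300): mutual cooperation wins out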

At a fancy level, this is sometimes ascribed to a theory of mind, where "you think that I think that you think that I think... so let's call the whole thing off" - actually, this is a shortcut - you don't need a theory of mind to explain cooperative strategies in dumb animals - you just need a population carrying out the iterative procedure - the cooperative strategy has higher survival value over the multiple encounters and multiple individuals. What "empathy" does is simply allow planning, so you don't have to go through all these iterations to learn the better solution - you just imagine them. So instead of being dead like both Iago and Othello, or Macbeth and his wife, you choose life.

There are exceptions - the lone individual making a single encounter has an incentive to renege on this. In social terms, this is why villages distrusted travellers - they knew that the traveller was going to try and game their trust without having to put up with any tit-for-tat strategy, as they would be long gone before the second round. I'm wondering if this also explains why people get more "conservative" as they get older - they have less to lose as they approach death in terms of retaliation or exclusion.
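A hedged back-of-the-envelope version of the traveller's calculation, reusing the toy payoffs above and assuming a probability p that there will be another encounter (villager: p high; traveller, or someone near the end of the game: p low):

    # Against a tit-for-tat partner: cooperate forever and collect R each round,
    # or defect, grab T once, and get mutual defection P thereafter.
    # Geometric sums over the continuation probability p (toy numbers as before).
    T, R, P = 5, 3, 1

    def cooperate_forever(p):
        return R / (1 - p)

    def defect_forever(p):
        return T + p * P / (1 - p)

    # Cooperation wins once p >= (T - R) / (T - P), i.e. 0.5 with these numbers.
    for p in (0.1, 0.5, 0.9):
        print(p, round(cooperate_forever(p), 2), round(defect_forever(p), 2))

On these made-up numbers the crossover is at a 50% chance of meeting again, which is one way of reading both the traveller case and the less-to-lose-near-death case.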

So this is explored quantitatively in some interesting real-world scenarios in this paper on what I'm calling emergent morality.

Now just how does this get adopted (or rejected) as a social norm/ethic? Well, we need to run the population dynamics together with some model of encounters - how many people are a) only going to meet a group once or b) going to exit the game (i.e. die) real soon?

This would then give us a (stable?) distribution of cooperative versus selfish behaviour. Note that when I say "selfish", I mean rational selfish in a short-term way - the cooperative players are also rational selfish, but in a longer-term sense (they iterate, whether really or imaginatively).
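One hedged way to sketch that: a toy replicator model where a fraction Q of encounters are one-shot (travellers, players about to exit) and the rest last ROUNDS rounds. Every parameter here is an illustrative assumption, not an estimate of anything real.

    # Cooperators play tit-for-tat, defectors always defect; payoffs as before.
    T, R, P, S = 5, 3, 1, 0
    ROUNDS = 10   # length of a repeated encounter
    Q = 0.2       # fraction of encounters that are one-shot

    def avg(one_shot, repeated):
        # Expected payoff per encounter, mixing one-shot and repeated meetings.
        return Q * one_shot + (1 - Q) * repeated

    PAY = {('C', 'C'): avg(R, ROUNDS * R),
           ('C', 'D'): avg(S, S + (ROUNDS - 1) * P),   # suckered once, then punishes
           ('D', 'C'): avg(T, T + (ROUNDS - 1) * P),
           ('D', 'D'): avg(P, ROUNDS * P)}

    x = 0.5   # initial share of cooperators
    for generation in range(200):
        fit_c = x * PAY[('C', 'C')] + (1 - x) * PAY[('C', 'D')]
        fit_d = x * PAY[('D', 'C')] + (1 - x) * PAY[('D', 'D')]
        mean = x * fit_c + (1 - x) * fit_d
        x = x * fit_c / mean   # replicator update: strategies grow with relative fitness
    print(f"cooperator share after 200 generations: {x:.2f}")

With these numbers the mix tips all the way to cooperation or all the way to defection depending on the starting share and on Q, so the "stable?" question really comes down to the encounter/exit model.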

We can then extend this to include a small number of mutants (bad apples) that engage in Byzantine behaviour (Loki, disrupters, etc.). And then we could use this to design mechanisms for society that lead to fair collective outcomes (aka maximise social welfare) despite some fraction of selfish players and some (usually small) fraction of byzantine players. Such algorithms exist (see the literature on BAR Fault Tolerance for Cooperative Services), but they assume that you "just" use the altruistic players to improve the performance of a system designed for selfish/rational and byzantine/bad nodes.
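As a toy illustration (my own extension of the sketch above, not the BAR construction itself), hold a byzantine fraction fixed at always-defect, immune to selection, let the rest of the population evolve, and see how much of that it takes to drag the rational players into defection. Shorter encounters (ROUNDS = 3) are used here so the tipping point is visible:

    # Same toy payoffs; only the non-byzantine part of the population evolves.
    T, R, P, S = 5, 3, 1, 0
    ROUNDS, Q = 3, 0.2
    avg = lambda one_shot, repeated: Q * one_shot + (1 - Q) * repeated
    PAY_CC, PAY_CD = avg(R, ROUNDS * R), avg(S, S + (ROUNDS - 1) * P)
    PAY_DC, PAY_DD = avg(T, T + (ROUNDS - 1) * P), avg(P, ROUNDS * P)

    def final_cooperation(byz, x=0.6, generations=200):
        # x is the cooperator share among rational players; byz always defects.
        for _ in range(generations):
            coop = (1 - byz) * x   # population-wide share actually cooperating
            fit_c = coop * PAY_CC + (1 - coop) * PAY_CD
            fit_d = coop * PAY_DC + (1 - coop) * PAY_DD
            mean = x * fit_c + (1 - x) * fit_d
            x = x * fit_c / mean   # replicator step on the rational players only
        return x

    for byz in (0.0, 0.1, 0.3):
        print(byz, round(final_cooperation(byz), 2))

With these made-up numbers and a 60% cooperative starting mix, a byzantine share of around a quarter is enough to tip the rational players into defection as well - which is exactly where a mechanism (reputation, exclusion, tit-for-tat at the system level) would have to step in.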

I'm thinking more: how do you build such algorithms for systems like Wikipedia, or social media content moderation, or even liquid, fully online democracy?

