> of your interaction history with others. A nym who's lying too much
> will accrue negative mana very quickly.
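The quoted "mana" idea can be sketched in a few lines. This is a minimal
illustration of the claim, not any real system: the class name, penalty,
and reward values are assumptions, chosen so that lies cost more than
truths earn.

```python
class Nym:
    """A pseudonym carrying a running 'mana' score."""

    def __init__(self, name):
        self.name = name
        self.mana = 0  # starts neutral

    def record_claim(self, was_truthful, penalty=2, reward=1):
        # Lies cost more than truths earn, so a nym who's lying
        # too much accrues negative mana very quickly.
        if was_truthful:
            self.mana += reward
        else:
            self.mana -= penalty


liar = Nym("mallory")
for verdict in [False, False, True, False]:
    liar.record_claim(verdict)
print(liar.mana)  # 3 lies * -2 + 1 truth * +1 = -5
```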

But other people might be inclined to tag along anyway. A reputation
system will identify nyms with bad reputations well enough, but how
will people *use* it? A favorable reputation is nothing per se; it
only becomes useful through what others make of it, and reputation is
not a single measure. People react differently to the same actions.
If someone advocates killing blacks, say, his reputation will rise
among those who share that opinion but fall among those who oppose
it. What I'm getting at is that a reputation system only lets a nym
build up a reputation. People then react to it.
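The point that reputation is not a single measure can be sketched as
per-observer scoring: each observer weighs the same action through
their own values, so one action raises a nym's standing with some
observers and lowers it with others. The trait names, weights, and
function below are made up purely for illustration.

```python
def update_reputation(reputation, action_traits, observer_values):
    """One observer's adjusted view of a nym: the action's traits
    weighted by how much this observer values each trait."""
    delta = sum(observer_values.get(trait, 0) * weight
                for trait, weight in action_traits.items())
    return reputation + delta


action = {"racist_advocacy": 1.0}        # the action, described as traits
sympathizer = {"racist_advocacy": 1.0}   # approves of such advocacy
opponent = {"racist_advocacy": -1.0}     # condemns it

print(update_reputation(0.0, action, sympathizer))  # 1.0  -- goes up
print(update_reputation(0.0, action, opponent))     # -1.0 -- goes down
```

The same action produces opposite reputation changes, which is why a
reputation system can only publish history; the reaction stays
subjective.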

> > overwhelming probability that a group will form around some people,
> > who have charisma, or who can give others something, whether it is
> > power, money (or ability to "get" stuff), or just about anything
> > people would want. Some of these groups will want power.
> 
> I don't see how this is relevant to our conversation.

Your point, I believe, was that the ability to have knowledge of
others' actions would lead to increased cooperation. That cuts both
ways: groups of cooperating people can work against other groups of
cooperating people. People assess others' reputations on different
grounds, so they will be attracted to different groups, based on
their subjective assessment of the various traits a person/nym
displays.

> > I'm not sure what you mean by "mutually identifiable" agents. If
> > you mean that people seeking power by reducing others' freedoms,
> 
> No, mutually identifiable means exactly that: ability to tell that
> you've interacted with that agent before. In human agents this means
> ability to recall some other monkey's biometrics.

OK, that was my second possibility. I'm just not sure it would work
so well at larger scale. Reputation systems, AFAIK, have only been
used in small scenarios: you observe an agent doing one thing, then
extrapolate the probability of that agent's future actions from that
knowledge. The observed actions are very narrow, so I'm unsure it
would scale well, and unsure it would prevent people fucking other
people over for power, as happens now.

-- 
Vincent Penquerc'h 
