> When I have some free time and if Don is interested, I may run some 
> simulations for the method I proposed and post some results.  Don, 
> please let me know whether your mind is made up already.

I am pretty sure I will now stick with my original idea of just choosing
the best of N random selections.

Christoph Birk did a simulation and sent the program he used.  I used
it to try variations of that scenario.

What I have decided on, based on several different simulations,  is the
following:

   1.  Fix N at 3.

   2.  Pick one unpaired player P at random.

   3.  Loop N times, doing the following:

   4.     Randomly select an unpaired player R as a candidate
          opponent for P.  R must not be P itself.

   5.     Compare the rating difference of the current candidate
          to that of the best candidate found so far.

   6.     If the difference is smaller, R becomes the new best
          candidate.

   7.  Repeat the whole procedure until all players are paired.
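
In Python, the procedure above can be sketched roughly like this (a
minimal illustration of my own, not the actual CGOS code; the function
and player names are made up):

```python
import random

def pair_players(ratings, n=3):
    """Pair everyone in `ratings` (player -> rating) using best-of-N
    random selection.  Returns a list of (player, opponent) pairs."""
    unpaired = list(ratings)
    pairs = []
    while len(unpaired) >= 2:
        # Step 2: pick one unpaired player P at random.
        p = random.choice(unpaired)
        unpaired.remove(p)
        best = None
        # Steps 3-6: N random draws; duplicate draws are allowed,
        # only P itself is excluded (it was already removed).
        for _ in range(n):
            r = random.choice(unpaired)
            if best is None or abs(ratings[p] - ratings[r]) < \
                               abs(ratings[p] - ratings[best]):
                best = r
        unpaired.remove(best)
        pairs.append((p, best))
    return pairs
```

With an odd number of players, one player is simply left unpaired for
the round, as you would expect.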

There are several differences between this and the simulation Christoph
did; I modified his program accordingly and ran simulations for
everything I tried.

One of the differences is that I decided not to use top-down pairing.
Top-down did not appear to be as fair as random selection.

Another difference is the selection procedure.  Christoph's program was
careful to choose N DIFFERENT candidate opponents; my procedure just
makes N random choices, even if 2 or more of the choices happen to hit
on the same player (as long as it's not the player being paired).

Christoph supplied different formulas for dynamically altering N based
on my suggestion that this might be tried, but after running a number
of simulations (and reasoning about it) it's easy to see that as N goes
up, you tend to play a narrower range of ratings.  With a fixed N you
will get consistent rating variation, assuming approximately equal
rating distributions of the active players on CGOS.  On the other hand,
if you let N increase with the number of players, you will be more and
more likely to play those very close to your own rating as the number
of players increases.  There are good arguments for both methods, but I
prefer the simpler choice: it gives consistent behavior and is easier
to explain.
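
For reference, one of Christoph's dynamic formulas (quoted further down
in this thread) is N = 1 + sqrt(NP - npaired - 2).  A quick sketch of
my own shows how fast it grows; rounding down to an integer is my
assumption, not part of the formula as given:

```python
import math

def dynamic_n(num_players, num_paired=0):
    """Christoph's suggested dynamic trial count:
    N = 1 + sqrt(NP - npaired - 2), rounded down (my assumption)."""
    return 1 + int(math.sqrt(num_players - num_paired - 2))

# With 11 unpaired players N is 4; with 102 it is 11, so a large
# pool increasingly concentrates pairings near your own rating.
```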

It also seems a little better to choose according to rating rather than
rank.  I went to the trouble of modeling the rating distribution of
CGOS in my experiments, using all the players on CGOS who had played
enough to get fully established ratings.
 
If you are the strongest player on CGOS, your odds of playing the
second strongest player of course decrease as the number of players on
CGOS increases (if you fix N).  But a fixed constant N for the number
of selection trials will tend to keep the odds of choosing a player
within 100 points of you constant, no matter how many players are on
CGOS, assuming a similar distribution of ratings.  I considered N=4
too, but it's starting to get way too large - you tend to get pushed
into a tight little box where you play only 2 or 3 opponents most of
the time if there are not many players on CGOS.
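
The constant-odds claim is easy to check with a quick Monte Carlo
sketch (my own illustration; I use a normal distribution here as a
stand-in for the real CGOS rating distribution):

```python
import random

def close_opponent_rate(pool_size, n=3, trials=10000, window=100):
    """Estimate the probability that best-of-n random selection finds
    an opponent within `window` rating points, for a given pool size."""
    hits = 0
    for _ in range(trials):
        # Stand-in for the CGOS rating distribution.
        pool = [random.gauss(1400, 400) for _ in range(pool_size)]
        me = pool.pop()
        # Best-of-n: draws may repeat, as in the procedure above.
        best = min((random.choice(pool) for _ in range(n)),
                   key=lambda r: abs(r - me))
        if abs(best - me) <= window:
            hits += 1
    return hits / trials
```

Under this model the rate barely moves between a pool of 20 and a pool
of 200, which is the point: fixed N gives you roughly the same chance
of a close opponent regardless of how busy the server is.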

Did I post about my tournament pairing results?  I did a full-blown
simulation of all the CGOS players using the elimination selection idea
many days ago.  My suspicions were confirmed that it doesn't have the
behavior one would like.  A very strong player is very likely (on
average) to play very weak players, and the distribution of opponents
has ugly pockets in it.  For instance, the best player may be
significantly more likely to play an 800-rated player than a 1500-rated
player, and so on.  The graph of who you are expected to play is not a
smooth curve; it is a series of progressively higher mountains.  This
was even the case when I randomized the pairings, which is not the
correct way to run an elimination tournament but would be fine for
pairing purposes ... if only it worked.  It's a shame; I loved the idea
and thought it quite elegant and beautiful.

I'm not overly concerned about the "mountain" behavior in itself; it's
still a logical pairing system, but you must play opponents with
disparate ratings an alarming percentage of the time.

So I did not try the Swiss variation of this.  Swiss has very much in
common with elimination, and you can expect similar anomalies if you
are using it just for pairing players.

There are variations of Swiss that could be tried, but I won't try
them.  One idea is called "accelerated pairings", where you basically
start a tournament with the assumption that the first round has already
been played and there were no upsets.  Of course you don't actually
SCORE the phantom round; you use it for pairing purposes only - it
doesn't change the number of rounds you actually play.
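
A rough sketch of that phantom-round idea, for an even-sized field (my
own illustration, not code from any actual Swiss implementation):

```python
def accelerated_round_one(ratings):
    """Pair round one as if a phantom round had already been played
    with no upsets: the top half of the field gets a phantom point,
    so round one pairs top half vs top half and bottom vs bottom.
    The phantom point is used for pairing only, never scored.
    `ratings` maps player -> rating; assumes an even-sized field."""
    ranked = sorted(ratings, key=ratings.get, reverse=True)
    half = len(ranked) // 2
    # Phantom score groups: 1-point group and 0-point group.
    pairs = []
    for group in (ranked[:half], ranked[half:]):
        # Standard Swiss pairing within a score group: the top half
        # of the group plays the bottom half of the group.
        mid = len(group) // 2
        pairs += list(zip(group[:mid], group[mid:]))
    return pairs
```

The effect is that the strongest players meet each other a round
earlier than in plain Swiss, instead of crushing the tail of the field
in round one.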

So I guess I am back to KISS, where I should have stayed :-) 

- Don
 


On Sun, 2006-10-15 at 23:51 -0500, Matt Gokey wrote:
> There may be some confusion about what the assumptions and goals are for 
> the CGOS pairing objectives.  I am hearing conflicting statements.  So I 
> for one am unsure ;-)
> 
> Don Dailey wrote (from Re: [computer-go] A new pairing system idea for 
> CGOS, 10/8/2006):
>  >> Your basic idea is sound - but it's all in the formula for point 2. I
>  >> would not base it on the rating, I would base it on the RANKING
>  >
>  >>> Why the ranking over the rating? Ranking eliminates the real
>  >>> strength differences.  A cluster of 3 players near 1700 and then
>  >>> one at 1300 looks much different than rank 1, 2, 3, 4.
>  >
>  > Because a cluster of 2 or 3 bots near 2100, all of them Mogo versions,
>  > would mean all or mostly mogo vs mogo. One of the design goals of CGOS
>  > is variety. I don't want pairing based on rating pockets.
> 
> In the current thread Don also wrote:
>  > [snip]
>  > I would prefer that it's a fairly rare event for a 2100 player to play
>  > a 500 player unless no other players are available [snip]
> 
> And I'd have to go back through the messages, but I'm pretty sure I've 
> heard it stated several times that CGOS should prefer pairing players of 
> similar "strength" more often than those of dissimilar "strength" 
> (whatever this means).  Sometimes ratings are used and sometimes 
> rankings are used, even though the two have different meanings and 
> implications.
> 
> If there are clusters of ratings in the pool with shelves (which there 
> likely will be) then pairing by ranking will likely result in pairing 
> significantly different strength players more often than you want 
> (because some closely ranked players will be far apart in relative 
> rating strength).  In the "real" world the ratings will not be 
> distributed evenly.  Also there may not be drastic drop-offs from the 
> 2100 to 500 range, but there will certainly be clusters and thus 
> drop-offs.
> 
> Both methods (by rating or by ranking) will tend to favor pairing 
> different but similar strength versions of the same program together 
> (i.e. the Mogo example Don cited), since presumably if all the versions 
> of Mogo have ratings near 2100 then they will be ranked closely together 
> as well.  But is this really a big concern?  Are there multiple versions 
> of the same program being played at the same time commonly?  And if 
> there are and they are of similar strength then why wouldn't one want to 
> pair them, like any others? And how would a rank based method improve this?
> 
> Don then also wrote:
> > So I'm starting to believe I will use one of Christoph Birk's formulas
> > with best of N selection:  N=1+sqrt(NP-npaired-2)
> > 
> > With this formula (as well as the others) and pairing from the top-down,
> > I think gives CGOS a nice distribution.  
> The only issues I see with this technique, if I understand it correctly, 
> is that some players can never be paired, and that clusters will be 
> ignored, therefore in some cases pairing weak programs with stronger 
> ones too frequently.
> 
> IMO we need to be more explicit.  Don asked about how he should do 
> pairing on CGOS so I was just offering a suggestion, but without 
> describing the problem explicitly and the objectives it makes it 
> difficult.  Here are some key assumptions/requirements I thought of that 
> apply to CGOS and that would create an effective pairing distribution:
> 1.  The pool of players will not be uniformly distributed by rating, 
> there will likely be clusters and isolated points.
> 2.  The pool of available players can change between each round.
> 3.  In any round, it should be possible that any available player can be 
> paired with any other available player.  The probability of this 
> happening for any specific pairing depends on the pairing algorithm, but 
> between any two players it should not be zero nor 1 (unless of course 
> there are only two players in the pool).
> 4.  In each round, pairings of players of similar strength are preferred 
> with the probability of the pairings correlated to the relative strength 
> (however defined).  How specifically this probability/selection is made 
> depends on the algorithm and its sensitivity to the relative strengths.
> 
> I don't know if these are the same assumptions and objectives everyone 
> had in mind or not nor if they are even valid or complete, but it would 
> make sense to define/communicate what we want to achieve before we try 
> to devise a method for it.  Don? Anyone else?
> 
> The rating-weighted random pairing method I suggested was meant to 
> handle these assumptions and satisfy these requirements.
> 
> To test the pairing methods, I suggest something similar to what Don did 
> with N evenly distributed players and N current CGOS players, plus also 
> creating sets of N randomly distributed players; and then run many 
> simulations of sample pairings on these groups for different values of N 
> and plot/analyze the results.
> 
> When I have some free time and if Don is interested, I may run some 
> simulations for the method I proposed and post some results.  Don, 
> please let me know whether your mind is made up already.
> 
> Matt
> _______________________________________________
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
