Edward,
We usually associate playing strength directly with the software, but
it's clear that this is not really correct. We have to consider the
whole game-playing system: the machine or machines it runs on as well
as the software. CGOS doesn't really distinguish the two.
To truly evaluate the software itself, the hardware it runs on would
have to be taken into account as well.
On Dec 10, 2007, at 11:53 AM, Edward de Grijs wrote:
>> Nobody really believes ratings are 100% "right on the money" accurate.
>>
>> But it's silly not to use the most correct method possible. Ratings
>> are "a very useful approximation to reality" and you might as well
>> get as close to that reality as you can.
>>
>> - Don
>
> But then we have to t...
Dave Dyer wrote:
> Arguing whether method "A" or method "B" rates a program more
> correctly is really close to arguing how many angels can dance
> on the head of a pin. Ratings, at best, are based on mathematical
> models with many simplifying assumptions. Ratings are not reality.
>
Nobody really believes ratings are 100% "right on the money" accurate.
> (p1,p2,h,t,r) [player 1, player 2, handicap, time, result]
i should have said that i mean "time" here to be the
actual date/time that the contest occurred, since skill
can (and often does) change over time.
also the p1,p2 should be taken to be ordered, so that we
know who was black and who was white.
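
A rough sketch of such a record, keeping the players ordered by colour
and the actual date/time of the contest, might look like the following
(Python; the class and field names are just illustrative choices, not
anything CGOS or this list actually defines):

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class GameRecord:
    """One (p1, p2, h, t, r) record as described above."""
    p1: str              # black
    p2: str              # white
    handicap: int        # handicap stones; komi assumed tied to handicap
    played_at: datetime  # actual date/time the contest occurred
    result: float        # 1.0 = black (p1) win, 0.0 = white (p2) win

Keeping p1/p2 ordered and storing the real timestamp is what lets a
rating method model colour and handicap effects, and skill that drifts
over time.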
> Ratings are not reality.
i think that we can probably say that a rating system
for, say, 19x19 go with komi relative to handicap and
time controls roughly the same for each contest (or not,
you choose!) is anything that turns a set of:
(p1,p2,h,t,r) [player 1, player 2, handicap, time, result]
tuples into a number (a rating) for each player.
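
To make that definition concrete, here is one minimal example (again
Python, purely illustrative) of "anything that turns such a set of
tuples into a number per player": a plain Elo-style update, processed
in date order, that simply ignores the handicap. The function name, the
k-factor of 16 and the 1500 starting rating are arbitrary choices, not
a claim about how CGOS or anyone else computes ratings.

def elo_ratings(records, k=16.0, initial=1500.0):
    """Map (p1, p2, h, t, r) tuples to a rating per player.

    r is taken as 1.0 for a win by p1 (black) and 0.0 for a win by
    p2 (white); the handicap h is ignored just to keep this short.
    """
    ratings = {}
    # Process games in the order they were actually played, since
    # skill can (and often does) change over time.
    for p1, p2, h, t, r in sorted(records, key=lambda g: g[3]):
        r1 = ratings.setdefault(p1, initial)
        r2 = ratings.setdefault(p2, initial)
        expected = 1.0 / (1.0 + 10.0 ** ((r2 - r1) / 400.0))  # P(p1 wins)
        ratings[p1] = r1 + k * (r - expected)
        ratings[p2] = r2 + k * ((1.0 - r) - (1.0 - expected))
    return ratings

# e.g. elo_ratings([("alice", "bob", 0, "2007-12-10", 1.0)])
#      -> {'alice': 1508.0, 'bob': 1492.0}

Any scheme that maps the same records to numbers in a more principled
way (modelling handicap, colour, and change over time) is still a
"rating system" under this definition.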
Arguing whether method "A" or method "B" rates a program more
correctly is really close to arguing how many angels can dance
on the head of a pin. Ratings, at best, are based on mathematical
models with many simplifying assumptions. Ratings are not reality.