On 31-mrt-08, at 20:28, Don Dailey wrote:
You could be
blind-siding the program.
I think this is the crux of the matter. Not just in MC but in Go
programming in general. If you add 'strong' knowledge you can create
blind-spots. For example, I guess a ko rarely gets played during
playout
Hi Lukasz
In Rémi's paper about Bradley-Terry models he found a way to give a
comparable gamma score to things that are different, for instance:
capturing a stone vs. a given 3x3 pattern.
His model is much more general, but has fewer patterns (nowhere near
the 200K patterns of my system). Additi
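The gamma idea can be pictured with a small sketch. In a Bradley-Terry feature model, each feature (capturing a stone, a particular 3x3 pattern, ...) carries a strength gamma; a move's strength is the product of its features' gammas, and the probability of selecting a move is its strength divided by the sum over all legal moves. The feature names and gamma values below are made-up illustrations, not trained numbers:

```python
# Hedged sketch of move selection under a Bradley-Terry feature model.
# The gamma values are illustrative assumptions, not trained numbers.
from typing import Dict, List

# Each feature carries a gamma strength.
GAMMA: Dict[str, float] = {
    "capture": 30.0,        # hypothetical: capturing a stone
    "pattern_3x3_17": 4.0,  # hypothetical: some 3x3 shape match
    "escape_atari": 12.0,
    "plain": 1.0,           # baseline for otherwise featureless moves
}

def move_strength(features: List[str]) -> float:
    """A move's Bradley-Terry strength is the product of its feature gammas."""
    s = 1.0
    for f in features:
        s *= GAMMA[f]
    return s

def move_probabilities(moves: Dict[str, List[str]]) -> Dict[str, float]:
    """P(move) = strength(move) / sum of strengths of all legal moves."""
    strengths = {m: move_strength(fs) for m, fs in moves.items()}
    total = sum(strengths.values())
    return {m: s / total for m, s in strengths.items()}

# Three candidate moves with different feature sets:
probs = move_probabilities({
    "A": ["capture"],                         # strength 30
    "B": ["pattern_3x3_17", "escape_atari"],  # strength 4 * 12 = 48
    "C": ["plain"],                           # strength 1
})
```

Because strengths multiply, a move matching several modest features can outrank a move with one strong feature, which is exactly what makes the gammas comparable across feature types.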
On Mon, Mar 31, 2008 at 03:12:39PM -0700, Christoph Birk wrote:
>
> On Mar 31, 2008, at 1:05 PM, Don Dailey wrote:
>>
>>
>> Christoph Birk wrote:
>>>
>>> On Mar 31, 2008, at 10:48 AM, Mark Boon wrote:
I don't know about this. I'm pretty sure MoGo checks if the stone can
make at least two
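The check being described is presumably of the "can the group in atari reach two liberties by extending?" kind. A minimal sketch, assuming a dict-based board and ignoring any captures the extending move would make (a real implementation must handle those):

```python
# Minimal sketch of a "can this group get two liberties by extending?" test,
# loosely in the spirit of the check mentioned above. Board is a dict mapping
# (x, y) -> 'b' or 'w'; on-grid points absent from the dict are empty.
# Captures made by the extending move are ignored here (a simplification).
SIZE = 9

def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < SIZE and 0 <= q[1] < SIZE:
            yield q

def group_and_liberties(board, p):
    """Flood-fill the group containing p; return (stones, liberties)."""
    color = board[p]
    stones, libs, stack = {p}, set(), [p]
    while stack:
        for q in neighbors(stack.pop()):
            if q not in board:
                libs.add(q)
            elif board[q] == color and q not in stones:
                stones.add(q)
                stack.append(q)
    return stones, libs

def escapes_atari(board, p):
    """If the group at p is in atari, does extending at its single
    liberty give it at least two liberties?"""
    _, libs = group_and_liberties(board, p)
    if len(libs) != 1:
        return False  # not in atari
    lib = next(iter(libs))
    trial = dict(board)
    trial[lib] = board[p]  # play the extension
    _, new_libs = group_and_liberties(trial, lib)
    return len(new_libs) >= 2

# Black at (0,0) hemmed in by white at (1,0): extending at (0,1) succeeds.
open_board = {(0, 0): 'b', (1, 0): 'w'}
# With white also at (1,1), the extension still has only one liberty.
closed_board = {(0, 0): 'b', (1, 0): 'w', (1, 1): 'w'}
```

The point of such a test in a playout policy is to avoid "saving" moves that just push the group one step further down a working ladder.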
A recurrent concept popping up in discussions on how to improve
playouts is "balance". So I would like to try to share my philosophy
behind the playouts of Valkyria and how I define "balance" and how it
relates to the evaluation of go positions.
*Background
In an old school program the eva
don,
> But I also discovered that there seems to be no benefit whatsoever in
> removing them from the play-outs. I have no real explanation for
> this. But it does tell me that the play-outs are very different in
> nature from the tree - you cannot just use the same algorithms for
> sele
Hi Magnus,
Your post makes a great deal of sense. I agree with all the points you
have stated. I don't think you have ever made an illogical post, as
most of us have (including myself); your posts are always well thought
out and worded.
I have a response to this comment:
Still I think pred
Quoting Don Dailey <[EMAIL PROTECTED]>:
I have a response to this comment:
Still I think predicting the best moves is very important in the
tree part, but this may be much less important in the playouts, and
perhaps even detrimental as some people have experienced.
A class of "bad"
steve uurtamo wrote:
> don,
>
>
>> But I also discovered that there seems to be no benefit whatsoever in
>> removing them from the play-outs. I have no real explanation for
>> this. But it does tell me that the play-outs are very different in
>> nature from the tree - you cannot just
I think there was some confusion in Don's post on ``out of atari'' in
play-outs.
For one thing, I do not agree with the maximal-information argument.
Testing ``out of atari'' moves is worthwhile not because they might be
good or might be bad, but merely because they might be good. By contrast, you
shoul
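One way to picture the distinction: sampling moves "because they might be good" means weighting candidates by an optimistic prior rather than uniformly. A hedged sketch of such a playout policy; the move categories and weights below are illustrative assumptions, not tuned values:

```python
# Hedged sketch: a playout policy that samples move candidates in
# proportion to an optimistic prior ("might be good") instead of
# uniformly. Categories and weights are illustrative assumptions.
import random

# Hypothetical prior weights: how promising each category of move is.
PRIOR = {"capture": 10.0, "out_of_atari": 6.0, "pattern": 3.0, "other": 1.0}

def pick_playout_move(candidates, rng):
    """candidates: list of (move, category). Weighted sample by PRIOR."""
    weights = [PRIOR[cat] for _, cat in candidates]
    move, _ = rng.choices(candidates, weights=weights, k=1)[0]
    return move

rng = random.Random(42)
cands = [("A", "capture"), ("B", "out_of_atari"), ("C", "other")]
counts = {"A": 0, "B": 0, "C": 0}
for _ in range(10000):
    counts[pick_playout_move(cands, rng)] += 1
# Captures get sampled most often, plain moves least often.
```

Under this view an informative-but-probably-bad move gets little weight, which is exactly the disagreement with the maximal-information argument.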
Hi Jacques
>
> No. For a reason I don't understand, I get something like:
>
> Distribution fit expected 0.1 found 0.153164
> Distribution fit expected 0.2 found 0.298602
> Distribution fit expected 0.3 found 0.433074
> Distribution fit expected 0.4 found 0.551575
> Distribution fit expected 0.5 fo
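For reference, here is one plausible way such "expected vs. found" numbers are produced: draw samples that should be Uniform(0,1) and compare the expected CDF value at a few thresholds with the fraction actually observed. This is an assumed reconstruction of the check, not Jacques' actual code; with an unbiased generator "found" tracks "expected" closely, so values like 0.153 against 0.1 point at a biased source:

```python
# Hedged sketch of a distribution-fit check: compare the expected CDF of
# Uniform(0,1) at a few thresholds against the empirical fraction found.
import random

def distribution_fit(samples, thresholds):
    n = len(samples)
    return {t: sum(1 for s in samples if s < t) / n for t in thresholds}

rng = random.Random(1)
samples = [rng.random() for _ in range(100000)]
fit = distribution_fit(samples, [0.1, 0.2, 0.3, 0.4, 0.5])
for t, found in fit.items():
    print(f"Distribution fit expected {t} found {found:.6f}")
```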
Do you have a link to those papers?
-Josh
> My go-programming efforts are very much concentrated on patterns.
> (maybe I have been influenced by the Kojima-papers)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mail
Jonas Kahn wrote:
> I think there was some confusion in Don's post on ``out of atari'' in
> play-outs.
> For one thing, I do not agree with the maximal information argument.
>
This is more a theory than an argument. Maybe I didn't express it very
well either.
It's a pretty solid principle i
Don,
I'd strongly agree. You must know whether ladders work
or not, whether a nakade play works or not, whether
various monkey jumps and hanes and so forth succeed or
not. In and of themselves, few moves are objectively
good or bad in any sense - one has to try them and see
what happens.
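"Try them and see what happens" is essentially Monte-Carlo evaluation: play each candidate, finish the position with random playouts, and trust the observed win rates. A self-contained toy sketch using Nim (last stone wins) as a stand-in for a Go tactic, so no board code is needed; the idea transfers unchanged:

```python
# Hedged sketch of "try them and see": evaluate each candidate move by
# playing it and running random playouts, keeping the one that wins most.
# Nim (normal play: taking the last stone wins) stands in for a Go tactic.
import random

def playout(heaps, to_move, rng):
    """Finish the game with uniformly random moves; return the winner (0/1)."""
    heaps = list(heaps)
    while any(heaps):
        i = rng.choice([i for i, h in enumerate(heaps) if h > 0])
        heaps[i] -= rng.randint(1, heaps[i])
        if not any(heaps):
            return to_move  # took the last stone: wins
        to_move = 1 - to_move
    return 1 - to_move  # degenerate case: empty position passed in

def best_move(heaps, player, n_playouts, rng):
    """Try every legal move and estimate its win rate by random playouts."""
    best, best_rate = None, -1.0
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            nxt = list(heaps)
            nxt[i] -= take
            if not any(nxt):
                return (i, take)  # immediate win: no reading needed
            wins = sum(playout(nxt, 1 - player, rng) == player
                       for _ in range(n_playouts))
            rate = wins / n_playouts
            if rate > best_rate:
                best, best_rate = (i, take), rate
    return best

rng = random.Random(7)
# From heaps [1, 2] the only winning move is taking 1 from the second heap.
move = best_move([1, 2], player=0, n_playouts=200, rng=rng)
```

The same loop, with a Go board and a ladder or nakade as the question, is what "one has to try them and see" amounts to in a Monte-Carlo program.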
Some fo
terry mcintyre wrote:
> Don,
>
> I'd strongly agree. You must know whether ladders work
> or not, whether a nakade play works or not, whether
> various monkey jumps and hanes and so forth succeed or
> not. In and of themselves, few moves are objectively
> good or bad in any sense - one has to tr