Re: [computer-go] Great Wall Opening by Bruce Wilcox

2009-10-17 Thread David Ongaro

Ingo Althöfer wrote:

Now I made some autoplay tests, starting from the end position
given in the appendix of this mail.
* one game with Leela 3.16; Black won.
* four games with MFoG 12.016; two wins each for Black and White.
So there is some indication that the Great Wall works even
for bots, which are not affected by psychology.

I would like to know how other bots perform in autoplay
after this opening.
  
Have you tried a random setup for the first 5 stones for Black and
compared the results? If there's no significant difference, I can't see
the point of your question.


Regards

David



[computer-go] Re: Great Wall Opening by Bruce Wilcox

2009-10-17 Thread Ingo Althöfer
David Ongaro wrote:
>Ingo Althöfer wrote:
>> Now I made some autoplay tests, starting from the end position
>> given in the appendix of this mail.
>> * one game with Leela 3.16; Black won.
>> * four games with MFoG 12.016; two wins each for Black and White.
>> So there is some indication that the Great Wall works even
>> for bots, which are not affected by psychology.
>> ...
>   
> Have you tried a random setup for the first 5 stones for Black
> and compared the results?

Yes, with MFoG: first 5 moves by Black on random points vs.
first 4 moves by White on the 4-4 points.

The result was a clear advantage for White.

> If there's no significant difference, I
> can't see the point of your question.

So, now you should see the point ;-)

Ingo.



Re: [computer-go] Great Wall Opening by Bruce Wilcox

2009-10-17 Thread Petr Baudis
On Fri, Oct 16, 2009 at 08:55:34PM +0200, "Ingo Althöfer" wrote:
> In the year 2000 I bought the book
> "EZ-GO: Oriental Strategy in a Nutshell",
> by Bruce and Sue Wilcox. Ki Press; 1996.
> 
> I can only recommend it for the many fresh ideas.
> A few days ago I found time to read it again.
> 
> This time I was impressed by Bruce Wilcox's strange 
> opening "Great Wall", where Black starts with a loose 
> wall made of 5 stones, spanning the whole board.
> 
> Bruce proposes to play this setup as a surprise weapon,
> even against stronger opponents.
> 
> Now I made some autoplay tests, starting from the end position
> given in the appendix of this mail.
> * one game with Leela 3.16; Black won.
> * four games with MFoG 12.016; two wins each for Black and White.
> So there is some indication that the Great Wall works even
> for bots, which are not affected by psychology.

In general, and especially in an environment as stochastic as MCTS, these are
awfully small samples. To get even within a +-10% confidence interval, you
need at least 100 (that is, ONE HUNDRED) games. Otherwise the results
aren't statistically meaningful at all, as I have so often painfully
discovered myself ;-) - they can be too heavily distorted.

-- 
Petr "Pasky" Baudis
A lot of people have my books on their bookshelves.
That's the problem, they need to read them. -- Don Knuth


Re: [computer-go] Great Wall Opening by Bruce Wilcox

2009-10-17 Thread Don Dailey
2009/10/17 Petr Baudis 

> On Fri, Oct 16, 2009 at 08:55:34PM +0200, "Ingo Althöfer" wrote:
> > In the year 2000 I bought the book
> > "EZ-GO: Oriental Strategy in a Nutshell",
> > by Bruce and Sue Wilcox. Ki Press; 1996.
> >
> > I can only recommend it for the many fresh ideas.
> > A few days ago I found time to read it again.
> >
> > This time I was impressed by Bruce Wilcox's strange
> > opening "Great Wall", where Black starts with a loose
> > wall made of 5 stones, spanning the whole board.
> >
> > Bruce proposes to play this setup as a surprise weapon,
> > even against stronger opponents.
> >
> > Now I made some autoplay tests, starting from the end position
> > given in the appendix of this mail.
> > * one game with Leela 3.16; Black won.
> > * four games with MFoG 12.016; two wins each for Black and White.
> > > So there is some indication that the Great Wall works even
> > > for bots, which are not affected by psychology.
>
> In general, and especially in an environment as stochastic as MCTS, these are
> awfully small samples. To get even within a +-10% confidence interval, you
> need at least 100 (that is, ONE HUNDRED) games. Otherwise the results
> aren't statistically meaningful at all, as I have so often painfully
> discovered myself ;-) - they can be too heavily distorted.
>

100 games doesn't even tell you much unless the difference is pretty large.


In the testing I do, 10,000 games between players are required before I
can start thinking about making a decision. When I tune an evaluation
function (and search algorithms) for chess by playing games against
various opponents, many small but useful evaluation parameters
contribute less than 10 Elo points to the strength. 10,000 games isn't
really enough to accept some changes, but I take it as a matter of faith
once the error margins are down to +/- a few Elo points. I have to do
this due to the limited resources I have available. If a change slows
the program down but appears to make up for it with extra quality, I am
even more paranoid about accepting it, because a few "random" slowdowns
that each have a chance of weakening the program can kill it.

I have found it very common to get what might seem to be a convincing
lead after 200 or 300 games, only to see it come crashing down. I have
ramped up the strength of the program by over 100 Elo with a large
number of small Elo improvements, but if I start accepting larger error
margins the changes become almost random.

Of course a few hundred games is plenty if you are talking about a major
improvement.
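
To put rough numbers on this, here is a back-of-the-envelope sketch using
the standard logistic Elo model with a normal approximation (not my actual
test setup; the thresholds are the bare minimum where the effect just
clears the 95% noise band, so a reliable test needs even more games):

    import math

    # Standard logistic Elo model: expected score for a given Elo edge.
    def win_prob(elo_diff):
        return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

    # Smallest n where the 95% half-width 1.96*sqrt(p*(1-p)/n) of the
    # measured win rate shrinks to the size of the effect itself.
    def games_needed(elo_diff):
        p = win_prob(elo_diff)
        effect = p - 0.5
        return math.ceil(p * (1.0 - p) * (1.96 / effect) ** 2)

    for d in (100, 30, 10, 5):
        print(d, games_needed(d))
    # prints roughly:
    # 100 ->     46   (a major improvement shows up fast)
    #  30 ->    514
    #  10 ->   4636
    #   5 ->  18553   (the regime of small evaluation tweaks)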

I know people who claim they can look at the games themselves and make
a good judgment. I don't even begin to believe that, because the human
brain is so suggestible. If you know what change you made and you look
at games, it's very difficult to stop the brain from interpreting many
of the moves in terms of that change. However, it's still useful to look
at games, with great caution, mainly to look for bugs and side effects -
and when you think you see something, you have to chase it down to check
that you saw what you think you saw!

- Don






Re: [computer-go] Re: Great Wall Opening by Bruce Wilcox

2009-10-17 Thread David Ongaro

Ingo Althöfer wrote:
> David Ongaro wrote:
>> Ingo Althöfer wrote:
>>> Now I made some autoplay tests, starting from the end position
>>> given in the appendix of this mail.
>>> * one game with Leela 3.16; Black won.
>>> * four games with MFoG 12.016; two wins each for Black and White.
>>> So there is some indication that the Great Wall works even
>>> for bots, which are not affected by psychology.
>>> ...
>>
>> Have you tried a random setup for the first 5 stones for Black
>> and compared the results?
>
> Yes, with MFoG: first 5 moves by Black on random points vs.
> first 4 moves by White on the 4-4 points.
>
> The result was a clear advantage for White.

So you tested just one game!?

>> If there's no significant difference, I
>> can't see the point of your question.
>
> So, now you should see the point ;-)

There goes my illusion that no professor would consider this to
have any statistical relevance, let alone significance.


Regards

David



Re: [computer-go] Re: Great Wall Opening by Bruce Wilcox

2009-10-17 Thread Michael Alford

David Ongaro wrote:
> Ingo Althöfer wrote:
>> David Ongaro wrote:
>>> Ingo Althöfer wrote:
>>>> Now I made some autoplay tests, starting from the end position
>>>> given in the appendix of this mail.
>>>> * one game with Leela 3.16; Black won.
>>>> * four games with MFoG 12.016; two wins each for Black and White.
>>>> So there is some indication that the Great Wall works even
>>>> for bots, which are not affected by psychology.
>>>> ...
>>>
>>> Have you tried a random setup for the first 5 stones for Black
>>> and compared the results?
>>
>> Yes, with MFoG: first 5 moves by Black on random points vs.
>> first 4 moves by White on the 4-4 points.
>>
>> The result was a clear advantage for White.
>
> So you tested just one game!?
>
>>> If there's no significant difference, I
>>> can't see the point of your question.
>>
>> So, now you should see the point ;-)
>
> There goes my illusion that no professor would consider this to
> have any statistical relevance, let alone significance.
>
> Regards
>
> David




FYI, I have seen variations of the "great wall" played many times,
usually by my Chinese friends. I have seen large knight moves, small
knight moves, one-space jumps, and combinations of these moves. It is
always White that plays this way, and always in a teaching game, the
object being to demonstrate to the weaker player the truth of the saying
"who controls the center wins the game". They would never play this way
in an even game; making the moves in the center at the start of the game
to play the great wall pattern is considered the same as giving a
handicap.


Michael


[computer-go] monte carlo

2009-10-17 Thread Folkert van Heusden
People,

I'm trying to implement a Monte Carlo algorithm in my go program. The
results so far are dramatic: the Elo rating of my go program drops from
1150 to below 700. I tried:
 - evaluating the number of captured stones
 - evaluating strategic elements (without MC this strategic eval gives
   it that 1150 Elo).
Currently my program can evaluate 500 positions per second and I let it
"think" for 5 seconds.
What could be the cause of these dramatic results? A wrong evaluation? Not
enough nodes processed?


Folkert van Heusden

-- 
MultiTail is a versatile tool for logfiles and the output of
commands. It offers: filtering, colorizing, merging,
different views. http://www.vanheusden.com/multitail/
--
Phone: +31-6-41278122, PGP-key: 1F28D8AE, www.vanheusden.com


Re: [computer-go] monte carlo

2009-10-17 Thread Petr Baudis
  Hi!

On Sat, Oct 17, 2009 at 05:02:33PM +0200, Folkert van Heusden wrote:
> I'm trying to implement a Monte Carlo algorithm in my go program. The
> results so far are dramatic: the Elo rating of my go program drops from
> 1150 to below 700. I tried:
>  - evaluating the number of captured stones
>  - evaluating strategic elements (without MC this strategic eval gives
>    it that 1150 Elo).
> Currently my program can evaluate 500 positions per second and I let it
> "think" for 5 seconds.
> What could be the cause of these dramatic results? A wrong evaluation? Not
> enough nodes processed?

  It's not clear what you mean by the "evaluation", or how you
integrate Monte Carlo into the rest of your program, so it's hard to
comment. But it takes some time to weed out the pretty basic bugs that
make your program play horribly yet don't make it lose every single
game - watch your program's evaluation and the Monte Carlo playouts
closely for anything fishy.
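
  One way to check the basic machinery is to run the same kind of pure
Monte Carlo move selection on a toy game where every number can be
verified by hand - a minimal self-contained sketch (tic-tac-toe standing
in for Go, and generic pure MC, not your engine's actual structure):

    import random

    # Pure Monte Carlo move selection on tic-tac-toe: for each legal
    # move, play random games to the end, keep the best win rate.
    WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),
                 (0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != '.' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def playout(board, to_move):
        # Uniformly random moves to the end; 'X', 'O' or None (draw).
        board, player = board[:], to_move
        while winner(board) is None and '.' in board:
            empties = [i for i, c in enumerate(board) if c == '.']
            board[random.choice(empties)] = player
            player = 'O' if player == 'X' else 'X'
        return winner(board)

    def best_move(board, to_move, n_playouts=300):
        opponent = 'O' if to_move == 'X' else 'X'
        scores = {}
        for m in [i for i, c in enumerate(board) if c == '.']:
            child = board[:]
            child[m] = to_move
            wins = sum(playout(child, opponent) == to_move
                       for _ in range(n_playouts))
            scores[m] = wins / n_playouts
        return max(scores, key=scores.get), scores

    # X threatens the 0-4-8 diagonal; O's win rate should peak at 8.
    print(best_move(list('X.O.X....'), 'O'))

If the playout statistics disagree with a position you can read out by
hand, the bug is in the playouts or the scoring, not in the strategic
evaluation.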

-- 
Petr "Pasky" Baudis
A lot of people have my books on their bookshelves.
That's the problem, they need to read them. -- Don Knuth


Re: [computer-go] Great Wall Opening by Bruce Wilcox

2009-10-17 Thread Petr Baudis
On Sat, Oct 17, 2009 at 08:36:13AM -0400, Don Dailey wrote:
> 2009/10/17 Petr Baudis 
> 
> > On Fri, Oct 16, 2009 at 08:55:34PM +0200, "Ingo Althöfer" wrote:
> > > In the year 2000 I bought the book
> > > "EZ-GO: Oriental Strategy in a Nutshell",
> > > by Bruce and Sue Wilcox. Ki Press; 1996.
> > >
> > > I can only recommend it for the many fresh ideas.
> > > A few days ago I found time to read it again.
> > >
> > > This time I was impressed by Bruce Wilcox's strange
> > > opening "Great Wall", where Black starts with a loose
> > > wall made of 5 stones, spanning the whole board.
> > >
> > > Bruce proposes to play this setup as a surprise weapon,
> > > even against stronger opponents.
> > >
> > > Now I made some autoplay tests, starting from the end position
> > > given in the appendix of this mail.
> > > * one game with Leela 3.16; Black won.
> > > * four games with MFoG 12.016; two wins each for Black and White.
> > > So there is some indication that the Great Wall works even
> > > for bots, which are not affected by psychology.
> >
> > In general, and especially in an environment as stochastic as MCTS, these are
> > awfully small samples. To get even within a +-10% confidence interval, you
> > need at least 100 (that is, ONE HUNDRED) games. Otherwise the results
> > aren't statistically meaningful at all, as I have so often painfully
> > discovered myself ;-) - they can be too heavily distorted.
> >
> 
> 100 games doesn't even tell you much unless the difference is pretty large.

Well, this is simple math. With 100 Bernoulli trials, your
95% confidence interval is at ~ +-10% if your win rates are around 50%.
Of course, if the results you want to compare are closer together than
20%, you will need more trials. :-)

When I'm too lazy to compute this myself, or for some reason don't use
gogui-twogtp, which computes the error (confidence_interval/1.96) for
me, I find http://statpages.org/confint.html pretty handy for quick
calculations.
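
The exact (Clopper-Pearson) interval is also just a few lines of Python
if scipy is at hand - a sketch, assuming that page computes the same
kind of exact binomial interval:

    from scipy.stats import beta

    # Exact (Clopper-Pearson) 95% interval for k wins in n games.
    def exact_ci(k, n, conf=0.95):
        a = (1.0 - conf) / 2.0
        lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1.0 - a, k + 1, n - k) if k < n else 1.0
        return lo, hi

    print(exact_ci(50, 100))  # roughly (0.40, 0.60): the +-10% above
    print(exact_ci(3, 5))     # roughly (0.15, 0.95): 3 wins in 5 games
                              # (the sample above) tells you almost nothing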

(To convert win rates to Elo differences, I have found
http://www.chesselo.com/probabil.html useful, but I don't find Elo too
useful for testing basic improvements, since I only compare win rates
against a single reference player.)
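
The conversion itself is just the logistic Elo model inverted - a
two-line sketch:

    import math

    # Win rate p against a reference player -> Elo difference.
    def winrate_to_elo(p):
        return -400.0 * math.log10(1.0 / p - 1.0)

    print(winrate_to_elo(0.55))  # ~ +35 Elo
    print(winrate_to_elo(0.60))  # ~ +70 Elo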

-- 
Petr "Pasky" Baudis
A lot of people have my books on their bookshelves.
That's the problem, they need to read them. -- Don Knuth