On Thu, 2 Apr 2009, Erik van der Werf wrote:
On Wed, Apr 1, 2009 at 9:03 PM, Matthew Woodcraft wrote:
Erik van der Werf wrote:
Jonas Kahn wrote:
No, there is no danger. That's the whole point of weighting with N_{s,a}.
N_{s,a} = the number of times node s has been visited, starting with move a.
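For readers following along, one standard place N_{s,a} enters is the UCB
selection rule in the tree; a minimal sketch under that assumption, not any
particular program's code (names are hypothetical):

import math

def ucb_select(children, c=1.0):
    # children: hypothetical objects with .wins and .visits (N_{s,a}).
    n_s = sum(ch.visits for ch in children)   # total visits of node s
    def ucb(ch):
        if ch.visits == 0:
            return float('inf')               # try every move at least once
        return ch.wins / ch.visits + c * math.sqrt(math.log(n_s) / ch.visits)
    return max(children, key=ucb)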
On Tue, 31 Mar 2009, Matthew Woodcraft wrote:
Jonas Kahn wrote:
You might be interested in this article, for a very complete and tested
answer. There is also the idea of grouping, but a good part of the effect
seems to me to come from giving a heuristic pre-value to moves, which might
be done more efficiently otherwise:
eprints.pascal-network.org/archive/4571/01/8057.pdf
Although professionals Tei and Aoba explained the match on the front
stage with a projection, the game was so complicated that I
couldn't see which side was winning until near the end. The other semi-final
match, my Fudo Go vs. Katsunari, was also shown on the screen, but in a
small picture at the upper right.
Part of the problem stems from the fact that playouts are weak, and more
specifically notably weaker than the program itself.
To begin with, one consequence is that most areas of the board look less
settled to the playouts than they should. This entails, I think, a preference
for probable points over sure points.
Wasn't it today that Crazy Stone had a match against a professional
player, during the FIT2008 conference at Keio University?
Does anyone know the result and if the game is available somewhere?
Jonas
Congratulations to the MoGo team!
Twenty years from now, in ``a computer go history'':
August 7th, 2008: first victory of a computer against a pro with a 9-stone
handicap.
By the way, the surge in strength of the 800-processor version, with respect
to the (old) quad-core MogoBot, seemed relatively low when compared to
>
> So I believe a better approach is a heavy-playout approach with NO
> tree. Instead, rules would evolve based on knowledge learned from each
> playout - rules that would eventually turn uniformly random moves into
> highly directed ones. All-moves-as-first teaches us that in the
> general
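For readers who haven't met it, a minimal sketch of the all-moves-as-first
bookkeeping mentioned above (names are hypothetical): every distinct
(colour, move) pair appearing in a playout is credited as if it had been
played first.

def update_amaf(stats, playout_moves, winner):
    # stats: dict (colour, move) -> [wins, visits], shared across playouts.
    seen = set()
    for colour, move in playout_moves:
        if (colour, move) in seen:
            continue                          # credit each pair only once
        seen.add((colour, move))
        entry = stats.setdefault((colour, move), [0, 0])
        entry[0] += 1 if colour == winner else 0
        entry[1] += 1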
> > By contrast, you
> > should test (in the tree) a kind of move that is either good or average,
> > but not either average or bad, even if it's the same amount of
> > information. In the tree, you look for the best move. Near the root at
> > least; when going deeper and the evaluation being less
On Wed, Apr 02, 2008 at 02:13:45PM +0100, Jacques Basaldúa wrote:
> Jonas Kahn wrote:
>
> > I guess you have checked that, with your rules for getting probability
> > distributions out of gammas, the mean of the probability of your move 1
> > was the one that you observed
Hi Jacques
>
> No. For a reason I don't understand, I get something like:
>
> Distribution fit expected 0.1 found 0.153164
> Distribution fit expected 0.2 found 0.298602
> Distribution fit expected 0.3 found 0.433074
> Distribution fit expected 0.4 found 0.551575
> Distribution fit expected 0.5 fo
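For context, the usual way to turn gammas into a move distribution is
Bradley-Terry normalisation; a sketch of the setup being tested above (my
assumption about what the fit should look like, not Jacques' actual code):

import random

def move_distribution(gammas):
    # gammas: dict move -> positive strength (e.g. product of pattern gammas).
    # Bradley-Terry probabilities: p_i = gamma_i / sum_j gamma_j.
    total = sum(gammas.values())
    return {m: g / total for m, g in gammas.items()}

def sample_move(gammas):
    # Draw one move proportionally to its gamma.
    total = sum(gammas.values())
    r = random.uniform(0.0, total)
    acc, last = 0.0, None
    for move, g in gammas.items():
        acc += g
        last = move
        if acc >= r:
            return move
    return last    # guard against floating-point rounding

With this sampling rule, the observed frequency of a move should match its
computed probability over many draws, which is what the fit above checks.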
I think there was some confusion in Don's post on ``out of atari'' in
play-outs.
For one thing, I do not agree with the maximal-information argument.
Testing ``out of atari'' moves is worthwhile not because they might be good
or might be bad, but merely because they might be good. By contrast, you
should test (in the tree) a kind of move that is either good or average,
but not either average or bad, even if it's the same amount of information.
>> Typically, how many parameters do you have to tune? Real or two-level?
>
> I guess I have 10 real-valued and 10 binary ones. There are probably a lot
> of things that are hard-coded and could be parameterized.
>
> Here I am also completely ignoring playouts that have hundreds of hand-tuned
> parameters.
On Tue, Mar 11, 2008 at 09:05:01AM +0100, Magnus Persson wrote:
> Quoting Don Dailey <[EMAIL PROTECTED]>:
>>
>> When the child nodes are allocated, they are done all at once with
>> this code - where cc is the number of fully legal child nodes:
>
> In Valkyria3 I have "supernodes" that contain an
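Don's code itself was cut from this excerpt; as a sketch of the
batch-allocation idea it describes (hypothetical names), with cc being the
number of fully legal children:

class Node:
    __slots__ = ('move', 'wins', 'visits', 'children')
    def __init__(self, move):
        self.move, self.wins, self.visits = move, 0, 0
        self.children = None                  # allocated lazily, all at once

def expand(node, legal_moves):
    # cc = len(legal_moves): all child nodes are created in one batch,
    # rather than one at a time as each move is first visited.
    node.children = [Node(m) for m in legal_moves]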
On Mon, Mar 10, 2008 at 01:03:02PM -0700, Christoph Birk wrote:
> On Mon, 10 Mar 2008, Petr Baudis wrote:
>>> MoGo displays the depth of the principal variation in the stderr stream.
>>
>> I have been wondering, does that include _any_ nodes, or only those
>> above a certain number of playouts? What
On Mon, Mar 10, 2008 at 02:33:03AM -0400, Michael Williams wrote:
> Jonas Kahn wrote:
>> out, kos can go on for a long time. I don't know what depth is attained in
>> the tree (by the way, I would really like to know), but I doubt it is that
>
> MoGo displays the depth of the
> I think the general outline is that you pre-test groups first to see if
> a self-atari move is "interesting." It's worthy of additional
> consideration if the stones it is touching have limited liberties and
> the group you self-atari is relatively small. Then you could go on to
> other tests
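A sketch of that pre-test; the board API and the thresholds here are
hypothetical, not any program's actual code:

def interesting_self_atari(board, move, colour, max_group=6):
    # Worth a closer look only if the group we sacrifice is small and
    # the adjacent enemy stones are themselves short of liberties.
    group = board.group_after(move, colour)    # our stones after the move
    if len(group) > max_group:
        return False                           # too big to sacrifice
    for enemy in board.adjacent_enemy_groups(group, colour):
        if board.liberties(enemy) <= 2:        # touching weak enemy stones
            return True                        # e.g. nakade or capture race
    return False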
> But correct ko-threat play has nothing to do with the playout part:
> since it is a strategic concept that involves global understanding, it is
> handled by the UCT tree part.
Yes and no.
Theoretically, that's the work of the UCT part. But, as Steve pointed
out, kos can go on for a long time. I
There is much high-level data to be found within the MC runs, such as
whether a group is alive or not, etc.
Now, I don't know whether it is easy to inject it back into the
simulations.
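One common way to extract such data is an ownership map; a minimal sketch,
assuming a playout function that reports the final owner of each point
(both names are hypothetical):

def ownership_map(position, run_playout, n=1000):
    # Fraction of playouts in which Black ends up owning each point.
    counts = {p: 0 for p in position.points}
    for _ in range(n):
        final = run_playout(position)
        for p in position.points:
            counts[p] += final.owner(p) == 'black'
    return {p: c / n for p, c in counts.items()}  # ~1: Black alive, ~0: White

Points whose ownership is close to 0 or 1 behave like settled groups;
values near 0.5 mark the unsettled areas.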
Another approach (not excluding the first one) would be to gather much
lower-level data.
It's especially sad that t
> I don't see that, but then again I am not a very strong player
> myself. What I notice is that it plays very "normal" until it's
> pretty obvious that it's losing, not just when it varies slightly from
> 50% but when it doesn't vary much from zero. However, it does play
> more desperately
> From my observation, MC chooses good moves if and only if the winning
> rate is near 50%. Once it starts losing, it plays bad moves. Surely
> it's an illusion, but it helps to prevent them.
If it's more important to avoid being too pessimistic (i.e., low estimated
winning rates), there are two ways
> # One question: where does _aya_ come from, or what does it stand for? If my
> guess is correct, you are confusing Hiroshi, author of Aya, with me, Hideki,
> author of GGMC :). I'm sorry if I'm wrong.
I did. Sorry for the confusion. :(
Jonas
> delta_komi = 10^(K * (number_of_empty_points / 400 - 1)),
> where K is 1 if winning and 2 if losing. Also, if the expected
> winning rate is between 45% and 65%, the komi is unmodified.
There's one thing I don't like at all there: you could get a positive
evaluation when losing, and hence play conservatively.
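For reference, the quoted rule transcribed directly as code; the sign
convention (positive meaning the program hands itself a harder target) is
my assumption, since the quote doesn't state it:

def delta_komi(empty_points, winning_rate):
    # Comfort zone: no adjustment between 45% and 65%.
    if 0.45 <= winning_rate <= 0.65:
        return 0.0
    k = 1 if winning_rate > 0.65 else 2         # K = 1 winning, K = 2 losing
    d = 10 ** (k * (empty_points / 400.0 - 1))
    return d if winning_rate > 0.65 else -d     # assumed sign handling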
> http://ewh.ieee.org/cmte/cis/mtsc/ieeecis/tutorial2007/Bruno_Bouzy_2007.pdf
>
> Page 89, "which kind of outcome". This method is better than the above
> and similar to what Jonas seems to propose. The improvement is minor.
By looking at their proposal (45 * win + score), in contrast to mine,
th
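For concreteness, the quoted (45 * win + score) outcome as a one-liner; the
side convention (score taken from the playing side's viewpoint) is assumed:

def blended_outcome(score, w=45):
    # Blend of binary result and final margin, as quoted above.
    win = 1 if score > 0 else 0
    return w * win + score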
> These ideas are all old,
I never said they were new. I wanted to give a mathematical argument for
them.
What would have been new would have been methods with filters applied to
the \hat{p}_i. However, though I am pretty sure I could make them more
efficient with little data, that's certainly not
> You have basically 2 cases when losing. One case is that the program
> really is busted and is in a dead-lost position. The other case is
> that the program THINKS it's lost but really is winning (or at least has
> excellent chances). In the first case, we are simply playing for a
> miracle
The professional player who commented on the game between Katsunari and
Crazy Stone thought that at the end of the fuseki, Katsunari was ahead.
I wonder: even if it might not be optimal, does Crazy Stone play what is
best for it, that is, what it knows best how to use?
I mean, if Crazy Stone played aga
> I experimented with something similar a while ago, using the
> publicly available MoGo and manipulating komi between moves.
>
> If its win probability fell below a certain threshold (and the move
> number wasn't too high), I told it to play on the assumption that it
> would receive a few points more
> The idea of using f(score) instead of sign(score) is interesting. Long
> ago, I tried tanh(K*score) on 9x9 (that was before the 2006 Olympiad, so
> it may be worth trying again), and I found that the higher K, the stronger
> the program. Still, I believe that other f may be worth trying.
In f
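A sketch of the f(score) family under discussion; sign(score) and
tanh(K*score) are both instances, and larger K pushes tanh(K*score) toward
sign(score), consistent with the quoted finding that higher K was stronger:

import math

def outcome(score, k=None):
    # k = None: plain win/loss, i.e. sign(score).
    # Otherwise tanh(k * score), which approaches sign(score) as k grows.
    if k is None:
        return 1.0 if score > 0 else -1.0
    return math.tanh(k * score)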
Hi there
I am new here, but have read the list for a few months.
I am a mathematician, finishing my PhD on quantum statistics (that is,
statistics on quantum objects, quantum information, etc.).
So do not expect me to write any code, but I could have suggestions for
heuristics in the choice of moves