Hi,
darkforest lost against Koichi Kobayashi with a 3-stone handicap.
The next game, Zen vs. Kobayashi, will also be played with a 3-stone handicap.
Hiroshi Yamashita
What are good programs for playing Go at different board sizes?
Many Faces has 7x7 through 19x19, including the even-numbered sizes, so that
covers me for a lot. Whereas when buying CrazyStone HD on the iPad, I was
disappointed that only 9x9, 13x13, and 19x19 were in there - would it have
been that difficult to support the other sizes?
Conv nets should be robust. In the image-processing domain, these are feature
detectors (shapes, in the case of Go) that are invariant to translation (moving
a shape left/right/up/down along the board). Enlarging the board wouldn't put
the bot at a disadvantage in evaluating local positions.
On Tuesday, March 22, 2016,
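A minimal sketch of that invariance claim, in Python with NumPy/SciPy (the 2x2
block "shape" and the toy detector kernel are illustrative assumptions, not
anything from a real engine):

# Sketch: a convolutional filter responds identically to a local shape
# wherever it sits, and the response is unchanged on a larger board.
import numpy as np
from scipy.signal import convolve2d

def place_shape(board, row, col):
    """Place a 2x2 block of stones (an arbitrary 'local shape')."""
    board[row:row + 2, col:col + 2] = 1.0
    return board

kernel = np.ones((2, 2))  # toy detector tuned to the 2x2 block

small = place_shape(np.zeros((19, 19)), 3, 3)     # shape near one corner
large = place_shape(np.zeros((29, 29)), 20, 20)   # same shape, bigger board

resp_small = convolve2d(small, kernel, mode="valid").max()
resp_large = convolve2d(large, kernel, mode="valid").max()
print(resp_small == resp_large)  # True: same local evaluation either way

Of course this only covers local evaluation; whole-board judgment is a
different matter, as the replies below point out.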
On 3/22/2016 5:21 PM, Lukas van de Wiel wrote:
It would reduce AlphaGo's strength, because there is less training
material in the form of high-dan games to train the policy network.
It would also reduce the skill of a human opponent, because (s)he
would have less experience on a larger board, just as AlphaGo would.
It would reduce AlphaGo's strength, because there is less training material
in the form of high-dan games to train the policy network.
It would also reduce the skill of a human opponent, because (s)he would
have less experience on a larger board, just as AlphaGo would.
It would be fun to see which can adapt better.
On 3/22/2016 11:25 AM, Tom M wrote:
I suspect that even with a similarly large training sample for
initialization that AlphaGo would suffer a major reduction in apparent
skill level.
I think a human would also.
The CNN would require many more layers of convolution;
the valuation of positions would be much more uncertain.
FYI. We have translated 3 posts by Li Zhe 6p into English.
https://massgoblog.wordpress.com/2016/03/11/lee-sedols-strategy-and-alphagos-weakness/
https://massgoblog.wordpress.com/2016/03/11/game-2-a-nobody-could-have-done-a-better-job-than-lee-sedol/
https://massgoblog.wordpress.com/2016/03/15/bef
> ...
> Pro players who are not familiar with MCTS bot behavior will not see this.
I stand by this:
>> If you want to argue that "their opinion" was wrong because they don't
>> understand the game at the level AlphaGo was playing at, then you can't
>> use their opinion in a positive way either.
Hi Darren,
"Darren Cook"
> ... But, there were also numerous moves where
> the 9-dan pros said that, in *their* opinion, the moves were weak/wrong.
> E.g. wasting ko threats for no reason. Moves even a 1p would never make.
>
> If you want to argue that "their opinion" was wrong because they don't
> understand the game at the level AlphaGo was playing at, then you can't
> use their opinion in a positive way either.
"Lucas, Simon M"
> my point is that I *think* we can say more (for example
> by not treating the outcome as a black-box event,
> but by appreciating the skill of the individual moves)
* Human professional players were full of praise for some of
AlphaGo's moves, for instance move 37 in game 2.
> ... we witnessed hundreds of moves vetted by 9dan players, especially
> Michael Redmond's, where each move was vetted.
This is a promising approach. But, there were also numerous moves where
the 9-dan pros said that, in *their* opinion, the moves were weak/wrong.
E.g. wasting ko threats for no reason. Moves even a 1p would never make.
Ko is what makes this game difficult, from a theoretical point of view.
I suspect ko+unresolved groups is where it's at.
s.
On Mar 22, 2016 11:25 AM, "Tom M" wrote:
> I suspect that even with a similarly large training sample for
> initialization that AlphaGo would suffer a major reduction in apparent
> skill level.
This is somewhat moot - if any moves had been significantly and obviously
weak to any observers, the results wouldn't have been 4-1.
I.e., one bad move out of 5 games would give roughly the same strength
information as one loss out of 5 games; consider that the kibitzing was
being done in real time.
I suspect that even with a similarly large training sample for
initialization that AlphaGo would suffer a major reduction in apparent
skill level. The CNN would require many more layers of convolution;
the valuation of positions would be much more uncertain; play in the
corner, edges, and center …
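A back-of-the-envelope way to see why more layers would be needed (my own
sketch; the "full-board coverage" criterion is an assumption, not something
from the post): with stacked 3x3 convolutions the receptive field grows by 2
per layer, so the layer count scales with board size.

# Receptive field of L stacked 3x3 convolutions (stride 1) is 2*L + 1,
# so "seeing" across an N x N board takes roughly (N - 1) / 2 layers.
def layers_needed(board_size: int, kernel: int = 3) -> int:
    growth = kernel - 1                    # receptive field grows by this per layer
    return -(-(board_size - 1) // growth)  # ceiling division

for n in (9, 13, 19, 25, 37):
    print(f"{n}x{n} board: ~{layers_needed(n)} conv layers for full coverage")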
I think you are reinforcing Simon's original point; i.e., using a more
fine-grained approach to statistically approximate AlphaGo's ELO, where
fine-grained means the degree of vetting per move and/or per series of
moves. That is a substantially larger sample size, and each sample will
have a pretty high degree of …
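A toy illustration of the sample-size point (my own numbers, purely
illustrative): treat each vetted move as a Bernoulli trial of "judged
pro-level or not"; the standard error shrinks like 1/sqrt(n), so hundreds of
vetted moves constrain strength far more tightly than five game results.

# Compare the uncertainty of an estimate from 5 game outcomes vs. from
# ~1000 vetted moves. The 0.8 rate and the counts are assumed, not real data.
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of a Bernoulli proportion estimated from n samples."""
    return math.sqrt(p * (1 - p) / n)

print(standard_error(0.8, 5))     # ~0.179 (4 wins in 5 games)
print(standard_error(0.8, 1000))  # ~0.013 (800 of 1000 moves judged pro-level)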
I am sorry, but I think this discussion is a bit pointless.
While I write these 3 lines and you read them, AlphaGo got 20 ELO
points stronger. :-)
Thomas
On Tue, 22 Mar 2016, Lucas, Simon M wrote:
Still an interesting question is how one could make
more powerful inferences by observing the skill of
the players in each action they take rather than just
the final outcome of each game.
Given the minimal sample size, bothering over this question won't amount to
much. I think the proper response is that no one thought we'd see this
level of play at this point in our AI efforts and point to the fact that we
witnessed hundreds of moves vetted by 9dan players, especially Michael
Redmond's, where each move was vetted.
Another interesting question is to judge the bot's strength
by watching the facial gestures and body language of Lee Sedol
with each move...
On Tue, Mar 22, 2016 at 11:46 AM, Álvaro Begué wrote:
> On Tue, Mar 22, 2016 at 1:40 PM, Nick Wedd wrote:
>> On 22 March 2016 at 17:20, Álvaro Begué wrote: …
On Tue, Mar 22, 2016 at 1:40 PM, Nick Wedd wrote:
> On 22 March 2016 at 17:20, Álvaro Begué wrote:
>> A very simple-minded analysis is that, if the null hypothesis is that
>> AlphaGo and Lee Sedol are equally strong, AlphaGo would do as well as we
>> observed or better 15.625% of the time. That's a p-value that even social
>> scientists don't get excited about. :)
Still an interesting question is how one could make
more powerful inferences by observing the skill of
the players in each action they take rather than just
the final outcome of each game.
If you saw me play a single game of tennis against Federer
you’d have no doubt as to which way the next 100 games would go.
On 22 March 2016 at 17:20, Álvaro Begué wrote:
> A very simple-minded analysis is that, if the null hypothesis is that
> AlphaGo and Lee Sedol are equally strong, AlphaGo would do as well as we
> observed or better 15.625% of the time. That's a p-value that even social
> scientists don't get excited about. :)
A very simple-minded analysis is that, if the null hypothesis is that
AlphaGo and Lee Sedol are equally strong, AlphaGo would do as well as we
observed or better 15.625% of the time. That's a p-value that even social
scientists don't get excited about. :)
Álvaro.
On Tue, Mar 22, 2016 at 12:48 PM
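The fair-coin arithmetic is easy to check (a quick sketch; note that 5/32 =
15.625% is the chance of exactly 4 wins in 5, while "4 or more wins" comes to
6/32 = 18.75%):

# Fair-coin null hypothesis probabilities for a 4-1 result in 5 games.
from math import comb

n, p = 5, 0.5
exactly_4 = comb(n, 4) * p**4 * (1 - p)
at_least_4 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (4, 5))
print(f"P(exactly 4 of 5): {exactly_4:.5f}")   # 0.15625
print(f"P(4 or more of 5): {at_least_4:.5f}")  # 0.18750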
Statistical significance requires a null hypothesis... I think it's
probably easiest to ask the question: if I assume an ELO difference of x,
how likely is a 4-1 result?
Turns out that 220 to 270 ELO has a 41% chance of that result.
>= 10% is -50 to 670 ELO
>= 1% is -250 to 1190 ELO
My numbers …
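Those figures reproduce under the standard logistic Elo model (an assumption;
the post doesn't say which model was used):

# "ELO difference -> chance of exactly 4-1" under the logistic Elo model.
from math import comb

def win_prob(elo_diff: float) -> float:
    """Per-game win probability for the stronger player."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

def p_4_1(elo_diff: float) -> float:
    """Probability of exactly 4 wins in 5 games."""
    p = win_prob(elo_diff)
    return comb(5, 4) * p**4 * (1 - p)

for d in (0, 220, 250, 270, 670):
    print(f"{d:4d} ELO: P(4-1) = {p_4_1(d):.3f}")
# Peaks around 0.41 near 220-270 ELO; ~0.10 at 670 ELO; and 0.15625 at
# 0 ELO, which matches the fair-coin "exactly 4-1" number above.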
> I'm not sure if we can say with certainty that AlphaGo is a significantly
> better Go player than Lee Sedol at this point. What we can say with
> certainty is that AlphaGo is in the same ballpark and at least roughly
> as strong as Lee Sedol. To me, that's enough to be really huge on its
> own account.
my point is that I *think* we can say more (for example
by not treating the outcome as a black-box event,
but by appreciating the skill of the individual moves)
On Tue, Mar 22, 2016 at 04:00:41PM, Lucas, Simon M wrote:
> With AlphaGo winning 4 games to 1, from a simplistic
> stats point of view (with the prior assumption of a fair
> coin toss) you'd not be able to claim much statistical
> significance, yet most (me included) believe that
> AlphaGo is …
Simon,
There's no argument better than evidence, and no evidence available to us
other than *all* of the games that AlphaGo has played publicly.
Among two humans, a 4-1 result wouldn't indicate any more or less than this
4-1 result, but we'd already have very strong elo-type information about
both.
Hi all,
I was discussing the results with a colleague outside
of the Game AI area the other day when he raised
the question (which applies to nearly all sporting events,
given the small sample size involved)
of statistical significance - suggesting that on another week
the result might have been 4-1 the other way.
SGF files have been made available for the 2nd-day Finals games:
http://jsb.cs.uec.ac.jp/~igo/results_2ndday/final.zip
Tokumoto
On Sun, Mar 20, 2016 at 11:27 PM, Hideki Kato wrote:
> Dear Ingo,
> >Hi Hiroshi,
> >thanks for the many updates.
> >On another site I read that the bits on r…