On Wed, Mar 09, 2016 at 09:05:48PM -0800, David Fotland wrote:
> I predicted Sedol would be shocked. I'm still rooting for Sedol. From
> Scientific American interview...
>
> Schaeffer and Fotland still predict Sedol will win the match. “I think the
> pro will win,” Fotland says, “But I think t
This time I think the game was tougher, though I'm too weak to judge. The
sacrifice of a fistful of stones at the end does puzzle me, but again I'm way
too weak to analyze it.
It seems Lee Sedol will be lucky if he wins a game
2016-03-10 12:39 GMT+02:00 Petr Baudis :
> On Wed, Mar 09, 2016 at 09:05:48PM -0800, David Fotla
In the press conference (https://youtu.be/l-GsfyVCBu0?t=5h40m00s), Lee
Sedol said that while he saw some questionable moves by AlphaGo in the
first game, he feels that the second game was a near-perfect play by
AlphaGo and he did not feel ahead at any point of the game.
On Thu, Mar 10, 2016 at 12:
Very impressive results so far!
If it's going to be a clean sweep, I hope we will get to see some handicap
games :-)
Erik
On Thu, Mar 10, 2016 at 12:04 PM, Petr Baudis wrote:
> In the press conference (https://youtu.be/l-GsfyVCBu0?t=5h40m00s), Lee
> Sedol said that while he saw some questiona
Hello,
Von: "Erik van der Werf"
> Very impressive results so far!
indeed, almost unbelievable.
> If it's going to be a clean sweep, I hope we will get to see some handicap
> games :-)
I have another proposal, IF a clean sweep does happen:
There was an announcement three days ago by
On 10.03.2016 00:45, Hideki Kato wrote:
such as solving complex semeais and double kos, aren't solved yet.
To find out AlphaGo's weaknesses there are, in particular:
- this match
- careful analysis of its games
- AlphaGo playing on artificial problem positions incl. complex kos,
complex
I was surprised that Lee Sedol didn't take the game a bit further to probe
AlphaGo and see how it responded to [...complex kos, complex ko fights,
complex sekis, complex semeais, ..., multiple connection problems, complex
life and death problems] as ammunition for his next game. I think he was so
as
I just realized that game 2 happened last night. ARGH! Stupid timezone
error.
On Thu, Mar 10, 2016 at 9:19 AM, Jim O'Flaherty
wrote:
> I was surprised that Lee Sedol didn't take the game a bit further to probe
> AlphaGo and see how it responded to [...complex kos, complex ko fights,
> complex sek
> I was surprised that Lee Sedol didn't take the game a bit further to probe
> AlphaGo and see how it responded to [...complex kos, complex ko fights,
> complex sekis, complex semeais, ..., multiple connection problems, complex
> life and death problems] as ammunition for his next game.
In fact in
> In fact in game 2, white 172 was described [1] as the losing move,
> because it would have started a ko. ...
"would have started a ko" --> "should have instead started a ko"
One question is whether Lee Sedol knows about these weaknesses.
Another question is whether he will exploit those weaknesses.
Lee has a very simple style of play that seems less ko-oriented
than other players, and this may play into the hands of Alpha.
Michael Wing
I was surprised that Lee Sedol
Congratz to AlphaGo, once more!
This is getting scary! :-)
Lukas
On Fri, Mar 11, 2016 at 12:40 AM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:
> Hello,
>
>
> Von: "Erik van der Werf"
> > Very impressive results so far!
>
> indeed, almost unbelievable.
>
>
> > If it's going to be a clean sweep
Congratulations indeed.
Although I must admit I have mixed feelings about this, that it is Google,
using enormous resources, that got there first.
marco
> On 10 Mar 2016, at 19:38, Lukas van de Wiel
> wrote:
>
> Congratz to AlphaGo, once more!
> This is getting scary! :-)
>
> Lukas
>
>>
The same here, with other people having built the foundations of go AIs,
and going from neural networks to MCTS, and now back-ish again...
But that is how science works. Eventually these two wins are the
reward of decades of accumulated work by many people working on go AI.
AlphaGo is the Che
On 10.03.2016 16:48, Darren Cook wrote:
in game 2, black 43 and 45 were described as "a little
heavy". It did seem (to my weak eyes) to turn out poorly. I'm curious if
this was a real mistake by AlphaGo, or if it was already happy it was
leading, and this was the one it felt led to the safest wi
Yes, but they are not some random cherry-picking third party; have a look
at the top authors of the paper - David Silver, Aja Huang, Chris Maddison..
Regards,
Josef
Dne čt 10. 3. 2016 19:47 uživatel Lukas van de Wiel <
lukas.drinkt.t...@gmail.com> napsal:
> The same here, with other people havin
I doubt that the human-perceived weaknesses in AlphaGo are really
weaknesses - after the second game it seems more like AlphaGo has
"everything under control".
Professional players will still find moves to criticize, but I want to see
proof that any such move would change the fate of the game :-)
My 2 cents:
Recent strong computer programs never lose by a few points. They either get
crushed before the endgame starts (because when clearly behind they play more
desperate and weaker moves, since they mainly get negative feedback from
their search with mostly losing branches and ri
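A toy illustration of that behaviour (all numbers invented, not taken from any
real engine): when a program maximizes win probability rather than expected
score, being clearly behind makes it prefer a desperate high-variance line over
a solid line that only loses by a few points.

# Toy Python example: move choice by expected score vs. win probability.
candidates = {
    # move: list of (final margin, probability); negative margin = loss
    "solid_endgame":      [(-4.0, 0.95), (+1.0, 0.05)],
    "desperate_invasion": [(-30.0, 0.85), (+2.0, 0.15)],
}

def expected_score(outcomes):
    return sum(margin * p for margin, p in outcomes)

def win_probability(outcomes):
    return sum(p for margin, p in outcomes if margin > 0)

for move, outcomes in candidates.items():
    print(move, expected_score(outcomes), win_probability(outcomes))

# A score maximizer picks "solid_endgame" (small expected loss);
# a win-rate maximizer picks "desperate_invasion" (15% win chance),
# which is why such programs tend to either win or lose big.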
On Thu, Mar 10, 2016 at 07:20:11PM +, Josef Moudrik wrote:
> Yes, but they are not some random cherry picking third party; have a look
> on the top authors of the paper - David Silver, Aja Huang, Chris Maddison..
Also, they aren't merely wrapping engineering around existing science
and putting
Quick question - how, mechanically, is the opening being handled by AlphaGo
and other recent very strong programs? Giant hand-entered or
game-learned joseki books?
Thanks,
steve
On Mar 10, 2016 12:23 PM, "Thomas Wolf" wrote:
> My 2 cent:
>
> Recent strong computer programs never lose by a few
From reading their article, AlphaGo makes no distinction at all between
start, middle and endgame.
Just like any other position, the empty (or almost empty, or almost full)
board is just another game position in which it chooses (one of) the most
promising moves in order to maximize its chance of w
But at the start of the game the statistical learning of infinitesimal
advantages of one opening move compared to another opening move is less
efficient than the learning done in the middle and end game.
On Thu, 10 Mar 2016, Sorin Gherman wrote:
From reading their article, AlphaGo makes no di
If that's the case, then they should be able to give opinions on best first
moves, best first two move combos, and best first three move combos. That'd
be interesting to see. (Top 10 or so of each).
s.
On Mar 10, 2016 12:37 PM, "Sorin Gherman" wrote:
> From reading their article, AlphaGo makes n
For that reason I guess that AlphaGo's opening style is mostly influenced by
the net that is trained on strong human games, while as the game progresses
the MC rollouts have more and more influence in choosing a move.
Is my understanding way off?
On Mar 10, 2016 12:40 PM, "Thomas Wolf" wrote:
> But
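For what it's worth, my reading of the paper is that the blend of value network
and rollouts at an MCTS leaf uses a fixed weight (lambda = 0.5 in the published
version) rather than one that grows during the game; what does shrink over time
is the influence of the policy-network prior at heavily visited nodes. A rough
sketch of that leaf evaluation, with value_net and fast_rollout as placeholder
callables rather than real APIs:

# Sketch of AlphaGo-style MCTS leaf evaluation (per my reading of the paper).
LAMBDA = 0.5  # fixed mixing weight reported in the paper

def evaluate_leaf(position, value_net, fast_rollout):
    """Blend the learned value estimate with a fast rollout outcome."""
    v = value_net(position)      # estimated win probability from the value network
    z = fast_rollout(position)   # outcome of a quick policy-guided playout
    return (1.0 - LAMBDA) * v + LAMBDA * z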
The most surprising fact, to me, is that it's possible to apply "reinforce"
on such a large scale. Reinforce is not new, but even with millions of cores
I did not expect this to be possible. I would have assumed that reinforce
would
just produce random noise when applied at such a scale :-)
On Th
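For readers who haven't met it, REINFORCE is just the basic score-function
policy-gradient update; a minimal, self-contained toy (a 2-armed bandit with a
softmax policy, nothing to do with AlphaGo's actual network) shows the rule
that is being scaled up:

# Minimal REINFORCE (policy gradient) on a toy 2-armed bandit.
import math, random

theta = [0.0, 0.0]           # one logit per action
ALPHA = 0.1                  # learning rate
TRUE_REWARD = [0.3, 0.7]     # P(reward = 1) per arm, unknown to the learner

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(5000):
    probs = softmax(theta)
    a = random.choices([0, 1], weights=probs)[0]     # sample an action
    r = 1.0 if random.random() < TRUE_REWARD[a] else 0.0
    # REINFORCE update: theta += alpha * reward * grad log pi(a)
    for i in range(2):
        grad_log = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += ALPHA * r * grad_log

print(softmax(theta))  # should put most of the probability on arm 1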
With at most 2x361 or so different end scores but 10^{XXX} possible different
games, there are, at least in the opening, many moves with the same optimal
outcome. The difference between these moves is not the guaranteed score (they
are all optimal) but the difficulty of playing optimally after that move.
I think we are going to see a case of human professionals having drifted
into a local optimum in at least three areas:
1) Early training around openings is so ingrained in how they acquire
their skill (optimal neural plasticity window) that there has been very little
new discovery around the first thir
Amen to Don Dailey. He would be so proud.
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Jim
O'Flaherty
Sent: Thursday, March 10, 2016 6:49 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Finding Alphago's Weaknesses
I think we are going to see a
According to the paper, AlphaGo did not use an opening book at
all, in the version which played Fan Hui.
Hypothetically, they c
2016-03-11 11:42 GMT+09:00 terry mcintyre :
> Hypothetically, they could have grafted one on. I read a report that the
> first move in game 2 vs. Lee Sedol took only seconds. On the other hand,
> its first move in game 1 took a longer while. We can only speculate.
This is easy to explain. AlphaGo
Not to put too fine a point on it, but there aren't very many two- or
three-move combos on an empty board. As staggering as it is, I'm inclined
to believe without further evidence that there's no book or just a very
light book.
s.
On Mar 10, 2016 7:50 PM, "Seo Sanghyeon" wrote:
> 2016-03-11 11:42
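As noted above, the paper says there was no opening book, but grafting a
shallow one on would be mechanically trivial. A purely hypothetical sketch
(the book entries and the `search` callable are invented for illustration);
a book hit costs a dictionary lookup instead of a full search, which is the
kind of near-instant first move being speculated about:

# Hypothetical opening-book wrapper around a search engine.
# Positions are keyed by the sequence of moves played so far.
OPENING_BOOK = {
    ():            "Q16",   # illustrative entries only, not AlphaGo's choices
    ("Q16",):      "D4",
    ("Q16", "D4"): "Q3",
}

def choose_move(moves_so_far, search):
    """Play from the book while it applies, otherwise fall back to search."""
    book_move = OPENING_BOOK.get(tuple(moves_so_far))
    if book_move is not None:
        return book_move
    return search(moves_so_far)   # normal (expensive) engine search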
Undoubtedly many things happened since October, but Wired article
includes an interesting quote on AlphaGo's time management.
http://www.wired.com/2016/03/googles-ai-wins-first-game-historic-match-go-champion/
"At the lunch prior to the match, Hassabis also said that since
October, he and his tea
He was already in Byo-yomi, so perhaps he didn’t have an accurate count. This
might explain why he looked upset at move 175. He might have realized his
mistake.
David
> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Darren Cook
> Sent
According to the paper "Mastering the Game of Go with Deep Neural Networks
and Tree Search", the main part of both the policy and value network is a
5*5 conv layer followed by eleven 3*3 conv layers. Therefore, after the last
conv layer, the maximum "information propagation length" is (5-1)/2 +
11*(3-1)/2 = 13.
A stack of 11 3x3 convolutional layers and a single 5x5 layer with no
pooling actually corresponds to effectively a 27x27 kernel, which is
obviously large enough to cover the entire board. (Your value of 13 is only
the distance from the center of the filter to the edge).
On Thu, Mar 10, 2016 at 1
Points at the center of the board indeed depend on the full board, but
points near the edge do not.
On Fri, Mar 11, 2016 at 3:03 PM Vincent Zhuang wrote:
> A stack of 11 3x3 convolutional layers and a single 5x5 layer with no
> pooling actually corresponds to effectively a 27x27 kernel, which
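To check the arithmetic in both posts, a tiny sketch that computes the
receptive field of a stack of stride-1 convolutions with no pooling, using
the layer sizes quoted from the paper above:

# Receptive field of a stack of stride-1 convolutions with no pooling.
# Each kxk layer grows the receptive-field radius by (k-1)/2.
def receptive_field(kernel_sizes):
    radius = sum((k - 1) // 2 for k in kernel_sizes)
    return radius, 2 * radius + 1   # (radius from center, full width)

layers = [5] + [3] * 11             # one 5x5 layer, then eleven 3x3 layers
print(receptive_field(layers))      # -> (13, 27): radius 13, i.e. a 27x27
                                    #    window, wide enough to span 19x19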