That looks very interesting. Looking forward to some implementation of this
filtering down to the common ML libs.
On Mon, Mar 19, 2018 at 2:39 PM, Stefan Kaitschick
wrote:
> Is this something LeelaZero might consider using?
> https://arxiv.org/pdf/1803.05407.pdf
> The last diagram is looking ver
>
>
> The basic explanation for why this is not straightforward is that you
> never want your program to consider moves in the direction of
> low-probability wins, no matter how large their margins might be; the
> MC measurement function is very noisy with regard to individual samples.
>
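For anyone who wants to experiment before it lands in the common libs, here is a minimal sketch of the weight-averaging idea from the linked paper (arXiv:1803.05407), not LeelaZero's actual training code; model.weights and train_step are hypothetical stand-ins for whatever your framework exposes:

import numpy as np

def swa_train(model, train_step, n_steps, swa_start, cycle_len):
    swa_weights = None
    n_avg = 0
    for step in range(n_steps):
        train_step(model, step)                       # ordinary SGD/Adam update
        if step >= swa_start and (step - swa_start) % cycle_len == cycle_len - 1:
            snapshot = [np.copy(w) for w in model.weights]
            if swa_weights is None:
                swa_weights = snapshot
            else:                                     # incremental running mean of snapshots
                swa_weights = [(s * n_avg + c) / (n_avg + 1)
                               for s, c in zip(swa_weights, snapshot)]
            n_avg += 1
    return swa_weights                                # averaged weights to deploy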
I do
Just a wild guess, but I assume they'll go for the latest winner of the UEC
Cup as far as AI entrants are concerned.
On Tue, Nov 29, 2016 at 3:46 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:
> Hi Hideki,
>
> that sounds very interesting.
>
> > Nihon Kiin created a new Go tournament, "World G
That sounds very promising. Any chance some of the improvements will filter
down into the current commercial version in the form of update patches?
On Wed, Nov 23, 2016 at 11:03 PM, Hideki Kato
wrote:
> Thanks David.
>
> It's now.
>
> In the same afternoon, Zen vs Yonil Ha 6p was played on KGS a
Aja, of course. Sorry for messing this up.
On Wed, Jun 22, 2016 at 8:27 PM, Michael Markefka
wrote:
> Aya, thank you for giving us some insight into AlphaGo. We are all
> very much looking forward to it
>
> On Wed, Jun 22, 2016 at 5:29 PM, Aja Huang wrote:
>>
>>
>>
Aya, thank you for giving us some insight into AlphaGo. We are all
very much looking forward to it
On Wed, Jun 22, 2016 at 5:29 PM, Aja Huang wrote:
>
>
> 2016-06-22 12:29 GMT+01:00 "Ingo Althöfer" <3-hirn-ver...@gmx.de>:
>>
>> Hi,
>>
>> the timetable for the conference "Computers and Games 2016"
> ...with many parameters, you have enough to train a model with fewer
> parameters.
>
> Álvaro.
>
>
> On Sun, Jun 12, 2016 at 5:52 AM, Michael Markefka <
> michael.marke...@gmail.com> wrote:
>
>> Might be worthwhile to try the faster, shallower policy network as a
Might be worthwhile to try the faster, shallower policy network as an
MCTS replacement if it were fast enough to support enough breadth.
Could cut down on some of the scoring variations that confuse rather
than inform the score expectation.
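As a rough sketch of that idea (board and policy_net are assumed interfaces here, not any engine's real API): drive the playout with the fast policy net, sampling from the top-k moves to keep some breadth.

import random

def policy_playout(board, policy_net, top_k=5, max_moves=400, rng=random):
    for _ in range(max_moves):
        if board.is_terminal():
            break
        probs = policy_net(board)                     # dict: move -> probability
        moves = sorted(probs, key=probs.get, reverse=True)[:top_k]
        weights = [probs[m] for m in moves]
        board.play(rng.choices(moves, weights=weights, k=1)[0])
    return board.score()                              # result fed back into the MCTS backup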
On Sun, Jun 12, 2016 at 10:56 AM, Stefan Kaitschick
wrote:
That is awesome! Looking forward to it!
On Mon, May 16, 2016 at 9:50 AM, Rémi Coulom wrote:
> Hi,
>
> I am very happy to announce that Hajin Lee will play a live commented game
> against Crazy Stone on Sunday, at 8PM Korean time. The game will take place
> on KGS, and she will make live comment
Can I flag this as spam?
On Tue, Apr 19, 2016 at 11:23 PM, djhbrown . wrote:
> 6D out of the blue is no mean achievement,... 60+ years ago, the
> market for gizmos in UK was flooded with cheap Japanese copies of
> European products; but whilst innovation and product quality
> improvement by Euro
Then again, DNNs also manage feature extraction on unlabeled data with
increasing levels of abstraction towards upper layers. Perhaps one
could apply such a specifically trained DNN to artificial board
situations that emphasize specific concepts and examine the network's
activation, trying to map ac
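One possible shape for that kind of activation probing, sketched with PyTorch purely for illustration; the layer name "conv5" is made up, and the constructed positions would be encoded as the usual input planes:

import torch

def probe_activations(model, boards, layer_name="conv5"):
    captured = {}
    def hook(module, inputs, output):
        captured["act"] = output.detach()
    layer = dict(model.named_modules())[layer_name]
    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(boards)             # boards: (N, input_planes, 19, 19) tensor of crafted positions
    handle.remove()
    return captured["act"]        # activations to compare across positions emphasizing one concept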
Not a definite solution yet, but more of a call to action here: Would
anyone be interested in contributing to a well-maintained computer go
news site? I would consider that a useful service that is currently
lacking. I'd be happy to contribute news articles and links.
On Thu, Mar 17, 2016 at 4:16 PM,
This online book by Michael Nielsen is a fantastic resource:
http://neuralnetworksanddeeplearning.com/
It builds everything from the ground up in easily digested chunks. All
the required math is in there, but can be skipped if just a general
understanding and basis for application is desired. High
Hi Petr,
to clarify a bit:
pylearn2 specifically comes with a script to convert a model trained
on a GPU into a version that runs on the CPU. This doesn't work very
well though, and the documentation points that out too. According to
the dev comments, that is down to how Theano, the framework pylearn2
Would be nice to have it as an option. My desktop PC and my laptop
both have CUDA-enabled graphics, and that isn't uncommon anymore.
Also, if you are training on a GPU you can probably avoid a lot of
hassle if you expect to run it on a GPU as well. I don't know how
other NN implementations handle
Hello everyone,
in the wake of AlphaGo using a DCNN to predict the expected winrate of a
move, I've been wondering whether one could train a DCNN for expected
territory or points successfully enough to be of some use (leaving the
issue of win by resignation for a more in-depth discussion). And,
whethe
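One way such a territory net could be set up, as an assumption rather than anything from the AlphaGo paper: a small head on a shared DCNN trunk that predicts per-point ownership in [-1, 1], trained with MSE against the final ownership maps of finished games (PyTorch used only for concreteness).

import torch
import torch.nn as nn

class TerritoryHead(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)   # collapse trunk features to one plane

    def forward(self, features):
        # features: (N, in_channels, 19, 19) from the shared trunk
        return torch.tanh(self.conv(features)).squeeze(1)      # (N, 19, 19) ownership in [-1, 1]

# Expected points would then be the sum of predicted ownership over the board:
# expected_points = TerritoryHead()(features).sum(dim=(1, 2))
loss_fn = nn.MSELoss()     # target: per-point final ownership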
That sounds like it'd be the MSE used as a classification error against the eventual result.
I'm currently not able to look at the paper, but couldn't you use a
softmax output layer with two nodes and take the probability
distribution as the winrate?
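Concretely, the two-node suggestion might look like this; PyTorch and the feature size are illustrative assumptions, not the paper's setup:

import torch
import torch.nn as nn

class WinrateHead(nn.Module):
    def __init__(self, in_features=256):
        super().__init__()
        self.fc = nn.Linear(in_features, 2)            # two logits: [loss, win]

    def forward(self, x):
        return self.fc(x)

head = WinrateHead()
logits = head(torch.randn(4, 256))                     # batch of 4 feature vectors
winrate = torch.softmax(logits, dim=1)[:, 1]           # P(win) per position
# Training would use nn.CrossEntropyLoss against the eventual game result (0/1),
# which is just the log-loss on the winrate itself.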
On Thu, Feb 4, 2016 at 8:34 PM, Álvaro Begué wrote:
> I am no
On Mon, Feb 1, 2016 at 1:44 PM, Hideki Kato wrote:
> I was, btw, really surprised when Zen beat fj with two stones
> handi.
> http://files.gokgs.com/games/2016/1/31/Zen19X-fj.sgf
>
> Hideki
On the DGoB forums fj stated, possibly in jest, that this was an even
game, as he had had a glass of wine f
On Mon, Feb 1, 2016 at 10:19 AM, Darren Cook wrote:
> It seems [1] the smart money might be on Lee Sedol:
In the DeepMind press conferences (
https://www.youtube.com/watch?v=yR017hmUSC4 -
https://www.youtube.com/watch?v=_r3yF4lV0wk ) Demis Hassabis stated
that he was quietly confident.
I assume
I agree.
It might be interesting to set this up a while after the Lee Sedol
matches if Ke Jie still holds the #1 spot at that time. After
beating the best player of the past ten years, beating the currently
best player would in a way complete AlphaGo's victory over current
human Go ability.
On
On Thu, Jan 28, 2016 at 3:14 PM, Stefan Kaitschick
wrote:
> That "value network" is just amazing to me.
> It does what computer go failed at for over 20 years, and what MCTS was
> designed to sidestep.
Thought it worth a mention: Detlef posted about trying to train a CNN
on win rate as well in F
That would make my writing nonsense of course. :)
Thanks for the pointer.
On Thu, Jan 28, 2016 at 12:26 PM, Xavier Combelle
wrote:
>
>
> 2016-01-28 12:23 GMT+01:00 Michael Markefka :
>>
>> I find it interesting that right until he ends his review, Antti only
>> prais
'd be more than satisfied.
On Thu, Jan 28, 2016 at 7:42 AM, Robert Jasiek wrote:
> Congratulations to the researchers!
>
> On 27.01.2016 21:10, Michael Markefka wrote:
>>
>> I really do hope that this also turns into a good analysis and
>> teaching tool for human p
I find it interesting that right until he ends his review, Antti only
praises White's moves, which are the human ones. When he stops, he
even considers a win by White as basically inevitable.
Now Fan Hui either blundered badly afterwards, or, more promisingly, it
could be hard for humans to evaluate
I really do hope that this also turns into a good analysis and
teaching tool for human players. That would be a fantastic benefit from
this advancement in computer Go.
On Wed, Jan 27, 2016 at 9:08 PM, Aja Huang wrote:
> 2016-01-27 18:46 GMT+00:00 Aja Huang :
>>
>> Hi all,
>>
>> We are very excited
wrote:
> I doubt that the illegal moves would fall away since every professional
> would retake the ko... if it was legal
>
>
> On 2015-12-09 4:59, Michael Markefka wrote:
>>
>> Thank you for the feedback, everyone.
>>
>>
>> Regarding the CPU-
> >
>>> > On Tue, Dec 8, 2015 at 5:17 PM Petr Baudis wrote:
>>> >
>>> > > Hi!
>>> > >
>>> > > In case someone is looking for a starting point to actually
>>> > > implement
>>> > > Go r
Hello Detlef,
I've got a question regarding CNN-based Go engines I couldn't find
anything about on this list. As I've been following your posts here, I
thought you might be the right person to ask.
Have you ever tried using the CNN for complete playouts? I know that
CNNs have been tried for move
I would love to have something like this.
I would appreciate some way to configure depth levels and variable
branching factors for move generation as well as scoring playouts
using the NN.
Regards,
Michael
On Wed, Apr 29, 2015 at 3:37 PM, Josef Moudrik wrote:
> Hi!
>
> I am playing around with
I was thinking about bootstrapping possibilities, and wondered whether
it would be possible to use a shallower mimic net for positional
evaluation playouts from a specific depth on, after having generated
positions with a certain branching factor that typically allows the
actual pro move to be included
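A sketch of that bootstrapping scheme, with board, policy_net and mimic_eval as assumed interfaces: expand the top moves with a fixed branching factor down to some depth, then let the cheaper mimic net score the leaves.

def mimic_evaluate(board, policy_net, mimic_eval, depth, branching):
    if depth == 0 or board.is_terminal():
        return mimic_eval(board)                     # shallow positional evaluation at the leaf
    probs = policy_net(board)                        # dict: move -> probability
    top_moves = sorted(probs, key=probs.get, reverse=True)[:branching]
    best = float("-inf")
    for move in top_moves:
        child = board.copy()
        child.play(move)
        value = -mimic_evaluate(child, policy_net, mimic_eval,
                                depth - 1, branching)   # negamax sign flip
        best = max(best, value)
    return best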
I hope DolBaram has a good showing this year. Probably the most
promising contender gunning for Zen and CrazyStone.
On Fri, Mar 13, 2015 at 8:32 PM, Martin Mueller wrote:
> The 8th UEC Cup will start in a few hours. The top two programs get to play
> Cho Chikun on the 17th of March in Densei-sen.
Brilliant! Thank you, both of you, Peter and Claus!
-Mike
Claus Reinke wrote:
Now, for the technical matter: Could somebody please point me to a quick rundown of how modern
Go engines exactly utilize multicore environments and how the workload is segregated and
distributed? I don't have any sig
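Not how any particular engine actually does it, but one common scheme, root parallelization, as a minimal sketch: run independent searches in separate processes and merge their root visit counts. The search function below is a toy placeholder for a real per-process MCTS.

import random
from collections import Counter
from multiprocessing import Pool

def search(args):
    board, seed, playouts = args
    rng = random.Random(seed)
    # placeholder: a real worker would run MCTS and return move -> visit count
    return Counter(rng.choice(board["legal_moves"]) for _ in range(playouts))

def parallel_search(board, n_workers=4, n_playouts=10000):
    jobs = [(board, seed, n_playouts // n_workers) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        per_worker = pool.map(search, jobs)
    merged = Counter()
    for visits in per_worker:
        merged.update(visits)                    # sum visit counts across workers
    return merged.most_common(1)[0][0]           # move with the most combined visits

if __name__ == "__main__":
    print(parallel_search({"legal_moves": ["D4", "Q16", "C3"]}))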
Hideki Kato wrote:
Don Dailey: <[EMAIL PROTECTED]>:
On Thu, 2008-10-02 at 19:17 +0200, Michael Markefka wrote:
So, when are we going to see distributed computing? [EMAIL PROTECTED],
[EMAIL PROTECTED], [EMAIL PROTECTED] With Go engines that scale well to increased
processing capacity, i
I think I'll respond here so as not to further detract from David's
congratulatory thread. :)
While not addressing the replies separately, rest assured that I've read
them all.
Quickly picking up on what Claus wrote here, I agree that there might be
some kind of "prestige angle" to exploit to get som
So, when are we going to see distributed computing? [EMAIL PROTECTED],
[EMAIL PROTECTED], [EMAIL PROTECTED] With Go engines that scale well to increased
processing capacity, imagine facilitating a few thousand PCs to do the
computing. For good measure, [EMAIL PROTECTED] has about 800,000 nodes on