On Sun, Sep 13, 2009 at 01:02:40AM +0200, Vincent Diepeveen wrote:
>
> On Sep 10, 2009, at 12:55 AM, Michael Williams wrote:
>
> >Very interesting stuff. One glimmer of hope is that the memory
> >situation should improve over time since memory grows but Go
> >boards stay the same size.
> >
>
>
On Sun, Sep 13, 2009 at 10:48:12AM +0200, Vincent Diepeveen wrote:
>
> On Sep 13, 2009, at 10:19 AM, Petr Baudis wrote:
> >Just read the nVidia docs. Shifting has the same cost as addition.
> >
>
> Document number and url?
http://developer.download.nvidia.com/compute/cuda/2_3/toolkit/docs/NVIDIA
A document from 2 weeks ago where they at least write *something*;
not bad from nvidia, considering they will soon have to give lessons to
topcoders :)
It's not really a systematic approach though. We want a list of all
instructions with the latency and throughput that belong to each.
Also lookup tim
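
Since the programming guide gives rough throughput numbers but no per-instruction
latency table, the usual workaround is a small clock()-based microbenchmark. The
sketch below is only an illustration of that idea, not anything from the guide or
from the thread: the loop length, the xor padding and the one-warp launch are all
assumptions chosen to keep it simple. It times a chain of dependent shifts against
a chain of dependent adds and compares the cycle counts.

// Rough microbenchmark sketch: compare a dependent shift chain with a
// dependent add chain using the per-SM clock() counter.
#include <cstdio>
#include <cuda_runtime.h>

#define ITERS 4096

__global__ void time_ops(unsigned int *out, unsigned int *cycles)
{
    unsigned int x = threadIdx.x + 1;
    unsigned int t0 = (unsigned int)clock();
    for (int i = 0; i < ITERS; ++i)
        x = (x << 1) ^ i;                 // shift + xor; the xor keeps the chain data-dependent
    unsigned int t1 = (unsigned int)clock();

    unsigned int y = threadIdx.x + 1;
    unsigned int t2 = (unsigned int)clock();
    for (int i = 0; i < ITERS; ++i)
        y = (y + 3u) ^ i;                 // add + xor; the xor and loop overhead appear in both loops
    unsigned int t3 = (unsigned int)clock();

    out[threadIdx.x] = x ^ y;             // use the results so the loops are not optimized away
    if (threadIdx.x == 0) {
        cycles[0] = t1 - t0;              // cycles spent in the shift loop
        cycles[1] = t3 - t2;              // cycles spent in the add loop
    }
}

int main()
{
    unsigned int *d_out, *d_cycles, h_cycles[2];
    cudaMalloc(&d_out, 32 * sizeof(unsigned int));
    cudaMalloc(&d_cycles, 2 * sizeof(unsigned int));

    time_ops<<<1, 32>>>(d_out, d_cycles);        // a single warp keeps the timing simple
    cudaMemcpy(h_cycles, d_cycles, sizeof(h_cycles), cudaMemcpyDeviceToHost);

    printf("shift loop: %u cycles, add loop: %u cycles\n", h_cycles[0], h_cycles[1]);

    cudaFree(d_out);
    cudaFree(d_cycles);
    return 0;
}

Compiled with nvcc and run on one SM, roughly equal cycle counts for the two loops
would be consistent with the "shift costs the same as add" claim; the loop overhead
and the xor show up in both loops, so they cancel in the comparison.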
Orego is currently using the MoGo policy (escape, local patterns,
capture, random). Including these as priors helps a little, but just
including them in the playouts helps a lot, even with time (rather
than # of playouts) fixed.
Peter Drake
http://www.lclark.edu/~drake/
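
For readers who have not seen the MoGo policy spelled out, here is a minimal sketch
of the move ordering Peter describes. It is illustrative only, not Orego's actual
code: the names (Candidates, select_playout_move) are made up, and the candidate
lists for atari escapes, pattern matches and captures are assumed to be computed
elsewhere and passed in ready-made.

// Minimal sketch of a MoGo-style playout move ordering: escape atari first,
// then local pattern matches near the last move, then captures, else random.
#include <cstdio>
#include <vector>

typedef int Move;                       // a point on the board, or -1 for "none"

struct Candidates {
    std::vector<Move> escapes;          // moves saving a friendly string in atari
    std::vector<Move> patterns;         // pattern matches around the last move
    std::vector<Move> captures;         // moves capturing an enemy string
    std::vector<Move> legal;            // all remaining legal moves
};

// Pick uniformly from the first non-empty category; this ordering is what, per the
// post, helps much more inside the playouts than when only used as priors.
static Move select_playout_move(const Candidates &c, unsigned int &rng)
{
    const std::vector<Move> *order[] = { &c.escapes, &c.patterns, &c.captures, &c.legal };
    for (int i = 0; i < 4; ++i) {
        if (!order[i]->empty()) {
            rng = rng * 1664525u + 1013904223u;      // small LCG, good enough for playouts
            return (*order[i])[rng % order[i]->size()];
        }
    }
    return -1;                                       // no legal move: pass
}

int main()
{
    // Toy candidate lists standing in for a real position.
    Candidates c;
    c.patterns = { 42, 43 };
    c.captures = { 7 };
    c.legal    = { 7, 42, 43, 100, 101 };

    unsigned int rng = 12345u;
    printf("chosen move: %d\n", select_playout_move(c, rng));   // picks 42 or 43
    return 0;
}

The difference Peter points out is where this ordering is applied: as priors it only
seeds the statistics of newly created tree nodes, while inside the playouts it shapes
every simulated move.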
On Sep 12, 2009,