Some quick comments:
I did store the search tree with early versions of Valkyria, but then
I gave it up.
Problems:
1) Searching deeper did not seem to overcome inherent fuseki weaknesses
2) The memory cost became too high
Advantage:
1) Playing fast in the opening saves time. This is very good.
Sparks are flying, but I don't think that either of you are onto the
exact truth. Don: do you play Go the same way you play chess? I don't
think I do. The "opening", in computer terms, for me only gets to move
20 if there is a long Joseki involved. It happens too often that
somebody "tries" s
I concur with Don; for the early moves, this is likely to be helpful. On a
19x19 board, the first ten or fifteen moves of a pro game often follow fairly
well-known patterns, but it's not enough to simply memorize the patterns; there
is deep knowledge which explains why one 3,4 point is better than another.
On Tue, May 12, 2009 at 8:17 PM, Dave Dyer wrote:
>
> >If I use persistent storage and do that search again in another game, I
> >can start exactly where I left off and generate 50,000 more nodes. It will
> >be the same as if I did 100,000 nodes instead of 50,000 nodes. Or put
> >another way, it will be the same as if I spent 20 seconds on this move
> >instead of 10 seconds.
> ...
> Consider move 20 (for example). If you saved every "move 20" node
> you ever encountered, h
>>
>>But then MCTS is invalid. The point is that you do spend time learning that
>>these nodes are not relevant, so you might as well try to remember that.
It is invalid. It's just a heuristic that works within the current domain.
>>If you are playing a game of chess and fall for a trap
>
>If I use persistent storage and do that search again in another game, I can
>start exactly where I left off and generate 50,000 more nodes. It will be
>the same as if I did 100,000 nodes instead of 50,000 nodes. Or put another
>way, it will be the same as if I spent 20 seconds on this move instead of 10
>seconds.
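The bookkeeping behind "start exactly where I left off" can be sketched in a few lines. This is pure illustration; the dictionary layout and file name are invented here, not anyone's actual format:

```python
# Sketch only: persisting search statistics between games so a later search
# resumes with the old simulation counts.  The dict layout and file name are
# invented for illustration; nothing in the thread specifies a format.
import os
import pickle
import tempfile

# position key -> {move: (visits, winrate)}
stats = {"empty-board": {"D4": (50000, 0.52), "Q16": (19000, 0.51)}}

path = os.path.join(tempfile.mkdtemp(), "book.pkl")
with open(path, "wb") as f:          # end of game one: save the tree's stats
    pickle.dump(stats, f)

with open(path, "rb") as f:          # game two, days later: load and resume
    resumed = pickle.load(f)

visits, winrate = resumed["empty-board"]["D4"]
print(visits)  # 50,000 old simulations are the head start for the new search
```

Whether those carried-over simulations are still relevant in the new game is exactly the point being argued in this thread.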
On Tue, May 12, 2009 at 6:47 PM, Dave Dyer wrote:
>
> >
> >I assume Dave Dyer does not understand alpha beta pruning either, or he
> would not assume the branching factor is 361.
>
> The branch at the root is about (361-move number) - you have to consider
> all top level moves. A/B only kicks in by lowering the average branching
> factor at lower levels.
On Tue, May 12, 2009 at 6:33 PM, Dave Dyer wrote:
>
> An essential feature of monte carlo is that its search space is
> random and extremely sparse, so consequently opportunity to re-use
> nodes is also extremely sparse.
That depends. Monte Carlo only expands nodes it considers promising and
>
>I assume Dave Dyer does not understand alpha beta pruning either, or he would
>not assume the branching factor is 361.
The branch at the root is about (361-move number) - you have to consider
all top level moves. A/B only kicks in by lowering the average branching
factor at lower levels.
If
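Dave's point about the root can be seen in a toy search (my sketch; the tree here is random, not Go): alpha-beta still examines every root move, and all the node savings come from cutoffs below the root.

```python
# Toy illustration (random leaf values, not Go): plain negamax vs. alpha-beta
# on the same tree.  Every root move is still examined; the savings come
# entirely from cutoffs at deeper levels.
import random

DEPTH, WIDTH = 4, 8   # stand-ins; a Go root has about (361 - move number) branches

def leaf_value(path):
    # deterministic pseudo-random evaluation keyed on the move path
    return random.Random(hash(path)).uniform(-1.0, 1.0)

def negamax(path, depth, counter):
    counter[0] += 1
    if depth == 0:
        return leaf_value(path)
    return max(-negamax(path + (m,), depth - 1, counter) for m in range(WIDTH))

def alphabeta(path, depth, alpha, beta, counter):
    counter[0] += 1
    if depth == 0:
        return leaf_value(path)
    best = -float("inf")
    for m in range(WIDTH):          # at the root: all WIDTH moves considered
        best = max(best, -alphabeta(path + (m,), depth - 1, -beta, -alpha, counter))
        alpha = max(alpha, best)
        if alpha >= beta:           # cutoff only trims siblings below the root
            break
    return best

full, pruned = [0], [0]
v1 = negamax((), DEPTH, full)
v2 = alphabeta((), DEPTH, -float("inf"), float("inf"), pruned)
print(full[0], pruned[0], abs(v1 - v2))  # same value, far fewer nodes
```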
An essential feature of monte carlo is that its search space is
random and extremely sparse, so consequently opportunity to re-use
nodes is also extremely sparse.
On the other hand, if the search close to the root is not sparse, my
previous arguments about the number of nodes and the number of t
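Whether the search near the root is sparse can be illustrated with a toy bandit (my sketch, with invented win rates and a tuned exploration constant, not any engine's numbers): UCT-style selection concentrates visits heavily on a few strong moves, so the top of the tree is dense exactly where reuse would pay off.

```python
# Toy UCB over 361 "moves" with five artificially strong ones.  All numbers
# are invented; the exploration constant 0.5 is a typical tuned value, not a
# quote from any program in this thread.
import math
import random

rng = random.Random(42)
ARMS = 361                                           # legal first moves on 19x19
true_p = [0.9 if a < 5 else 0.1 for a in range(ARMS)]

visits = [0] * ARMS
wins = [0.0] * ARMS
T = 20000

for t in range(1, T + 1):
    if t <= ARMS:                                    # play each move once first
        a = t - 1
    else:                                            # UCT-style selection
        a = max(range(ARMS),
                key=lambda i: wins[i] / visits[i]
                + 0.5 * math.sqrt(math.log(t) / visits[i]))
    visits[a] += 1
    wins[a] += rng.random() < true_p[a]

top5 = sum(sorted(visits, reverse=True)[:5])
print(f"share of simulations in the top 5 moves: {top5 / T:.2f}")
```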
It's possible for the tree to become too narrow. On a 9x9 board, you might be
able to say that there are only one or two playable moves, but on 19x19, I
doubt that any pro would claim that the options are that narrow, even
accounting for symmetry. It's common to hear that "some pros play A, some play B".
On Tue, May 12, 2009 at 6:05 PM, Don Dailey wrote:
> And for MCTS it is much lower than 10.
>
>
> 2009/5/12 terry mcintyre
>
>> In the opening, among reasonably clueful players, the branching factor is
>> much closer to 10 than to 361.
>>
>
I assume Dave Dyer does not understand alpha beta pruning either, or he would not assume the branching factor is 361.
And for MCTS it is much lower than 10.
2009/5/12 terry mcintyre
> In the opening, among reasonably clueful players, the branching factor is
> much closer to 10 than to 361.
>
> Terry McIntyre
>
I don't think you have any understanding of what I'm suggesting.
You don't actually store the whole tree, you store whatever part of it is
generated by the program, and that is an infinitesimal subset. I have
noticed that many times you tend to think in purely theoretical terms when
it was compl
In the opening, among reasonably clueful players, the branching factor is much
closer to 10 than to 361.
Terry McIntyre
You have assumed that the move your opponent selected was, on average, explored only as much as all of the other moves. That seems a bit
pessimistic. One would expect the opponent to select a strong move, and one would also expect your tree to have explored that strong move more than the others.
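The reuse being argued over can be sketched with a hypothetical minimal node type (not any engine's actual structure): when the opponent plays, the subtree under that move becomes the new root, so every simulation already spent on it carries over.

```python
# Hypothetical node structure for illustrating subtree reuse after the
# opponent moves.  Visit counts are invented.
class Node:
    def __init__(self, visits=0):
        self.visits = visits
        self.children = {}   # move -> Node

def advance_root(root, opponent_move):
    # keep the explored subtree for the move actually played; siblings are
    # dropped (or, with persistent storage, left on disk for other games)
    return root.children.get(opponent_move, Node())

root = Node(visits=50000)
root.children = {
    "D4": Node(visits=30000),   # the strong reply soaked up most simulations
    "A1": Node(visits=500),     # a weak reply barely got explored
}

new_root = advance_root(root, "D4")
print(new_root.visits)  # 30000 simulations survive the opponent's move
```

If the opponent tends to pick moves the tree already explored heavily, the carried-over fraction is large; if not, it degenerates to the fresh-node case.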
At 02:13 PM 5/12/2009, Michael Williams wrote:
>Where does your 99% figure come from?
1/361 < 1%, and by the endgame there are still easily 100 empty spaces
on the board.
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
Where does your 99% figure come from?
Dave Dyer wrote:
Storing an opening book for the first 10 moves requires
331477745148242200 nodes. Even with some reduction for symmetry,
I don't see that much memory becoming available anytime soon, and you still
have to evaluate them somehow.
Ac
It often gets interrupted by me so that I can change some code, etc. And I often break backwards compatibility, so I have to delete the file and start from
scratch. In the past it has run for up to around 24 hours, but that was an older, slower version. I just kicked off a 7x7 run. I expect it
Storing an opening book for the first 10 moves requires
331477745148242200 nodes. Even with some reduction for symmetry,
I don't see that much memory becoming available anytime soon, and you still
have to evaluate them somehow.
Actually storing a tree, except for extremely limited speci
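The order of magnitude is easy to check (my arithmetic, not Dave Dyer's): the number of distinct 10-move sequences from an empty 19x19 board, ignoring captures and symmetry, is the falling product 361·360·…·352.

```python
# Order-of-magnitude check of the opening-book size: distinct 10-move
# sequences from an empty 19x19 board, ignoring captures and symmetry.
import math
from functools import reduce

sequences = math.perm(361, 10)                       # Python 3.8+
assert sequences == reduce(lambda a, b: a * b, range(352, 362))
print(f"{sequences:.3e} sequences, {sequences // 8:.3e} after /8 for symmetry")
```

Dividing by 8 for board symmetry barely dents a number this size, which is the message's point.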
2009/5/12 terry mcintyre
> Are we approaching a point where it would be practical to precompute the
> opening tree to some depth, cache the results on SSD, and incrementally
> improve that knowledge based upon subsequent games?
>
I have had a theory for a long time that the best way to build a "
How long has it been pondering?
Terry McIntyre
On general principles, when we are looking for a solution of a social problem,
we must expect to reach conclusions quite opposed to the usual opinions on the
subject; otherwise it would be no problem. We must expect to have to attack,
not what
That's basically what I'm doing. Except that there is no depth limit and only the parts of the tree that you need get loaded back into memory. It's not a
playing engine yet so it can't build the tree as it plays games. Currently it just ponders the empty board.
terry mcintyre wrote:
Are we a
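"Only the parts of the tree that you need get loaded back into memory" can be sketched as follows. The file layout here is invented (the thread doesn't show Michael Williams's actual format): the tree lives in one file, and a node's children are pulled into RAM only when the search first descends through that node.

```python
# Invented on-disk layout for demand-loading parts of a stored tree: one
# JSON record per node, addressed by a byte offset.
import json
import os
import tempfile

DB = os.path.join(tempfile.mkdtemp(), "tree.jsonl")

offsets = {}                       # position key -> byte offset of its record
with open(DB, "w") as f:
    for key, children in {
        "root": {"D4": 30000, "Q16": 19000},
        "root/D4": {"Q16": 12000, "C16": 9000},
    }.items():
        offsets[key] = f.tell()
        f.write(json.dumps({"key": key, "children": children}) + "\n")

cache = {}                         # the in-memory part of the tree

def load_children(key):
    # only nodes on paths the search actually walks are ever read back
    if key not in cache:
        with open(DB) as f:
            f.seek(offsets[key])
            cache[key] = json.loads(f.readline())["children"]
    return cache[key]

print(load_children("root/D4"))    # one seek + one line read
```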
Just a reminder that epsilon trick (invented by Jakub Pawlewicz) can
be used to avoid excessive memory usage (reuse memory) without
significant performance loss. It has been tested for proof number
search, but there is no reason for it to behave differently in MCTS.
Lukasz Lew
On Tue, May 12, 2009
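For readers unfamiliar with it, the gist of the 1+ε trick (my reading of Pawlewicz and Lew's idea from df-pn, loosely transplanted; EPS and the budgets below are invented) is to inflate a work threshold by (1+ε) on every revisit, so a discarded subtree is rebuilt only a logarithmic number of times and the total re-search cost stays within a constant factor of one full search.

```python
# Sketch of the amortization idea only; a real df-pn or MCTS integration
# would apply the inflation to proof/disproof or visit thresholds.
EPS = 0.25

def next_budget(budget, eps=EPS):
    return int(budget * (1 + eps)) + 1   # strictly increasing, geometric

budget, total, revisits = 100, 0, 0
while budget < 100_000:                  # until the target search effort
    total += budget                      # cost of redoing the forgotten subtree
    budget = next_budget(budget)
    revisits += 1

print(revisits, total)  # few revisits; overhead bounded by a geometric series
```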
Those numbers are the average after the tree has grown to 1B nodes. I'm sure the cache hates me. Each tree traversal will likely make several reads from
random locations in a 50 GB file.
Don Dailey wrote:
So you are saying to use disk memory for this?
This could be pretty deceiving if most of your reads and writes are cached.
So you are saying to use disk memory for this?
This could be pretty deceiving if most of your reads and writes are
cached. What happens when your tree gets much bigger than available
memory?
- Don
On Tue, May 12, 2009 at 1:18 PM, Michael Williams <
michaelwilliam...@gmail.com> wrote:
> I
Are we approaching a point where it would be practical to precompute the
opening tree to some depth, cache the results on SSD, and incrementally improve
that knowledge based upon subsequent games?
Terry McIntyre
In my system, I can retrieve the children of any node at a rate of about 100k
nodes/sec.
And I can save nodes at a rate of over 1M nodes/sec (this is much faster
because in my implementation, the operation is sequential on disk).
Those numbers are from 6x6 testing.
Don Dailey wrote:
This is probably a good solution.
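The asymmetry behind those two rates (fast sequential saves, slower random retrieval) can be sketched with an invented record format; the thread gives the rates, not the layout.

```python
# Invented 24-byte node record: appended sequentially (the operation a cheap
# SSD is fast at) and read back later by absolute offset (the slower
# random-access path).
import os
import struct
import tempfile

REC = struct.Struct("<qqd")   # (child_offset, visits, winrate)
path = os.path.join(tempfile.mkdtemp(), "nodes.bin")

def append_node(f, child_offset, visits, winrate):
    off = f.tell()
    f.write(REC.pack(child_offset, visits, winrate))  # sequential write
    return off

with open(path, "wb") as f:
    child = append_node(f, -1, 30000, 0.52)    # leaf first (no child)
    root = append_node(f, child, 50000, 0.50)  # parent stores child's offset

with open(path, "rb") as f:                    # later: random reads by offset
    f.seek(root)
    child_offset, visits, winrate = REC.unpack(f.read(REC.size))
    f.seek(child_offset)
    _, c_visits, c_winrate = REC.unpack(f.read(REC.size))

print(visits, c_visits)
```

Writing children contiguously when a node is expanded keeps the save path sequential; the read path still pays a seek per tree level, which matches the roughly 10:1 gap in the quoted rates.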
This is probably a good solution. I don't believe the memory has to be
very fast at all because even with light playouts you are doing a LOT of
computation between memory accesses.
All of this must be tested of course. In fact I was considering if disk
memory could not be utilized as a kind
Memory-aware algorithms take advantage of the varying access characteristics.
Long, long ago, computer memory was actually a rotating drum; each instruction
chained to the next location; it was worth a lot of effort to place the
instructions in such a manner that they'd be where you need them wh
cool, that's what i was wondering -- that you'd have to treat it
as something in between ram and an HD.
thanks,
s.
On Tue, May 12, 2009 at 12:48 PM, Michael Williams
wrote:
> It depends on how you use it and how much you pay for it. If you get a
> high-end Intel SSD, you can treat it however you like.
It depends on how you use it and how much you pay for it. If you get a high-end Intel SSD, you can treat it however you like. But I can't afford that. I got
a cheap SSD, so I had to shape my algorithm around the kinds of disk operations it likes and the ones it doesn't.
steve uurtamo wrote:
is the ssd fast enough to be practical?
s.
On Tue, May 12, 2009 at 12:41 PM, Michael Williams
wrote:
> Don Dailey wrote:
>>
>> On Tue, May 12, 2009 at 12:16 PM, Michael Williams
>> <michaelwilliam...@gmail.com> wrote:
>>
>> I have a trick ;)
>>
>> I am currently creating MCTS trees of over a billion nodes on my 4GB machine.
Don Dailey wrote:
On Tue, May 12, 2009 at 12:16 PM, Michael Williams
<michaelwilliam...@gmail.com> wrote:
I have a trick ;)
I am currently creating MCTS trees of over a billion nodes on my 4GB
machine.
Ok, I'll bite. What is your solution?
I use an SSD. There are m
On Tue, May 12, 2009 at 12:16:46PM -0400, Michael Williams wrote:
> I have a trick ;)
>
> I am currently creating MCTS trees of over a billion nodes on my 4GB
> machine.
That is the easy part. Can you also (decompress and) read it after you have
created it?
- Heikki
(ha-ha, only serious)
All,
let me chip in with some additional thoughts about massively parallel
hardware.
I recently implemented Monte Carlo playouts on CUDA, to run them on the
GPU. It was more or less a "naive" implementation (read: a more or less
straight port with optimised memory access patterns). I am hope
On Tue, May 12, 2009 at 12:16 PM, Michael Williams <
michaelwilliam...@gmail.com> wrote:
> I have a trick ;)
>
> I am currently creating MCTS trees of over a billion nodes on my 4GB
> machine.
Ok, I'll bite. What is your solution?
- Don
Compression tricks will only take you so far. Assuming you can get 2 to 1,
for instance, that doesn't scale; it only puts the problem off for one
generation. It's not something you can keep doing - it's a one-time gain,
but the memory vs CPU power imbalance may be constant.
So while it
I have a trick ;)
I am currently creating MCTS trees of over a billion nodes on my 4GB machine.
This is a great post, and some good observations. I agree with your
conclusions that CPU power is increasing faster than memory and memory
bandwidth. Let me give you my take on this.
In a nutshell, I believe memory will increasingly become the limiting
factor no matter what direction we go.
increasing memory is more expensive than increasing cpu speed
at this point. there was an addressing issue with 32bit machines,
but that shouldn't be too much of an issue anymore. most people
want to pay less than or equal to the price of their last machine
whenever they buy one, though, so compa
Summary: The trend in computer systems has been for CPU power to grow much
faster than memory size. The implication of this trend for MCTS computer go
implementations is that "heavy" playouts will have a significant cost
advantage
in the future.
I bought a Pentium D 3GHz system a few years back.
The Projects link (http://fuego.sourceforge.net/projects.html) on the Fuego
site (http://fuego.sourceforge.net/) is broken.
See
http://www.grappa.univ-lille3.fr/icga/event_info.php?id=35
Hideki
Ingo Althöfer: <20090512121021.73...@gmx.net>:
>Hello,
>
>can someone from the guys in Pamplona please
>let us know on which days and at which hours
>the games of the Olympiad are played?
>
>Which of those games can be followed on KGS?
Hello,
can someone from the guys in Pamplona please
let us know on which days and at which hours
the games of the Olympiad are played?
Which of those games can be followed on KGS?
Thanks in advance, Ingo.
--