On Mon, May 10, 2010 at 4:47 PM, Mark Boon <[email protected]> wrote:

> On Mon, May 10, 2010 at 10:33 AM, Don Dailey <[email protected]> wrote:
> > There is no need for this unless it is a speed optimization.   You can
> > find any child by generating the hash key of each child.   It seems
> > ridiculous to me to store an average of at least 100 pointers per node.
>
> It's possible, of course, that I have misunderstood how hash tables work
> or are implemented. How, from a position in the hash table, do you get to
> the next position with the best win rate? I doubt the hash key is
> generated on the fly, as that would be very expensive: it would entail
> the administration of making a full move, counting (some form of)
> liberties, and removing captives. So I always assumed they are stored,
> probably as hash keys, in the node. But then I see no difference between
> a pointer and a hash key. Just a different name for the same thing.
>

It's extremely cheap to generate the hash key of a child move if you use
Zobrist hashing.   You just XOR in the random number of the piece/point you
are placing.   Only a fraction of the moves (mainly captures) require
substantially more work.


As far as the caching issue is concerned (to answer Peter), I assumed that
all goes out the window with any of these schemes.   I don't know what Peter
is proposing as an alternative.   Trees with pointers are not exactly cache
friendly, and in Mark's scheme he has trees with pointers plus an additional
non-cache-friendly hash table with lists of pointers.   Are those lists of
pointers also linked lists?

With heavy playouts,  I believe accessing the child nodes using ANY scheme
is relatively cheap - when compared to the time you spend playing an entire
game using heavy playouts.   It might start to matter if you are using a
really fast uniform random super light playout, but no strong program does
that anymore.

If you have the memory I'm not saying it's wrong to maintain lists of
pointers for children, but I'm a rather simplistic engineer: I hate
constantly adding more and more data structure complexity when I can find
something simple that works, is more cache friendly, and is easier to
understand.

Another point to keep in mind is that the newest computers tend to have a
pretty good-sized data cache - but regardless of how big it is, one has to
consider whether it's wise to keep pushing it by making each node bigger
and bigger just to have a pointer for every occasion lying around taking up
space.

Don

>
> Mark
> _______________________________________________
> Computer-go mailing list
> [email protected]
> http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
>
