I serialised some very large Markov models (tens to low hundreds of
megabytes) for my PhD using Java serialisation. A couple of hints:

*) Reading and writing them can be faster if you compress the streams
(I used the standard Java libraries). Disk access was the limiting
factor in my case, and compression (I routinely got about 80%
compression) eased that bottleneck.
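
For illustration (my own sketch, not the original code): wrapping the
object streams in GZIP streams from java.util.zip is one straightforward
way to do this. The class and method names below are made up.

import java.io.*;
import java.util.zip.*;

final class ModelIO {

    // Write a Serializable model, compressed with GZIP on the way out.
    static void save(Object model, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(
                new GZIPOutputStream(new BufferedOutputStream(
                        new FileOutputStream(file))))) {
            out.writeObject(model);
        }
    }

    // Read it back, decompressing on the way in.
    static Object load(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(
                new GZIPInputStream(new BufferedInputStream(
                        new FileInputStream(file))))) {
            return in.readObject();
        }
    }
}

Closing the outer stream finishes the GZIP stream, so nothing else needs
flushing.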

*) Write functions to allow standard tail recursion optimisations to be
performed by the compiler. I admit I never actually tested the
effectiveness of this, but it should improve with successive
generations of compiler. http://en.wikipedia.org/wiki/Tail_recursion
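
Purely as an illustration of the idea (Node and the method names are
mine, not from the original code): the question is whether the recursive
call is the last thing the function does. As far as I know javac and
HotSpot still do not eliminate tail calls, so the explicit loop is what
actually keeps the stack shallow today.

final class DepthExample {
    static final class Node { Node next; }   // hypothetical chain of nodes

    // Not tail recursive: the "+ 1" happens after the call returns,
    // so every level needs its own stack frame.
    static int depth(Node n) {
        return (n == null) ? 0 : 1 + depth(n.next);
    }

    // Tail recursive: the call is the last action, carrying an accumulator.
    // A tail-call-optimising compiler could turn this into a loop.
    static int depthTail(Node n, int acc) {
        return (n == null) ? acc : depthTail(n.next, acc + 1);
    }

    // The loop such an optimiser would effectively produce.
    static int depthLoop(Node n) {
        int acc = 0;
        while (n != null) { acc++; n = n.next; }
        return acc;
    }
}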

*) I only ever serialised classes of trivial structure which other
classes acted upon, and I don't recall serialisation ever breaking. If
the classes you are serialising are the same classes you change every
time you make a minor change to your algorithm, that may change. Yes,
this breaks the vision of objects as both the data and the algorithms
acting on them.
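
One way to realise that split (my example; all the names are invented):
keep the serialised class a plain data holder with a fixed
serialVersionUID, and let the algorithm class evolve separately.

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Stable, trivial structure -- the only thing that gets serialised.
final class MarkovCounts implements Serializable {
    private static final long serialVersionUID = 1L;
    final Map<String, Map<String, Integer>> transitions = new HashMap<>();
}

// The algorithm acts on the data from outside and can change freely
// without touching the serialised form.
final class MarkovModel {
    private final MarkovCounts counts;

    MarkovModel(MarkovCounts counts) { this.counts = counts; }

    double probability(String from, String to) {
        Map<String, Integer> row = counts.transitions.get(from);
        if (row == null) return 0.0;
        int total = 0;
        for (int c : row.values()) total += c;
        Integer n = row.get(to);
        return (n == null || total == 0) ? 0.0 : (double) n / total;
    }
}

Changes to MarkovModel then have no effect on the serialised form.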

*) If your data structure is very deep (which I imagine it is, if
you're seeing stack overflows), you may be better off storing pointers
to every node in a hashtable (which the serialiser iterates through)
and serialising that.
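
Something along these lines, perhaps (Node, FlatModel and flatten are
all my invented names): build the flat table with an explicit stack, and
reference children by id rather than by pointer, so the serialiser walks
a shallow map instead of recursing down the original links.

import java.io.Serializable;
import java.util.*;

final class FlattenExample {
    // Hypothetical in-memory node with direct child references.
    static final class Node {
        final List<Node> children = new ArrayList<>();
        double weight;
    }

    // Flat, serialisable table: nodes keyed by id, children stored as ids.
    static final class FlatModel implements Serializable {
        private static final long serialVersionUID = 1L;
        final Map<Integer, Double> weights = new HashMap<>();
        final Map<Integer, int[]> childIds = new HashMap<>();
    }

    static FlatModel flatten(Node root) {
        FlatModel flat = new FlatModel();
        Map<Node, Integer> ids = new IdentityHashMap<>();
        Deque<Node> stack = new ArrayDeque<>();   // explicit stack, no recursion
        ids.put(root, 0);
        stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            int id = ids.get(n);
            int[] kids = new int[n.children.size()];
            for (int i = 0; i < kids.length; i++) {
                Node c = n.children.get(i);
                Integer cid = ids.get(c);
                if (cid == null) {
                    cid = ids.size();
                    ids.put(c, cid);
                    stack.push(c);
                }
                kids[i] = cid;
            }
            flat.weights.put(id, n.weight);
            flat.childIds.put(id, kids);
        }
        return flat;
    }
}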

*) The command-line option "-Xss" controls the thread stack size (with
the Sun JVM). Use it just like the other memory options.
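
For example (the stack size and class name are just placeholders; pick
a size to suit your tree depth):

    java -Xss64m -Xmx2g MyProgram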

*) As pointed out elsewhere, this is probably not an ideal archival format.

cheers
stuart