Thanks Perrin,

> Compression (using zlib) tends to speed things up a bit.

So at what point (1k, 10k, 100k) might the overhead of a decompress on a
frozen chunk make real sense? If we compressed every frozen 1k item
(requiring a decompress every time), might this only add unnecessary
overhead?

On Monday 19 September 2005 12:19, Perrin Harkins wrote:
> If it has to work across multiple machines, you will need to use a
> daemon like MySQL. If it's local to one machine, BerkeleyDB or
> Cache::FastMmap can beat it. Compression (using zlib) tends to speed
> things up a bit when pushing huge amounts of data into MySQL across a
> socket connection, so you might want to add that in as well.
>
> - Perrin
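For what it's worth, the threshold question is easy to answer empirically. Here is a minimal micro-benchmark sketch (in Python's zlib, analogous to Perl's Compress::Zlib; the payloads are synthetic and hypothetical, so real frozen structures will compress differently):

```python
import time
import zlib

# Measure compress/decompress cost and ratio for payloads of roughly
# the sizes in question: 1k, 10k, 100k bytes.
for size in (1_000, 10_000, 100_000):
    # Synthetic, fairly repetitive payload; a real frozen data
    # structure will have different entropy and a different ratio.
    payload = (b"some repetitive frozen data " * (size // 28 + 1))[:size]

    start = time.perf_counter()
    compressed = zlib.compress(payload)
    mid = time.perf_counter()
    zlib.decompress(compressed)
    end = time.perf_counter()

    ratio = len(compressed) / len(payload)
    print(f"{size:>7} bytes: ratio {ratio:.2f}, "
          f"compress {(mid - start) * 1e6:.0f} us, "
          f"decompress {(end - mid) * 1e6:.0f} us")
```

Comparing those per-item times against the time saved moving fewer bytes over the socket would show where the break-even point sits for a given workload.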