On Mon, 2005-09-19 at 13:09 -0700, Bill Whillers wrote:
> So at what point (1k, 10k, 100k) might the overhead of a decompress on a
> frozen chunk make real sense?
You have to benchmark it yourself with your own data and network to see.
- Perrin
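For anyone wanting a starting point, here is a minimal sketch of the kind of
benchmark Perrin describes, using the standard Benchmark module with Storable
and Compress::Zlib (the shape and size of the sample data are made up; swap in
your own structures):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Benchmark qw(cmpthese);
  use Storable qw(freeze thaw);
  use Compress::Zlib qw(compress uncompress);

  # Build a sample structure roughly the size of your real data.
  my $data   = { map { ($_ => 'x' x 100) } 1 .. 100 };
  my $frozen = freeze($data);
  my $packed = compress($frozen);

  printf "frozen: %d bytes, compressed: %d bytes\n",
      length($frozen), length($packed);

  # Compare a plain thaw against uncompress-then-thaw for at least
  # two CPU seconds each.
  cmpthese(-2, {
      plain      => sub { my $d = thaw($frozen) },
      compressed => sub { my $d = thaw(uncompress($packed)) },
  });

Run against data and chunk sizes like yours, this shows directly where the
crossover point sits on your own hardware.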
Thanks Perrin,
> Compression (using zlib) tends to speed things up a bit.
So at what point (1k, 10k, 100k) might the overhead of a decompress on a
frozen chunk make real sense? -- If we compressed every frozen 1k item
(requiring a decompress every time), might this only add unnecessary overhead?
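Purely as an illustration of one way to avoid paying the zlib cost on small
items: compress only above a size threshold and tag each blob so the reader
knows what it got. The pack_obj/unpack_obj names and the 10k cutoff here are
hypothetical, not anything from this thread -- the real cutoff should come
from a benchmark like the one above:

  use strict;
  use warnings;
  use Storable qw(freeze thaw);
  use Compress::Zlib qw(compress uncompress);

  my $THRESHOLD = 10_240;    # bytes; tune from your own measurements

  sub pack_obj {
      my ($obj) = @_;
      my $frozen = freeze($obj);
      return length($frozen) >= $THRESHOLD
          ? 'Z' . compress($frozen)   # 'Z' flags a compressed payload
          : 'P' . $frozen;            # 'P' flags a plain payload
  }

  sub unpack_obj {
      my ($blob) = @_;
      my $flag    = substr($blob, 0, 1);
      my $payload = substr($blob, 1);
      return thaw($flag eq 'Z' ? uncompress($payload) : $payload);
  }

With a scheme like this, small 1k items skip zlib entirely and only the big
blobs pay for the round trip.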
On Mon, 2005-09-19 at 12:02 -0700, Bill Whillers wrote:
> From what I've learned (mostly from the generous people on this list), our
> local mysql, Storable and other usages do a great job at meeting those
> needs. In reality, the ideal case would be if our data were non-changing and
> could
> Why wrap the stored object in a database?
Thanks Matthew -- Our data is somewhat volatile, but since our frozen objects
can get pretty big, like others we're always looking for a better all-around
"shared" solution for volatile data that's needed during almost every
session.
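For context on the mysql + Storable combination Bill mentions, a rough sketch
of one common way to keep frozen blobs in a shared database; the session_cache
table, its columns, and the connection details are all hypothetical:

  use strict;
  use warnings;
  use DBI;
  use Storable qw(nfreeze thaw);

  # Hypothetical schema:
  #   CREATE TABLE session_cache (id VARCHAR(64) PRIMARY KEY, frozen BLOB)
  my $dbh = DBI->connect('dbi:mysql:database=app', 'user', 'pass',
                         { RaiseError => 1 });

  sub save_obj {
      my ($id, $obj) = @_;
      # nfreeze gives a byte-order-independent image, useful when
      # several machines share one database.
      $dbh->do('REPLACE INTO session_cache (id, frozen) VALUES (?, ?)',
               undef, $id, nfreeze($obj));
  }

  sub load_obj {
      my ($id) = @_;
      my ($blob) = $dbh->selectrow_array(
          'SELECT frozen FROM session_cache WHERE id = ?', undef, $id);
      return defined $blob ? thaw($blob) : undef;
  }

The database here buys shared access and persistence across processes, which
is the usual answer to "why wrap the stored object in a database" when the
data must survive outside a single Apache child.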