> I'm not quite sure what this test should show ?

For me, the test shows how writing to a gzip-compressed
pool completely kills interactive desktop performance.

At least when using a USB keyboard and mouse.
(I've not yet tested with a PS/2 keyboard and mouse, or
on a SPARC box.)

> Compressing random data is the perfect way to generate heat.
> After all, compression working relies on input entropy being low.
> But good random generators are characterized by the opposite - output 
> entropy being high. Even a good compressor, if operated on a good random
> generator's output, will only end up burning cycles, but not reducing the
> data size.

Whatever I write to the gzip-compressed pool
(128K of random data from /dev/urandom, 128K of a
buffer filled completely with '*' characters, or
the first 128K of /etc/termcap), the Xorg / GNOME
desktop becomes completely unusable while writing
to such a gzip-compressed zpool / zfs.
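
Out of curiosity, here is a quick sketch (plain Python zlib, which is the
same DEFLATE algorithm behind the gzip compression levels in ZFS) of how
those three 128K inputs compress; the /etc/termcap sample is stood in for
by repeated text here, since its exact contents vary from system to system:

    import os
    import zlib

    # Compare how well DEFLATE level 9 shrinks the three 128K test buffers.
    SIZE = 128 * 1024

    line = b"vt100|dec vt100:\\\n\t:am:bs:xn:co#80:li#24:\n"
    inputs = {
        "/dev/urandom": os.urandom(SIZE),                         # high entropy
        "'*' buffer":   b"*" * SIZE,                              # minimal entropy
        "termcap-like": (line * (SIZE // len(line) + 1))[:SIZE],  # plain text
    }

    for name, buf in inputs.items():
        out = zlib.compress(buf, 9)
        print(f"{name:14s} {len(buf)} -> {len(out)} bytes "
              f"({100.0 * len(out) / len(buf):.1f}%)")

The random buffer should come out slightly larger than the input, the '*'
buffer collapses to almost nothing, and the text lands somewhere in
between; yet all three make the desktop stall here.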

With an lzjb-compressed zpool / zfs, the system
remains more or less usable...
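
For anyone who wants to reproduce the comparison, the writer side of the
test is nothing more elaborate than a loop of 128K writes. A minimal
sketch; the file path, repeat count, and dataset names are illustrative
only, with the datasets created beforehand with something like
"zfs create -o compression=gzip tank/gziptest" and
"zfs create -o compression=lzjb tank/lzjbtest":

    import os

    SIZE = 128 * 1024
    TARGET = "/tank/gziptest/junk.dat"   # illustrative path on the compressed dataset

    buf = os.urandom(SIZE)               # or b"*" * SIZE, or 128K read from /etc/termcap

    # Keep pushing 128K blocks out to disk so the compression code stays busy.
    with open(TARGET, "wb") as f:
        for _ in range(4096):            # roughly 512 MB of writes in total
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())         # force the data out of the page cache

While that runs, just try to move the mouse or type into a terminal.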

> Hence, is the request here for the compressor module
> to 'adapt', kind of first-pass check the input data whether it's
> sufficiently low-entropy to warrant a compression attempt ?
> 
> If not, then what ?

I'm not yet sure what the problem is. But it sure would be nice
if a gzip-compressed zpool / zfs wouldn't kill interactive desktop
performance as it does now.
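
Regarding the "adapt" / first-pass idea quoted above: one cheap form of it
would be to estimate the byte-level entropy of a sample of each block and
skip gzip when the data already looks incompressible. A rough sketch of
the idea only; the sample size and threshold are guesses, and this is not
how the ZFS compression path actually works:

    import math
    import os
    from collections import Counter

    def looks_compressible(block: bytes, sample: int = 4096,
                           threshold: float = 7.5) -> bool:
        """Return True if a quick entropy estimate suggests gzip is worth trying."""
        data = block[:sample]
        counts = Counter(data)
        total = len(data)
        entropy = -sum((c / total) * math.log2(c / total)
                       for c in counts.values())
        return entropy < threshold       # ~8 bits/byte means essentially random data

    print(looks_compressible(os.urandom(128 * 1024)))   # likely False: skip gzip
    print(looks_compressible(b"*" * (128 * 1024)))      # True: worth compressing

That said, since even the highly compressible inputs stall the desktop
here, a pre-check alone probably wouldn't explain or fix what I'm seeing.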
 
 