On Thu, Apr 27, 2023 at 04:58:02PM +0200, tastytea wrote:

> > Does the transparent compression incur an overhead cost in processing,
> > memory use, or disk writes?  I feel like it certainly has to at least
> > use more memory.  Sorry if that's an RTFM question.
> 
> it'll use more cpu and memory, but disk writes and reads will be lower,
> because it compresses it on the fly.

The lz4 algorithm, which ZFS uses by default, incurs a negligible performance
penalty. Give it a try: take some big file, e.g. a video, and time the tools
below (with $FILE being the name of the file to compress):

Compression-optimised algorithms:
time gzip -k $FILE  # will take long with medium benefit
time xz -k $FILE    # will take super long
time bzip2 -k $FILE # will also take long-ish

Runtime-optimised algorithms:
time lz4 $FILE      # will go very fast, but compression is relatively low
time zstd $FILE     # will go fast with better compression (default level 3)
time zstd -6 $FILE  # will go fast-ish with more compression
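
If you don't have a video handy, here is a minimal sketch of the same
experiment with a generated test file. The filename "sample.txt" is made up
for illustration; it assumes gzip is installed (swap in zstd/lz4 the same way):

```shell
#!/bin/sh
# Generate a highly compressible test file, then time the compressor on it
# and compare the resulting sizes.
FILE=sample.txt
yes "transparent compression test line" | head -n 200000 > "$FILE"
time gzip -k "$FILE"       # -k keeps the original file around
ls -l "$FILE" "$FILE.gz"   # compare original vs. compressed size
```

Repetitive text like this compresses extremely well; real-world data (and
especially already-compressed video) will show much smaller gains.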

> it should detect early if a file is not compressible and stop.

AFAIK, ZFS works per record (block) rather than per file: each record is
compressed as it is written, and the compressed version is only stored if it
shrinks by a minimum amount; otherwise the record is stored uncompressed.
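
To see this in practice, you can enable compression on a dataset and check the
achieved ratio. This is a sketch only; "tank/data" is a placeholder dataset
name and the commands need an existing pool and sufficient privileges:

```shell
# Enable on-the-fly compression on a dataset and inspect the result.
zfs set compression=lz4 tank/data
zfs get compression,compressratio tank/data
```

Note that compressratio only reflects data written after compression was
enabled; existing records stay as they were stored.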

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

The realist knows what he wants; the idealist wants what he knows.
