On Fri, 2003-03-07 at 18:57, Rus Foster wrote:
> > This is possible but *only* if the file is very compressable [...]
> While in _theory_ I'll buy the argument, in reality I have never yet seen a 90:1 compression ratio on any file, ever, on any platform, or with any algorithm. About the best I've seen in real life is 30:1.
If the file is mostly empty, a decent compression algorithm should compress all of the empty parts down to almost nothing:
[EMAIL PROTECTED]:~]$ dd if=/dev/zero of=test.zero bs=1024 count=102400
102400+0 records in
102400+0 records out
[EMAIL PROTECTED]:~]$ ls -l test.zero
-rw-rw-r--    1 gordon   gordon   104857600 Mar  7 23:28 test.zero
[EMAIL PROTECTED]:~]$ bzip2 test.zero
[EMAIL PROTECTED]:~]$ ls -l test.zero.bz2
-rw-rw-r--    1 gordon   gordon   113 Mar  7 23:28 test.zero.bz2
Almost a 1,000,000:1 ratio (104857600 / 113 is roughly 928,000:1).
This happens in real life when a program opens a file, seeks far past the end, and writes something there. It's common when a program writes data at a pre-calculated offset like (uid * sizeof(struct)).
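For the curious, here is a minimal sketch of that pattern in C (hypothetical struct, filename, and uid; plain POSIX calls): open a file, lseek to uid * sizeof(struct), and write a single record. Everything before that offset is a hole full of zeros, which is why ls -l reports a big file, du reports only a few KB, and bzip2 squashes it to almost nothing.

/* sparse_demo.c - write one record at a pre-calculated offset,
 * leaving a large hole before it.  Hypothetical example. */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct record {
    char name[32];
    long quota;
};

int main(void)
{
    struct record r;
    memset(&r, 0, sizeof r);
    strncpy(r.name, "gordon", sizeof r.name - 1);
    r.quota = 1000;

    int fd = open("records.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Seek far past end-of-file; the skipped bytes become a hole. */
    unsigned int uid = 65000;
    off_t off = (off_t)uid * sizeof(struct record);
    if (lseek(fd, off, SEEK_SET) == (off_t)-1) {
        perror("lseek");
        return 1;
    }

    /* This write makes the file about 2.6 MB long, but only a block
     * or so is actually allocated on disk. */
    if (write(fd, &r, sizeof r) != (ssize_t)sizeof r) {
        perror("write");
        return 1;
    }
    close(fd);
    return 0;
}

Compile it with "cc sparse_demo.c -o sparse_demo", run it, then compare ls -l records.dat against du -k records.dat and bzip2 the file to see the same effect as the /dev/zero test above.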