On 12 April 2010 20:13, Szymon Janc <szy...@janc.net.pl> wrote:
> On Monday, 12 April 2010 at 19:30:00, Vladimir 'φ-coder/phcoder'
> Serbinenko wrote:
>> > You can also buffer the block offsets so that you can seek in the part
>> > of the file you have seen already.
>>
>> The decompressor is stateful, so you would need to save that state as
>> well, which may consume more RAM than decompressing the whole file. It
>> also doesn't solve the problem of retrieving e.g. the last character of a
>> compressed file without decompressing all the preceding ones.
>
> I think Michal was referring to caching the metadata of already-read
> blocks (which is not a bad idea if the file is seeked back and forth
> often).
I am not sure we are talking about the same thing here. Most compression
formats don't compress the whole file; they compress only fixed-size parts
of it, i.e. blocks. One reason is that there is a fixed limit on the
dictionary size, among other technical constraints. Another is that a tar
archive (or anything similar) looks different in different places, so
applying blanket compression to the whole thing might not be much of a win
anyway.

So you could seek in the file if you knew where these blocks start, but
after compression they have different sizes, and the format is not required
to store an index (and most probably don't): the next block simply starts
where the previous one ends, and you only know where the previous block
ended once you have finished decompressing it. However, there is no reason
to throw that information away once you have obtained it by decompressing
some blocks.

I guess there is no more metadata to be gained at this level, but you could
mean other blocks at some other level.

Thanks

Michal

_______________________________________________
Grub-devel mailing list
Grub-devel@gnu.org
http://lists.gnu.org/mailman/listinfo/grub-devel
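[Editor's note] The idea debated in this thread — lazily caching block boundaries as they are discovered, so that seeks back into already-seen regions can start at the nearest known boundary instead of the beginning of the file — can be sketched as follows. This is a minimal illustration only, not GRUB code; the on-disk layout (independent zlib blocks, each prefixed by a 4-byte compressed length), the block size, and all names are hypothetical assumptions, not any real compressed format.

```python
# Sketch of lazy block-offset caching (hypothetical format, not GRUB code):
# the file is compressed as independent fixed-size blocks; the reader records
# each block boundary as it decompresses, so later seeks into already-seen
# data can jump to the nearest cached boundary rather than restart at offset 0.
import zlib

BLOCK_SIZE = 4096  # uncompressed bytes per block (assumed format parameter)

def compress_blocks(data: bytes) -> bytes:
    """Compress data as independent blocks, each prefixed by its compressed length."""
    out = bytearray()
    for i in range(0, len(data), BLOCK_SIZE):
        comp = zlib.compress(data[i:i + BLOCK_SIZE])
        out += len(comp).to_bytes(4, "big") + comp
    return bytes(out)

class SeekableBlockReader:
    def __init__(self, raw: bytes):
        self.raw = raw
        # Cached boundaries: uncompressed offset -> compressed offset.
        # Offset 0 -> 0 is always known; entries appear as blocks are read.
        self.index = {0: 0}
        self.max_known = 0  # highest uncompressed offset with a known boundary

    def read_at(self, offset: int, size: int) -> bytes:
        # Start at the nearest cached boundary at or before the target offset.
        start_unc = max(u for u in self.index if u <= min(offset, self.max_known))
        comp_pos = self.index[start_unc]
        unc_pos = start_unc
        out = bytearray()
        while comp_pos < len(self.raw) and unc_pos < offset + size:
            clen = int.from_bytes(self.raw[comp_pos:comp_pos + 4], "big")
            block = zlib.decompress(self.raw[comp_pos + 4:comp_pos + 4 + clen])
            comp_pos += 4 + clen
            unc_pos += len(block)
            # Remember this boundary for future seeks.
            self.index[unc_pos] = comp_pos
            self.max_known = max(self.max_known, unc_pos)
            out += block
        lo = offset - start_unc
        return bytes(out[lo:lo + size])
```

Note that this does not need to save any decompressor state between blocks (each block is compressed independently here), and reading the last byte still decompresses every block once; the cache only helps on subsequent seeks into regions already visited.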