Darxus <[EMAIL PROTECTED]> writes:
| On 28 Oct 1998, Gary L. Hennigan wrote:
|
| > That was an excellent idea; unfortunately Darxus has already tried
| > this and it didn't work for him. Perhaps gzip tries to read the whole
| > file, and even though, in your case, the file is truncated, it'll do
| > what it can. In Darxus' case that means it's trying to read past the
| > 2GB limit, and that's a no-no under 80x86-based Linux systems.
| >
| > However, now knowing that gzip will in fact decompress a file that's
| > lost its tail, Darxus could try to write a little C program that calls
| > truncate() to truncate his file to around 2GB (a little less might be
| > a good idea) and see what he can do with it.
| >
| > Of course I'd treat this idea as a last resort. I have NO idea whether
| > Darxus can copy that file, as a backup before trying the truncate()
| > thing, and it'd be a Bad Thing (TM) if he truncated the existing file
| > only to find out it wouldn't work. Plus, I don't know whether truncate()
| > will even work on a file greater than 2GB.
|
| Unfortunately I cannot back it up. I only have 4.3GB of FAT32 space
| total. And it would be my guess that truncate() needs the same function
| that I'm guessing every other program is failing on, since it can only
| handle 2^31 bytes. I'd love to see somebody who knows what they're
| doing patch the libs to handle, say, 2^63-byte files... Is that even
| doable? Or would an unsigned long int work?
|
| It's sad: out of this 2.6GB file, I'm only interested in probably less
| than a megabyte of it :) -- dunno if those files are at the beginning
| or the end.
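In case you do decide to try the truncate() route, here's a minimal,
untested sketch of the "little C program" I had in mind. The path and
the new length are just placeholders; pick a length a little under
2^31 bytes. And as I said, truncate() itself may simply fail (EFBIG)
on a file that's already past 2GB, so treat this as an illustration of
the call, not a promise that it will work:

    /* Sketch only: shorten an oversized file to just under 2GB so
     * ordinary tools can open it again.  Path and length are
     * placeholders; truncate() may itself fail on a >2GB file on a
     * 32-bit system without large-file support. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>

    int main(void)
    {
        const char *path = "/mnt/win/backup.tar.gz";  /* hypothetical path */
        off_t new_len = 2000000000;                   /* a little under 2^31 */

        if (truncate(path, new_len) != 0) {
            perror("truncate");
            return 1;
        }
        return 0;
    }

Compile it with gcc and run it as root if need be; if truncate() fails
with the same "file too large" error, you're back to square one, but at
least nothing has been lost.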
A fellow user, via personal email, suggested you might try the Unix
"head" utility. Something like:

    head --bytes 1900m | gzip -d -c | tar tf -

I'm running low on ideas. I posted your question to the kernel mailing
list and I'll let you know if anything comes back. So far the only
suggestion has been to use WinZip.

Gary