From: Mark Panzer <[EMAIL PROTECTED]>
> I'm guessing Win95 can go out to 4GB (2^32) so maybe if you tried to
> extract this archive in Win it still wouldn't work (BTW: how did this
> file get created? The whole thing would have to be over 4GB uncompressed
> (right?)) Well here's what you can assume, 1. The file you want is in
> this archive but you cannot seek past 2.0GB 2. It would be a very large
> effort to recompile libc for >2GB (and all of the associated programs).
> 3. You can access this file via M$ related utilities.

Winzip was able to open the .tgz file, but it needed to extract the .tar file
to a temporary directory in order to access its contents, and I do not have
enough space for that. I created the file with a command something like
"tar -zcvf /mnt/c/home.tgz /home" -- /mnt/c is a fat32 filesystem. Apparently
these utilities can handle appending to files over 2gb, as long as the files
were smaller than that when the process started.

> Here's my idea: Try installing DOSEMU and finding a DOS based tar/gz
> extractor. Since many DOS programs write directly to the hardware, if
> they support a standard 4 byte (2^32) file pointer they might be able to
> extract it for you. It's just an idea, but I'm not positive it will
> work for you.

DOS based tar/gz utilities could be quite useful, if they do not have the 2gb
limit that the unix flavors have.

------------------------------------------------------------

From: "Raymond A. Ingles" <[EMAIL PROTECTED]>

> Well, darn. Okay, I know you can access the first 2GB of it. If you chop
> off the back .6GB, and make the file under 2GB, "tar xzvf" and all the
> other lovely suggestions will extract just fine up until the cut.

Yeah, but *how* do I access the first 2gb? The split program can't even open
the file to split it. cat can't open it either. Maybe there's a DOS/Windows
program that can split it?

> There *is* support for >2GB files somewhere, but I think you'll have to
> do some web searches or hit the [EMAIL PROTECTED] mailing list
> for info on where and how. (Or, as has also been suggested, find a 64-bit
> machine. :-/ )

Is this actually a kernel issue then?

------------------------------------------------------------

From: Mike Touloumtzis <[EMAIL PROTECTED]>

> The 2GB limit is not imposed by ext2fs; it's a limitation of the VFS
> (Virtual Filesystem) layer in the kernel. Linus has so far resisted
> the extension of file sizes to 64 bit on 32 bit architectures, because
> he doesn't like the code that gcc generates for 64-bit arithmetic. So
> I'm kind of curious as to how you could generate a >2GB file under
> Linux at all, under any filesystem.

Okay, so it is a kernel issue. Is it *only* a kernel issue? If this is fixed
in the kernel, does it need to be fixed elsewhere as well? Perhaps this could
be an option in the kernel compile configuration?

My only guess as to *how* it happened is that when tar/gzip started creating
the file it was under 2gb (a file size of 0), and since it never had to
re-open or seek in it above the 2gb mark, it just kept blindly appending
without checking. Winzip was able to open the gzip layer, so it's intact.

------------------------------------------------------------

From: Joey Hess <[EMAIL PROTECTED]>

> Mike Touloumtzis wrote:
> > The 2GB limit is not imposed by ext2fs; it's a limitation of the VFS
> > (Virtual Filesystem) layer in the kernel.
>
> Given that, it seems to me that if you could write the file directly to a
> large block device (like a spare 2.6 gb hard drive partition), you could
> then access it, since linux wouldn't be going through the VFS layer.
>
> I know you don't have a spare HD, and I can't think how you could move your
> file from the ext2 drive it's on to be directly on a block device anyway,
> since linux refuses to read the whole thing, so this is only a partial
> solution..

(It's currently on a fat32 filesystem.)

Okay, you've inspired some sick, SICK, thoughts. I hope you're happy with
yourself. :) Here goes...

If I can do a raw copy of this file to an unformatted hard drive, that would
be neat, because I could then, supposedly, untar/gunzip it directly from that
raw device. The problem is that I do not have an extra drive. I have
sufficient space on my linux drive, but I cannot copy this file to an ext2
filesystem, as it would suddenly have a file size of 0, which is not
particularly useful to me.

Here's where it gets sick. I have, in the past, done twisted things with
unix. One of them was creating a fat filesystem in a file on an ext2
filesystem -- a loopback device. I have no freaking idea why I would have
done such a thing. Scratch that... it probably had something to do with
dosemu. Or not. I dunno, but it was cool. And I have about 5.7gb free on my
ext2 filesystem, so I could create a sufficiently large fat32 filesystem
inside my ext2 filesystem in which to store it.

More complications:

1) I cannot then just use the "cp" command to move the file from my 4.3gb
   hard drive to the fat32 filesystem on my linux drive, because cp has the
   2gb limitation that everything else seems to have.

2) I would not be able to copy between the 2 fat32 filesystems under windows,
   as windows cannot mount an ext2 filesystem, let alone a fat32 filesystem
   *inside* an ext2 filesystem.

3) If I *could* get this file into the fat32 loopback filesystem and then
   format my 4.3gb hard drive, I would not be able to use dd to do a raw copy
   to that hard drive, because of the 2gb limit.

Hmm... thoughts... DOSEMU?? Perhaps. I should be able to get dosemu to mount
both fat32 filesystems *and* use the DOS copy command, right? I could then
format my 4.3gb hard drive, but I'd still have the task of doing a raw copy
of my 2.6gb home.tgz file to that drive, and I still would not be able to do
it with dd. The question now seems to be: would the DOS rawrite program be
able to handle copying a 2.6gb file from a fat32 loopback device to a hard
drive under dosemu?
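For reference, a rough sketch of the loopback trick described above --
assuming the kernel has loop device support and the losetup/mkdosfs utilities
are installed; the file names, mount point, and size below are only
placeholders, and a single image file big enough to hold home.tgz would
presumably run into the same per-file 2gb limit:

  # create an empty image file on the ext2 filesystem (1GB here; placeholder size)
  dd if=/dev/zero of=/home/fat.img bs=1024k count=1024
  # attach it to a loop device and put a FAT filesystem on it
  losetup /dev/loop0 /home/fat.img
  mkdosfs /dev/loop0
  # mount it like any other DOS filesystem
  mkdir -p /mnt/loopfat
  mount -t vfat /dev/loop0 /mnt/loopfat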
------------------------------------------------------------

From: Hamish Moffatt <[EMAIL PROTECTED]>

> Perhaps you can use the cygnus GNU/Win32 programs to do it under Windows;
> they might let you do the pipe, or GNU tar would have the z option.

This is a thought that had occurred to me -- I use cygwin32 here at work
sometimes, since I'm forced to use a win95 machine. But I was thinking that
the cygwin32 versions would have the same 2gb limit..? I had forgotten to
bring up this possibility, thank you. Oh, and it would be my guess that
they'd be able to handle piping.

> Even if ext2fs doesn't support >2Gb files, the libraries and kernel should
> IMHO let you work with them on file systems that do, eg fat32.

Well, we've established that the ext2 filesystem is not the problem, and that
the kernel, and maybe the libraries, are.
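For what it's worth, the pipe Hamish mentions would just be the usual GNU
invocation (shown here with home.tgz in the current directory; whether the
cygwin32 builds share the 2gb limit is exactly the open question):

  # decompress and unpack in a pipe, without a temporary .tar file
  gzip -dc home.tgz | tar xvf -
  # or, if that tar build has the z option compiled in
  tar xzvf home.tgz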
------------------------------------------------------------

I am appreciating this feedback/suggestions/etc. very much. Thank you all...
it'll be nice when I find a solution :) In the meantime, if you reply to this,
please reply only to the list. If you reply to both me, personally, and to the
list, I will get 2 copies of your response in my IN.debian folder, and with
the size that this thread has grown to, 1 is enough :)

________________________________________________________________________
***PGP fingerprint = D5 EB F8 E7 64 55 CF 91 C2 4F E0 4D 18 B6 7C 27***
[EMAIL PROTECTED] / http://www.op.net/~darxus
Chaos reigns.