On Sat, 13 Aug 2011, Bob Friesenhahn wrote:

On Sat, 13 Aug 2011, andy thomas wrote:
However, one of our users recently put a 35 GB tar.gz file on this server and uncompressed it to a 215 GB tar file. But when he tried to untar it, after about 43 GB had been extracted we noticed that the disk usage reported by df for that ZFS pool wasn't changing much. Using du -sm on the extracted archive directory showed that the size would increase over a period of 30 seconds or so, then suddenly drop back about 50 MB and start increasing again. In other words, it seemed to be stuck in some sort of loop; all we could do was kill tar and try again, at which point exactly the same thing happened after 43 GB had been extracted.

What 'tar' program were you using? Make sure to also try using the Solaris-provided tar rather than something like GNU tar.

Using the default Solaris tar fixed the problem!

I've tended to use GNU tar on Solaris because, apparently, there was a bug in the Solaris version of tar from very long ago where it would not extract files properly from tarfiles created on non-Solaris systems. Maybe that long-standing bug has finally been fixed?
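For anyone else who hits this, here is a rough sketch of extracting with the bundled Solaris tar instead of GNU tar (the archive name below is just an example):

    # pipe the compressed archive straight into the Solaris-bundled tar
    gzcat archive.tar.gz | /usr/bin/tar xf -

    # or, if the tarball has already been gunzipped:
    /usr/bin/tar xf archive.tar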

Thanks a lot for your help,

Andy
