Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Anton B. Rang
> How would you describe the difference between the file system checking utility and zpool scrub? Is zpool scrub lacking in its verification of the data? To answer the second question first: yes, zpool scrub is lacking, at least to the best of my knowledge (I haven't looked at the ZFS source
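For readers following along, both operations under discussion are driven from the command line; a minimal sketch of what scrub-based verification looks like, assuming a pool named `tank` (the name is a placeholder):

```shell
# Walk every allocated block in the pool, verifying checksums of data
# and metadata and repairing from redundancy where possible.
zpool scrub tank

# Report scrub progress and any checksum errors found so far.
zpool status -v tank
```

The limitation raised in this thread is that a scrub only traverses blocks reachable from the pool's current valid state; it cannot help when the pool's top-level metadata is itself unreadable.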

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-07 Thread Marc Bevand
Borys Saulyak eumetsat.int> writes:
> root omases11:~[8]# zpool import
> [...]
>   pool: private
>     id: 3180576189687249855
>  state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
>
>         private                                  ONLINE
>         c7t60060160CBA21000A6D22553CA91DC11d0    ONLIN
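Since a pool can be imported by its numeric identifier as well as by its name, a sketch using the id from the listing above (handy when two exported pools share a name):

```shell
# Import by pool id instead of name; an optional new name
# could be appended as a final argument.
zpool import 3180576189687249855
```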

Re: [zfs-discuss] Poor ZFS performance when file system is close to full

2008-08-07 Thread Andrey Dmitriev
there were 130G left on the zpool. df -h from before one of the file systems was destroyed is in the original post. Some file systems viewed that as 1% full, others as 94-97% (and some others with fairly random numbers), which is another mystery to me as well. Shouldn't all file systems have show
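One plausible explanation for the wildly different percentages is that each dataset reports usage against its own quota or reservation view while the pool reports raw capacity; a minimal arithmetic sketch with hypothetical numbers (a 14 TiB pool with 130 GiB free, and a dataset with a 1000 GiB quota):

```shell
# Pool-level view: percentage used out of total capacity.
pool_size_gib=14336   # hypothetical 14 TiB pool
pool_free_gib=130     # the ~130G reported free
pool_used_pct=$(( (pool_size_gib - pool_free_gib) * 100 / pool_size_gib ))
echo "pool: ${pool_used_pct}% full"        # prints "pool: 99% full"

# Dataset-level view: the same moment can look nearly empty when
# usage is measured against a generous per-dataset quota.
ds_quota_gib=1000
ds_used_gib=10
ds_used_pct=$(( ds_used_gib * 100 / ds_quota_gib ))
echo "dataset: ${ds_used_pct}% full"       # prints "dataset: 1% full"
```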

Re: [zfs-discuss] Poor ZFS performance when file system is close to full

2008-08-07 Thread Bob Friesenhahn
On Thu, 7 Aug 2008, Andrey Dmitriev wrote: > I am sure. Nothing but this box ever accessed them. All NFS access was stopped to the box. The RAID sets are identical (9-drive RAID5). We tested the file system almost non-stop for almost two days and never did I ever get it to write above 4

Re: [zfs-discuss] Poor ZFS performance when file system is close to full

2008-08-07 Thread Andrey Dmitriev
I am sure. Nothing but this box ever accessed them. All NFS access was stopped to the box. The RAID sets are identical (9-drive RAID5). We tested the file system almost non-stop for almost two days and never did I ever get it to write above 4 megs (on average it was below 3 megs). The second I

Re: [zfs-discuss] Poor ZFS performance when file system is close to full

2008-08-07 Thread Bob Friesenhahn
On Thu, 7 Aug 2008, Andrey Dmitriev wrote: > We had a situation where write speeds to a ZFS pool consisting of two 7TB RAID5 LUNs came to a crawl. We have spent a good 100 man-hours trying to troubleshoot the issue, eliminating HW issues. In the end, when we whacked about 2TB out of 14, performan

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Victor Latushkin
Miles Nordin writes: >> "r" == Ross <[EMAIL PROTECTED]> writes: > r> Tom wrote "There was a problem with the SAS bus which caused various errors including the inevitable kernel panic". It's the various errors part that catches my eye. > yeah, possibly, but there

[zfs-discuss] Poor ZFS performance when file system is close to full

2008-08-07 Thread Andrey Dmitriev
All, We had a situation where write speeds to a ZFS pool consisting of two 7TB RAID5 LUNs came to a crawl. We have spent a good 100 man-hours trying to troubleshoot the issue, eliminating HW issues. In the end, when we whacked about 2TB out of 14, performance went back to normal (300 megs+ vs 3 megs whe

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread A Darren Dunham
On Thu, Aug 07, 2008 at 11:34:12AM -0700, Richard Elling wrote: > Anton B. Rang wrote: >> First, there are two types of utilities which might be useful in the situation where a ZFS pool has become corrupted. The first is a file system checking utility (call it zfsck); the second is a dat

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Richard Elling
[I think Miles and I seem to be talking about two different topics] Miles Nordin wrote: >> "re" == Richard Elling <[EMAIL PROTECTED]> writes: > re> If your pool is not redundant, the chance that data corruption can render some or all of your data inacces

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Bill Sommerfeld
On Thu, 2008-08-07 at 11:34 -0700, Richard Elling wrote: > How would you describe the difference between the data recovery utility and ZFS's normal data recovery process? I'm not Anton, but I think I see what he's getting at. Assume you have disks which once contained a pool but all of the uberb

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Bob Friesenhahn
On Thu, 7 Aug 2008, Miles Nordin wrote: I must apologize that I was not able to read your complete email due to local buffer overflow ... > someone who knows ZFS well, like Pavel. Also, there is enough concern for people designing paranoid systems to approach them with the view, ``ZFS is not

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Richard Elling
Anton B. Rang wrote: >> From the ZFS Administration Guide, Chapter 11, Data Repair section: >> "Given that the fsck utility is designed to repair known pathologies specific to individual file systems, writing such a utility for a file system with no known pathologies is impossible."

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Miles Nordin
> "r" == Ross <[EMAIL PROTECTED]> writes: r> Tom wrote "There was a problem with the SAS bus which caused various errors including the inevitable kernel panic". It's the various errors part that catches my eye. yeah, possibly, but there are checksums on the SAS bus, and

Re: [zfs-discuss] Shared ZFS in Multi-boot?

2008-08-07 Thread Bob Netherton
On Thu, 2008-08-07 at 09:16 -0700, Daniel Templeton wrote: > Is there a way that I can add the disk to a ZFS pool and have the ZFS pool accessible to all of the OS instances? I poked through the docs and searched around a bit, but I couldn't find anything on the topic. Yes. I do that all o
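The workflow Bob describes amounts to an export/import handoff between OS instances; a sketch, with `shared` as a placeholder pool name:

```shell
# Before rebooting into another OS instance, release the pool cleanly.
zpool export shared

# After booting the other instance, take ownership of the pool.
zpool import shared

# If the previous instance went down without exporting, the pool still
# records the old hostid and the import must be forced.
zpool import -f shared
```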

[zfs-discuss] Shared ZFS in Multi-boot?

2008-08-07 Thread Daniel Templeton
I have a machine with S10, Nevada, and OpenSolaris all installed on the same disk. My objective is to be able to share user data among all of the images. It's easy with S10 and Nevada because they're all in slices in the same partition. OpenSolaris has its own partition and so can't share da

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Cindy Swearingen
Hi Richard, Yes, sure. We can add that scenario. What's been on my todo list is a ZFS troubleshooting wiki. I've been collecting issues. Let's talk soon. Cindy Richard Elling wrote: > Tom Bird wrote: >> Richard Elling wrote: >>> I see no evidence that the data is or is not correct

Re: [zfs-discuss] ZFS on 32bit.

2008-08-07 Thread Ross
Hmm... it appears that my e-mail to the zfs list covering the problems has disappeared. I will send it again and cross my fingers. The basic problem I found was that with the Supermicro AOC-SAT2-MV8 card (using the marvell chipset), drive removals are not detected consistently by Solaris. The

Re: [zfs-discuss] ZFS on 32bit.

2008-08-07 Thread Brian D. Horn
1) I don't believe that any bug report has been generated despite various e-mails about this topic. 2) The marvell88sx driver has not been changed recently, so if this problem actually exists, it is probably related to the sata framework. 3) Is this problem simply that when a device i
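One way to pin down point 3 is to watch the attachment point's state while a drive is pulled; a sketch using cfgadm (the `sata1/3` attachment point is hypothetical):

```shell
# List SATA attachment points and their receptacle/occupant state.
cfgadm -al | grep sata

# After pulling a drive, the state for its attachment point should move
# from "connected configured ok" toward "empty unconfigured"; if it
# does not change, the removal went undetected by the framework.
cfgadm -al | grep sata1/3
```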

[zfs-discuss] Kernel panic at zpool import

2008-08-07 Thread Borys Saulyak
Hi, I have a problem with Solaris 10. I know that this forum is for OpenSolaris, but maybe someone will have an idea. My box is crashing on any attempt to import a ZFS pool. The first crash happened on an export operation, and since then I cannot import the pool anymore due to kernel panics. Is there any way o

Re: [zfs-discuss] ZFS on 32bit.

2008-08-07 Thread Ross
> In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches) all the known marvell88sx problems have long ago been dealt with. I'd dispute that. My testing appears to show major hot-plug problems with the marvell driver in snv_94.

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Volker A. Brandt
Anton B. Rang writes: > dumping out the raw data structures and looking at them by hand is the only way to determine what ZFS doesn't like and deduce what went wrong (and how to fix it). http://www.osdevcon.org/2008/files/osdevcon2008-max.pdf :-) --
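The raw-structure dumping Anton mentions is usually done with zdb; a sketch, assuming a pool named `tank` and a placeholder device path:

```shell
# Print the pool's uberblocks (repeating -u raises verbosity).
zdb -uuu tank

# Dump the four on-disk label copies from one vdev; intact labels
# mean the damage lies higher in the metadata tree.
zdb -l /dev/dsk/c0t0d0s0
```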

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Victor Latushkin
Miles Nordin wrote: >> "re" == Richard Elling <[EMAIL PROTECTED]> writes: >> "tb" == Tom Bird <[EMAIL PROTECTED]> writes: > tb> There was a problem with the SAS bus which caused various errors including the inevitable kernel panic, the thing came back up with 3 ou

Re: [zfs-discuss] ZFS on 32bit.

2008-08-07 Thread Bryan Allen
On 2008-08-07 03:53:04, Marc Bevand wrote: | Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are a well-known problem. They are caused by memory contention in the kernel heap. | Check
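A quick check of whether the kernel heap is under pressure on such a system is mdb's ::memstat summary; a sketch (requires root on a live Solaris kernel):

```shell
# Summarize kernel vs. user page consumption; on a 32-bit kernel the
# kernel address space is small, so a dominant "Kernel" share under
# heavy ZFS I/O is consistent with kernel-heap contention.
echo ::memstat | mdb -k
```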

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Ross
Hi folks, Miles, I don't know if you have more information about this problem than I'm seeing, but from what Tom wrote I don't see how you can assume this is as simple a problem as an unclean shutdown. Tom wrote "There was a problem with the SAS bus which caused various errors including the i

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Victor Latushkin
> Would be grateful for any ideas, relevant output here:
>
> [EMAIL PROTECTED]:~# zpool import
>   pool: content
>     id: 14205780542041739352
>  state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
>         The pool may b
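When an import fails with corrupted metadata on builds of this era (the rewind-style `zpool import -F` recovery option arrived only in later pool versions), about all that can be done from the command line is to inspect the on-disk state; a sketch with a placeholder device path:

```shell
# Read the four label copies on one of the pool's devices; readable
# labels localize the corruption to higher-level metadata.
zdb -l /dev/dsk/c0t0d0s0

# Ask zdb to examine the exported (not-imported) pool by name.
zdb -e content
```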