I thought ZFS wouldn't just destroy a ZFS volume? Hmm, I'm not sure what to do now ...

First of all, the ZFS volume Data/subversion1 had been working for a year, and suddenly, after a reboot of the Solaris server followed by running the zpool export and zpool import commands, I'm having problems with it.
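For reference, the export/import sequence was roughly this (from memory, so the exact invocation may have differed a little):

  zpool export Data
  zpool import          # lists pools available for import
  zpool import Data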

Today I checked some more, after reading this guide: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

My main question is: is my ZFS volume, which is part of a zpool, lost, or can I recover it?

Would it help if I upgrade the Solaris server to the latest release and then do another zpool export and zpool import?
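(By upgrading I mean patching the OS and, I assume, also bumping the on-disk pool version afterwards, roughly:

  zpool upgrade -v     # show which pool versions this release supports
  zpool upgrade Data   # note: one-way; an upgraded pool can't be imported on older releases

though I have no idea whether that matters for recovering the volume.)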

All advice appreciated :-)

Here is some more information:

-bash-3.00$ zfs list -o name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
NAME                TYPE   USED  AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
Data/subversion1  volume  22.5K   511G  1.00x       off    250G     250G

I've also learned that the AVAIL column reports what's available in the zpool, and NOT what's available in the ZFS volume.
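So to see the numbers that actually belong to the volume rather than to the pool, I've been comparing something like:

  zfs get used,referenced,available,reservation,volsize Data/subversion1
  zpool list Data       # pool-level size/used/available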

-bash-3.00$ sudo zpool status -v
Password:
  pool: Data
 state: ONLINE
 scrub: scrub in progress, 5.86% done, 12h46m to go
config:

        NAME                     STATE     READ WRITE CKSUM
        Data                     ONLINE       0     0     0
          c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors

One interesting thing here is that the scrub was supposed to finish today, but actual progress is much slower than the estimate reported here. And will the scrub process help at all in my case?
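If the scrub isn't going to help, I suppose I can just stop it; if I remember the syntax right, -s stops a scrub in progress:

  zpool scrub -s Data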


-bash-3.00$ sudo fmdump
TIME                 UUID                                 SUNW-MSG-ID
Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K

bash-3.00$ sudo fmdump -ev
TIME                 CLASS                                 ENA
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e688d11500401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68926e600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68d8bb600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68e981900001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e692a4ca00001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.data                   0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                     0x915e68a3d3900401
Nov 15 2007 10:16:12 ereport.fs.zfs.vdev.open_failed       0x0533bb1b56400401
Nov 15 2007 10:16:12 ereport.fs.zfs.zpool                  0x0533bb1b56400401
Oct 14 09:31:31.6092 ereport.fm.fmd.log_append             0x02eb96a8b6502801
Oct 14 09:31:31.8643 ereport.fm.fmd.mod_init               0x02ec89eadd100401
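If the full event details would be useful I can also pull them with the verbose forms, something like:

  fmdump -eV            # verbose dump of the error events
  fmadm faulty          # any faults currently diagnosed by FMA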


On 3 March 2009, at 08:10, Lars-Gunnar Persson wrote:

I've turned off iSCSI sharing at the moment.
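(That was just the shareiscsi property on the volume, from memory something like:

  zfs set shareiscsi=off Data/subversion1

and =on to turn it back on.)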

My first question is: how can zfs report that available is larger than the reservation on a ZFS volume? I also know that used should be larger than 22.5K. Isn't this strange?

Lars-Gunnar Persson

On 3 March 2009, at 00:38, Richard Elling <richard.ell...@gmail.com> wrote:

Lars-Gunnar Persson wrote:
Hey to everyone on this mailing list (since this is my first post)!

Welcome!


We have a Sun Fire X4100 M2 server running Solaris 10 u6, and after some system work this weekend we have a problem with just one ZFS volume.

We have a pool called /Data with many file systems and two volumes. The status of my zpool is:

-bash-3.00$ zpool status
  pool: Data
 state: ONLINE
 scrub: scrub in progress, 5.99% done, 13h38m to go
config:

        NAME                     STATE     READ WRITE CKSUM
        Data                     ONLINE       0     0     0
          c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors


Yesterday I started the scrub because I read that it was a smart thing to do after a zpool export and zpool import procedure. I did the export because I wanted to move the zpool to another OS installation, but I changed my mind and did the zpool import on the same OS installation that I exported from.

After checking as much information as I could find on the web, I was advised to run the zpool scrub after an import.
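The scrub itself was just the plain command, as far as I remember:

  zpool scrub Data
  zpool status -v Data   # to watch its progress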

Well, the problem now is that one volume in this zpool is not working. I've shared it via iscsi to a Linux host (all of this was working on Friday). The Linux host reports that it can't find a partition table. Here is the log from the Linux host:

Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:36 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:37 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:37 eva kernel: sdb: unknown partition table
Mar 2 11:09:37 eva kernel: Attached scsi disk sdb at scsi28, channel 0, id 0, lun 0
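To rule out a stale view on the initiator side, I would think re-reading the device with something like the following should at least show whether the LUN itself is still reachable (sdb being the iSCSI disk here):

  fdisk -l /dev/sdb                              # re-read the (missing) partition table
  dd if=/dev/sdb bs=512 count=1 | od -c | head   # peek at the first sector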


So I checked the status on my Solaris server, and I found this information a bit strange:

-bash-3.00$ zfs list Data/subversion1
NAME               USED  AVAIL  REFER  MOUNTPOINT
Data/subversion1  22.5K   519G  22.5K  -

How can there be 519GB available on a volume that is 250GB in size? Here are more details:

-bash-3.00$ zfs get all Data/subversion1
NAME              PROPERTY       VALUE                SOURCE
Data/subversion1  type           volume               -
Data/subversion1  creation       Wed Apr 2 9:06 2008  -
Data/subversion1  used           22.5K                -
Data/subversion1  available      519G                 -
Data/subversion1  referenced     22.5K                -
Data/subversion1  compressratio  1.00x                -
Data/subversion1  reservation    250G                 local
Data/subversion1  volsize        250G                 -
Data/subversion1  volblocksize   8K                   -
Data/subversion1  checksum       on                   default
Data/subversion1  compression    off                  default
Data/subversion1  readonly       off                  default
Data/subversion1  shareiscsi     off                  local

It does not appear that Data/subversion1 is being shared via iscsi?
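A quick way to confirm, assuming the export is supposed to go through the shareiscsi property and the iscsitgt target daemon, would be something like:

  zfs get shareiscsi Data/subversion1
  iscsitadm list target -v
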
-- richard



