And then the zdb process ends with:

Traversing all blocks to verify checksums and verify nothing leaked ...
out of memory -- generating core dump
Abort (core dumped)

Hmm, what does that mean?
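
In case zdb is simply hitting the shell's per-process memory limits, one thing I could try before re-running it is roughly the following (the -b option restricts zdb to the block/leak traversal; whether raising the data-segment limit is actually allowed here is just an assumption on my part):

-bash-3.00$ ulimit -d              # check the current data-segment limit
-bash-3.00$ ulimit -d unlimited    # raise it, if the shell permits
-bash-3.00$ sudo zdb -b Data       # re-run only the block traversal / leak check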


I also ran these commands:

-bash-3.00$ sudo fmstat
module             ev_recv ev_acpt  wait   svc_t  %w  %b  open  solve  memsz  bufsz
cpumem-retire            0       0   0.0     0.1   0   0     0      0      0      0
disk-transport           0       0   0.0     4.1   0   0     0      0    32b      0
eft                      0       0   0.0     5.7   0   0     0      0   1.4M      0
fmd-self-diagnosis       0       0   0.0     0.2   0   0     0      0      0      0
io-retire                0       0   0.0     0.2   0   0     0      0      0      0
snmp-trapgen             0       0   0.0     0.1   0   0     0      0    32b      0
sysevent-transport       0       0   0.0  1520.8   0   0     0      0      0      0
syslog-msgs              0       0   0.0     0.1   0   0     0      0      0      0
zfs-diagnosis          301       0   0.0     0.0   0   0     2      0   120b    80b
zfs-retire               0       0   0.0     0.3   0   0     0      0      0      0
-bash-3.00$ sudo fmadm config
MODULE                   VERSION STATUS  DESCRIPTION
cpumem-retire            1.1     active  CPU/Memory Retire Agent
disk-transport           1.0     active  Disk Transport Agent
eft                      1.16    active  eft diagnosis engine
fmd-self-diagnosis       1.0     active  Fault Manager Self-Diagnosis
io-retire                1.0     active  I/O Retire Agent
snmp-trapgen             1.0     active  SNMP Trap Generation Agent
sysevent-transport       1.0     active  SysEvent Transport Agent
syslog-msgs              1.0     active  Syslog Messaging Agent
zfs-diagnosis            1.0     active  ZFS Diagnosis Engine
zfs-retire               1.0     active  ZFS Retire Agent
-bash-3.00$ sudo zpool upgrade -v
This system is currently running ZFS version 4.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history

For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
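
If I do end up upgrading the OS to a release with a newer ZFS version, my understanding (possibly wrong) is that bringing the pool itself up to the new on-disk version is then a separate, one-way step, something like:

-bash-3.00$ sudo zpool upgrade Data    # irreversible: upgrades the pool 'Data' to the newest version the software supports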


I hope I've provided enough information for all you ZFS experts out there.

Any tips or solutions in sight? Or is this ZFS volume gone completely?

Lars-Gunnar Persson


On 3 March 2009, at 13:58, Lars-Gunnar Persson wrote:

I'm running a new command now, zdb. Here is the output so far:

-bash-3.00$ sudo zdb Data
   version=4
   name='Data'
   state=0
   txg=9806565
   pool_guid=6808539022472427249
   vdev_tree
       type='root'
       id=0
       guid=6808539022472427249
       children[0]
               type='disk'
               id=0
               guid=2167768931511572294
               path='/dev/dsk/c4t5000402001FC442Cd0s0'
               devid='id1,s...@n6000402001fc442c6e1a0e9700000000/a'
               whole_disk=1
               metaslab_array=14
               metaslab_shift=36
               ashift=9
               asize=11801587875840
Uberblock

       magic = 0000000000bab10c
       version = 4
       txg = 9842225
       guid_sum = 8976307953983999543
       timestamp = 1236084668 UTC = Tue Mar  3 13:51:08 2009

Dataset mos [META], ID 0, cr_txg 4, 392M, 1213 objects
... [snip]

Dataset Data/subversion1 [ZVOL], ID 3527, cr_txg 2514080, 22.5K, 3 objects

... [snip]
Dataset Data [ZPL], ID 5, cr_txg 4, 108M, 2898 objects

Traversing all blocks to verify checksums and verify nothing leaked ...

and I'm still waiting for this process to finish.
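
While it runs, a simple way to keep an eye on how much memory zdb is using would be something like this (prstat sorted by resident set size; the one-second refresh interval is arbitrary):

-bash-3.00$ prstat -s rss 1    # largest-RSS processes first, refreshed every second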


On 3 March 2009, at 11:18, Lars-Gunnar Persson wrote:

I thought a ZFS file system wouldn't destroy a ZFS volume? Hmm, I'm not sure what to do now ...

First of all, this ZFS volume, Data/subversion1, had been working for a year, and suddenly, after a reboot of the Solaris server and running the zpool export and zpool import commands, I have problems with this ZFS volume.

Today I checked some more, after reading this guide: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

My main question is: is my ZFS volume, which is part of a zpool, lost, or can I recover it?

Would it help if I upgrade the Solaris server to the latest release and then do a zpool export and zpool import?

All advice appreciated :-)

Here is some more information:

-bash-3.00$ zfs list -o name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
NAME              TYPE    USED   AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
Data/subversion1  volume  22.5K  511G   1.00x  off       250G    250G

I've also learned that the AVAIL column reports what's available in the zpool and NOT what's available in the ZFS volume.
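
To see that side by side, I can list the pool and the volume together, re-using the same columns as above; a small sketch:

-bash-3.00$ zfs list -o name,type,used,avail,reserv,volsize Data Data/subversion1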

-bash-3.00$ sudo zpool status -v
Password:
pool: Data
state: ONLINE
scrub: scrub in progress, 5.86% done, 12h46m to go
config:

      NAME                     STATE     READ WRITE CKSUM
      Data                     ONLINE       0     0     0
        c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors

The interesting thing here is that the scrub process should have finished by today, but progress is much slower than reported here. And will the scrub process help at all in my case?
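
While waiting, a rough way to see whether the scrub is actually moving is to re-check its status and the pool's I/O every so often, for example (the 60-second interval is picked arbitrarily):

-bash-3.00$ sudo zpool status -v Data    # shows percent done and the estimated time to go
-bash-3.00$ zpool iostat Data 60         # one line of pool read/write activity per minute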


-bash-3.00$ sudo fmdump
TIME                 UUID                                 SUNW-MSG-ID
Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K

-bash-3.00$ sudo fmdump -ev
TIME                 CLASS                                 ENA
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e688d11500401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68926e600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68d8bb600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68e981900001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e692a4ca00001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.data               0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                 0x915e68a3d3900401
Nov 15 2007 10:16:12 ereport.fs.zfs.vdev.open_failed   0x0533bb1b56400401
Nov 15 2007 10:16:12 ereport.fs.zfs.zpool              0x0533bb1b56400401
Oct 14 09:31:31.6092 ereport.fm.fmd.log_append         0x02eb96a8b6502801
Oct 14 09:31:31.8643 ereport.fm.fmd.mod_init           0x02ec89eadd100401


On 3 March 2009, at 08:10, Lars-Gunnar Persson wrote:

I've turned off iSCSI sharing at the moment.

My first question is: how can ZFS report an available value that is larger than the reservation on a ZFS volume? I also know that used should be larger than 22.5K. Isn't this strange?

Lars-Gunnar Persson

On 3 March 2009, at 00:38, Richard Elling <richard.ell...@gmail.com> wrote:

Lars-Gunnar Persson wrote:
Hey to everyone on this mailing list (since this is my first post)!

Welcome!


We've a Sun Fire X4100 M2 server running Solaris 10 u6 and after some system work this weekend we have a problem with only one ZFS volume.

We have a pool called /Data with many file systems and two volumes. The status of my zpool is:

-bash-3.00$ zpool status
pool: Data
state: ONLINE
scrub: scrub in progress, 5.99% done, 13h38m to go
config:

      NAME                     STATE     READ WRITE CKSUM
      Data                     ONLINE       0     0     0
        c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors


Yesterday I started the scrub process because I read that it was a smart thing to do after a zpool export and zpool import procedure. I did this because I wanted to move the zpool to another OS installation, but I changed my mind and did the zpool import on the same OS installation I had exported from.

After checking as much information as I could find on the web, I was advised to run a zpool scrub after an import.

Well, the problem now is that one volume in this zpool is not working. I've shared it via iscsi to a Linux host (all of this was working on Friday). The Linux host reports that it can't find a partition table. Here is the log from the Linux host:

Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:36 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:37 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:37 eva kernel: sdb: unknown partition table
Mar 2 11:09:37 eva kernel: Attached scsi disk sdb at scsi28, channel 0, id 0, lun 0
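
From the Solaris side, a quick sanity check on the iSCSI target itself would be something along these lines; I'm assuming the volume was exported with the built-in Solaris 10 iSCSI target daemon, managed via iscsitadm:

-bash-3.00$ sudo iscsitadm list target -v    # list the configured targets and their details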


So I checked the status on my Solaris server, and I found this information a bit strange:

-bash-3.00$ zfs list Data/subversion1
NAME               USED  AVAIL  REFER  MOUNTPOINT
Data/subversion1  22.5K   519G  22.5K  -

How can there be 519GB available on a volume that is 250GB in size? Here are more details:

-bash-3.00$ zfs get all Data/subversion1
NAME              PROPERTY       VALUE                SOURCE
Data/subversion1  type           volume               -
Data/subversion1  creation       Wed Apr 2 9:06 2008  -
Data/subversion1  used           22.5K                -
Data/subversion1  available      519G                 -
Data/subversion1  referenced     22.5K                -
Data/subversion1  compressratio  1.00x                -
Data/subversion1  reservation    250G                 local
Data/subversion1  volsize        250G                 -
Data/subversion1  volblocksize   8K                   -
Data/subversion1  checksum       on                   default
Data/subversion1  compression    off                  default
Data/subversion1  readonly       off                  default
Data/subversion1  shareiscsi     off                  local

It does not appear that Data/subversion1 is being shared via iscsi?
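
If you do want it shared again, re-enabling should just be a matter of something like this (assuming the Solaris 10 shareiscsi mechanism, which is what the property above suggests):

   zfs set shareiscsi=on Data/subversion1
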
-- richard








.--------------------------------------------------------------------------.
| Lars-Gunnar Persson                                                       |
| IT Manager                                                                |
| Nansen senteret for miljø og fjernmåling                                  |
| Address : Thormøhlensgate 47, 5006 Bergen                                 |
| Direct  : 55 20 58 31, switchboard: 55 20 58 00, fax: 55 20 58 01         |
| Internet: http://www.nersc.no, e-mail: lars-gunnar.pers...@nersc.no       |
'--------------------------------------------------------------------------'

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
