On 3 March 2009, at 14:51, Sanjeev wrote:
Thank you for your reply.
Lars-Gunnar,
On Tue, Mar 03, 2009 at 11:18:27AM +0100, Lars-Gunnar Persson wrote:
-bash-3.00$ zfs list -o name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
NAME              TYPE    USED   AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
Data/subversion1  volume  22.5K  511G   1.00x  off       250G    250G
This shows that the volume still exists.
Correct me if I am wrong here: did you mean that the contents of the volume subversion1 are corrupted?
I'm not 100% sure if it's the content of this volume or if it's the
zpool that is corrupted. It was iSCSI exported to a Linux host where
it was formatted as an ext3 file system.
What does that volume have on it? Does it contain a filesystem which can be mounted on Solaris? If so, we could try mounting it locally on the Solaris box. This is to rule out any iSCSI issues.
I don't think Solaris supports mounting ext3 file systems, does it?
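One way to rule out iSCSI without mounting ext3 might be to read the raw zvol device directly on the Solaris box and check whether anything is still there (a minimal sketch; /dev/zvol/rdsk/Data/subversion1 is the usual device path for that volume):
-bash-3.00$ sudo dd if=/dev/zvol/rdsk/Data/subversion1 bs=1024 skip=1 count=1 2>/dev/null | od -c | head
An ext3 superblock normally starts at byte offset 1024, so all zeros here would suggest the volume contents themselves are gone rather than just the iSCSI path being broken.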
Also, do you have any snapshots of the volume? If so, you could roll back to the latest snapshot. But that would mean we lose some amount of data.
Nope, no snapshots, since this is a Subversion repository with versioning built in. I didn't think I'd end up in this situation.
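(For future reference, a cheap safeguard before an export/import or similar operation is a one-off snapshot; the snapshot name below is just an example:
-bash-3.00$ sudo zfs snapshot Data/subversion1@before-export
It costs almost no space while nothing changes, and 'zfs rollback Data/subversion1@before-export' would undo any damage done afterwards.)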
Also, you mentioned that the volume was in use for a year. But I see in the above output that it has only about 22.5K used. Is that correct? I would have expected it to be higher.
You're absolutely right, the 22.5K is wrong. That is why I suspect ZFS is doing something wrong...
You should also check what 'zpool history -i ' says.
it says:
-bash-3.00$ sudo zpool history Data | grep subversion
2008-04-02.09:08:53 zfs create -V 250GB Data/subversion1
2008-04-02.09:08:53 zfs set shareiscsi=on Data/subversion1
2008-08-14.14:13:58 zfs set shareiscsi=off Data/subversion1
2008-08-29.15:08:50 zfs set shareiscsi=on Data/subversion1
2009-03-02.10:37:36 zfs set shareiscsi=off Data/subversion1
2009-03-02.10:37:55 zfs set shareiscsi=on Data/subversion1
2009-03-02.11:37:22 zfs set shareiscsi=off Data/subversion1
2009-03-03.09:37:34 zfs set shareiscsi=on Data/subversion1
and:
2009-03-01.11:26:22 zpool export -f Data
2009-03-01.13:21:58 zpool import Data
2009-03-01.14:32:04 zpool scrub Data
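('zpool history' also accepts -i for internally logged events and -l for the long format with user and hostname, which can show more detail around the export/import, e.g.:
-bash-3.00$ sudo zpool history -il Data | tail -30
Whether -i/-l are available depends on the zpool version on this Solaris 10 u6 box.)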
Thanks and regards,
Sanjeev
More info:
I just rebooted the Solaris server and no change in status:
-bash-3.00$ zpool status -v
pool: Data
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
Data ONLINE 0 0 0
c4t5000402001FC442Cd0 ONLINE 0 0 0
errors: No known data errors
The scrubbing has stopped and the zdb command crashed the server.
I've also learned that the AVAIL column reports what's available in the zpool and NOT what's available in the ZFS volume.
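To see the per-volume numbers side by side rather than the pool-level AVAIL, something like this should work:
-bash-3.00$ zfs get -H -o property,value used,available,referenced,reservation,volsize Data/subversion1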
-bash-3.00$ sudo zpool status -v
Password:
pool: Data
state: ONLINE
scrub: scrub in progress, 5.86% done, 12h46m to go
config:
NAME STATE READ WRITE CKSUM
Data ONLINE 0 0 0
c4t5000402001FC442Cd0 ONLINE 0 0 0
errors: No known data errors
The interesting thing here is that the scrub process should have finished today, but the progress is much slower than reported here. And will the scrub process help at all in my case?
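(If the scrub is just getting in the way, it can be stopped at any time with:
-bash-3.00$ sudo zpool scrub -s Data
A scrub only verifies checksums and repairs blocks from redundant copies; with a single top-level vdev and no known data errors it is unlikely to bring the volume contents back.)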
-bash-3.00$ sudo fmdump
TIME UUID SUNW-MSG-ID
Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K
-bash-3.00$ sudo fmdump -ev
TIME                 CLASS                            ENA
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e688d11500401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68926e600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68d8bb600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68e981900001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e692a4ca00001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.data              0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
Nov 15 2007 10:16:12 ereport.fs.zfs.vdev.open_failed  0x0533bb1b56400401
Nov 15 2007 10:16:12 ereport.fs.zfs.zpool             0x0533bb1b56400401
Oct 14 09:31:31.6092 ereport.fm.fmd.log_append        0x02eb96a8b6502801
Oct 14 09:31:31.8643 ereport.fm.fmd.mod_init          0x02ec89eadd100401
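All of the zfs ereports above are from November 2007, so they are probably unrelated to this weekend's problem. To look only at recent events, and in full detail, fmdump can take a time window and a verbose flag, roughly:
-bash-3.00$ sudo fmdump -eV -t 01Mar09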
On 3 March 2009, at 08:10, Lars-Gunnar Persson wrote:
I've turned off iSCSI sharing at the moment.
My first question is: how can ZFS report that available is larger than the reservation on a ZFS volume? I also know that used should be larger than 22.5K. Isn't this strange?
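One plausible reading of the numbers, assuming the pool itself had roughly 269G free at that point: for a volume with a reservation, 'available' is reported as the pool's free space plus the volume's unused reservation, i.e. about 269G + (250G - 22.5K) ≈ 519G. That would make 'available' larger than the reservation without anything being wrong with that particular figure; the suspicious number is still the 22.5K used.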
Lars-Gunnar Persson
On 3 March 2009, at 00:38, Richard Elling <richard.ell...@gmail.com> wrote:
Lars-Gunnar Persson wrote:
Hey to everyone on this mailing list (since this is my first
post)!
Welcome!
We have a Sun Fire X4100 M2 server running Solaris 10 u6, and after some system work this weekend we have a problem with only one ZFS volume. We have a pool called /Data with many file systems and two volumes. The status of my zpool is:
-bash-3.00$ zpool status
pool: Data
state: ONLINE
scrub: scrub in progress, 5.99% done, 13h38m to go
config:
NAME STATE READ WRITE CKSUM
Data ONLINE 0 0 0
c4t5000402001FC442Cd0 ONLINE 0 0 0
errors: No known data errors
Yesterday I started the scrub process because I read that it was a smart thing to do after a zpool export and zpool import procedure. I did this because I wanted to move the zpool to another OS installation, but changed my mind and did a zpool import on the same OS I had exported from. After checking as much information as I could find on the web, I was advised to run the zpool scrub after an import.
Well, the problem now is that one volume in this zpool is not
working. I've shared it via iSCSI to a Linux host (all of this was
working on Friday). The Linux host reports that it can't find a
partition table. Here is the log from the Linux host:
Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:36 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:37 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:37 eva kernel: sdb: unknown partition table
Mar 2 11:09:37 eva kernel: Attached scsi disk sdb at scsi28, channel 0, id 0, lun 0
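If the ext3 filesystem was created directly on the whole device (which would explain the "unknown partition table" message even when things were healthy), a quick read-only check on the Linux side for whether the ext3 superblock is still present could be:
dumpe2fs -h /dev/sdb
e2fsck -n /dev/sdb
Both only read from the device; if they report no valid superblock, that would point at the ZFS side rather than at a partitioning issue on the initiator.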
So I checked the status on my Solaris server, and I found this information a bit strange:
-bash-3.00$ zfs list Data/subversion1
NAME USED AVAIL REFER MOUNTPOINT
Data/subversion1 22.5K 519G 22.5K -
How can there be 519GB available on a volume that is 250GB in size?
Here are more details:
-bash-3.00$ zfs get all Data/subversion1
NAME PROPERTY VALUE SOURCE
Data/subversion1 type volume -
Data/subversion1 creation Wed Apr 2 9:06 2008 -
Data/subversion1 used 22.5K -
Data/subversion1 available 519G -
Data/subversion1 referenced 22.5K -
Data/subversion1 compressratio 1.00x -
Data/subversion1 reservation 250G local
Data/subversion1 volsize 250G -
Data/subversion1 volblocksize 8K -
Data/subversion1 checksum on default
Data/subversion1 compression off default
Data/subversion1 readonly off default
Data/subversion1 shareiscsi off local
It does not appear that Data/subversion1 is being shared via iSCSI?
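If shareiscsi is switched back on, the Solaris-side target can be checked with the iSCSI target administration tool, along these lines:
-bash-3.00$ sudo zfs set shareiscsi=on Data/subversion1
-bash-3.00$ sudo iscsitadm list target -v
That would confirm whether the LUN is actually being offered before looking further at the Linux initiator.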
-- richard
--
----------------
Sanjeev Bagewadi
Solaris RPE
Bangalore, India
Lars-Gunnar Persson
Head of IT
Nansen senteret for miljø og fjernmåling
Address: Thormøhlensgate 47, 5006 Bergen
Direct: 55 20 58 31, switchboard: 55 20 58 00, fax: 55 20 58 01
Internet: http://www.nersc.no, e-mail: lars-gunnar.pers...@nersc.no
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss