Thank you for your long reply. I don't believe that will help me get my ZFS volume back, though.
From my last reply to this list, I can confirm that I do understand what the AVAIL column is reporting when running the zfs list command.
hmm, still confused ...
Regards,
Lars-Gunnar Persson
On 3 March 2009, at 11:26, O'Shea, Damien wrote:
Hi,
The reason ZFS says that the available space is larger is that in ZFS the size of the pool is always available to all the ZFS filesystems that reside in the pool. Setting a reservation will guarantee that the reservation size is "reserved" for the filesystem/volume, but you can change that on the fly.
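For example, a reservation can be set, checked and cleared at any time, along these lines (the dataset and pool names here are just placeholders):

  zfs set reservation=10G testpool/test     # guarantee 10G of pool space to this dataset
  zfs get reservation testpool/test         # check the current value
  zfs set reservation=none testpool/test    # remove the guarantee again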
You can see that if you create another filesystem within the pool, the reservation in use by your volume will have been deducted from the available size.
Like below:
r...@testfs# zfs create -V 10g testpool/test
r...@testfs# zfs get all testpool
NAME      PROPERTY       VALUE                  SOURCE
testpool  type           filesystem             -
testpool  creation       Wed Feb 11 13:17 2009  -
testpool  used           10.1G                  -
testpool  available      124G                   -
testpool  referenced     100M                   -
testpool  compressratio  1.00x                  -
testpool  mounted        yes                    -
Here the available is 124G, as the volume has been set to 10G from a pool of 134G. If we set a reservation like this:
r...@test1# zfs set reservation=10g testpool/test
r...@test1# zfs get all testpool/test
NAME           PROPERTY       VALUE                 SOURCE
testpool/test  type           volume                -
testpool/test  creation       Tue Mar 3 10:13 2009  -
testpool/test  used           10G                   -
testpool/test  available      134G                  -
testpool/test  referenced     16K                   -
testpool/test  compressratio  1.00x                 -
We can see that the available is now 134G, which is the available size of the rest of the pool + the 10G reservation that we have set. So in theory this volume can grow to the complete size of the pool.
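Put another way, with the round numbers from this example:

  124G unreserved in the pool + 10G reservation = 134G available to testpool/test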
So if we have a look at the available space now in the pool, we see:
r...@test1# zfs get all testpool
NAME      PROPERTY       VALUE                  SOURCE
testpool  type           filesystem             -
testpool  creation       Wed Feb 11 13:17 2009  -
testpool  used           10.1G                  -
testpool  available      124G                   -
testpool  referenced     100M                   -
testpool  compressratio  1.00x                  -
testpool  mounted        yes                    -
124G, with 10G used to account for the size of the volume!
So if we now create another filesystem like this
r...@test1# zfs create testpool/test3
r...@test1# zfs get all testpool/test3
NAME            PROPERTY       VALUE                 SOURCE
testpool/test3  type           filesystem            -
testpool/test3  creation       Tue Mar 3 10:19 2009  -
testpool/test3  used           18K                   -
testpool/test3  available      124G                  -
testpool/test3  referenced     18K                   -
testpool/test3  compressratio  1.00x                 -
testpool/test3  mounted        yes                   -
We see that the total amount available to the filesystem is the amount of space in the pool minus the 10G reservation. Let's set the reservation to something bigger:
r...@test1# zfs set volsize=100g testpool/test
r...@test1# zfs set reservation=100g testpool/test
r...@test1# zfs get all testpool/test
NAME           PROPERTY       VALUE                 SOURCE
testpool/test  type           volume                -
testpool/test  creation       Tue Mar 3 10:13 2009  -
testpool/test  used           100G                  -
testpool/test  available      134G                  -
testpool/test  referenced     16K                   -
So the available is still 134G, which is the rest of the pool + the reservation set.
r...@test1# zfs get all testpool
NAME      PROPERTY       VALUE                  SOURCE
testpool  type           filesystem             -
testpool  creation       Wed Feb 11 13:17 2009  -
testpool  used           100G                   -
testpool  available      33.8G                  -
testpool  referenced     100M                   -
testpool  compressratio  1.00x                  -
testpool  mounted        yes                    -
The pool, however, now only has 33.8G left, which should be the same for all the other filesystems in the pool.
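A quick way to see that per dataset, if you want to check, is something like (the -o column list is just one possible choice):

  zfs list -r -o name,used,available,reservation testpool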
Hope that helps.
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Lars-Gunnar Persson
Sent: 03 March 2009 07:11
To: Richard Elling
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS volume corrupted?
I've turned off iSCSI sharing at the moment.

My first question is: how can ZFS report that available is larger than the reservation on a ZFS volume? I also know that used should be larger than 22.5K. Isn't this strange?
Lars-Gunnar Persson
On 3 March 2009, at 00:38, Richard Elling <richard.ell...@gmail.com> wrote:
Lars-Gunnar Persson wrote:
Hey to everyone on this mailing list (since this is my first post)!
Welcome!
We have a Sun Fire X4100 M2 server running Solaris 10 u6, and after some system work this weekend we have a problem with only one ZFS volume. We have a pool called /Data with many file systems and two volumes. The status of my zpool is:
-bash-3.00$ zpool status
  pool: Data
 state: ONLINE
 scrub: scrub in progress, 5.99% done, 13h38m to go
config:

        NAME                     STATE     READ WRITE CKSUM
        Data                     ONLINE       0     0     0
          c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors
Yesterday I started the scrub process because I read that was a smart thing to do after a zpool export and zpool import procedure. I did this because I wanted to move the zpool to another OS installation, but I changed my mind and did a zpool import on the same OS as I did the export. After checking as much information as I could find on the web, I was advised to run the zpool scrub after an import.
Well, the problem now is that one volume in this zpool is not working. I've shared it via iSCSI to a Linux host (all of this was working on Friday). The Linux host reports that it can't find a partition table. Here is the log from the Linux host:
Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:36 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar 2 11:09:37 eva kernel: SCSI device sdb: drive cache: write through
Mar 2 11:09:37 eva kernel: sdb: unknown partition table
Mar 2 11:09:37 eva kernel: Attached scsi disk sdb at scsi28, channel 0, id 0, lun 0
So I checked the status on my Solaris server, and I found this information a bit strange:
-bash-3.00$ zfs list Data/subversion1
NAME               USED  AVAIL  REFER  MOUNTPOINT
Data/subversion1  22.5K   519G  22.5K  -
How can there be 519GB available on a volume that is 250GB in size?
Here are more details:
-bash-3.00$ zfs get all Data/subversion1
NAME              PROPERTY       VALUE                SOURCE
Data/subversion1  type           volume               -
Data/subversion1  creation       Wed Apr 2 9:06 2008  -
Data/subversion1  used           22.5K                -
Data/subversion1  available      519G                 -
Data/subversion1  referenced     22.5K                -
Data/subversion1  compressratio  1.00x                -
Data/subversion1  reservation    250G                 local
Data/subversion1  volsize        250G                 -
Data/subversion1  volblocksize   8K                   -
Data/subversion1  checksum       on                   default
Data/subversion1  compression    off                  default
Data/subversion1  readonly       off                  default
Data/subversion1  shareiscsi     off                  local
It does not appear that Data/subversion1 is being shared via iSCSI?
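If you want to double-check that from the Solaris side, something along these lines should show it (assuming the bundled Solaris 10 iSCSI target tools, iscsitadm, are what's in use):

  zfs get shareiscsi Data/subversion1   # ZFS-level iSCSI sharing flag
  iscsitadm list target -v              # targets currently exported by the iSCSI target daemon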
-- richard
Lars-Gunnar Persson
IT Manager
Nansen senteret for miljø og fjernmåling
Address : Thormøhlensgate 47, 5006 Bergen
Direct  : 55 20 58 31, switchboard: 55 20 58 00, fax: 55 20 58 01
Internet: http://www.nersc.no, e-mail: lars-gunnar.pers...@nersc.no
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss