Re: [zfs-discuss] How to obtain vdev information for a zpool?

2010-08-11 Thread Peter Taps
Hi James, I appreciate your help. Regards, Peter

[zfs-discuss] File system ownership details of ZFS file system.

2010-08-11 Thread Ramesh Babu
Hi, I am looking for file system ownership information for a ZFS file system: the amount of space used and the number of files owned by each user. I could get the per-user space usage with the 'zfs userspace' command, but I didn't find any switch to get the number of files
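A minimal sketch of the two halves (the dataset name tank/home is hypothetical). 'zfs userspace' covers the space side; to my knowledge there is no per-user file-count switch in this era, so counting files falls back to a filesystem walk:
  # per-user space consumed on a dataset
  zfs userspace -o name,used tank/home
  # no zfs switch for file counts; a slow per-user approximation
  find /tank/home -type f -user myuser | wc -l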

[zfs-discuss] one ZIL SLOG per zpool?

2010-08-11 Thread Chris Twa
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
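A log vdev belongs to exactly one pool, so sharing a single SSD pair means slicing each SSD first and giving one mirrored slice pair to each pool. A hedged sketch with hypothetical device and slice names (s2 is skipped since it conventionally maps the whole disk):
  # after partitioning each SSD into three slices with format(1M):
  zpool add pool1 log mirror c5t0d0s0 c5t1d0s0
  zpool add pool2 log mirror c5t0d0s1 c5t1d0s1
  zpool add pool3 log mirror c5t0d0s3 c5t1d0s3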

Re: [zfs-discuss] ZFS performance drop with new Xeon 55xx and 56xx CPUs

2010-08-11 Thread michael schuster
On 08/12/10 04:16, Steve Gonczi wrote: Greetings, I am seeing some unexplained performance drop using the above CPUs, on a fairly up-to-date build (late 145). Basically, the system seems to be 98% idle, spending most of its time in this stack: unix`i86_mwait+0xd

Re: [zfs-discuss] zpool 'stuck' after failed zvol destroy and reboot

2010-08-11 Thread Ville Ojamo
I am having a similar issue at the moment: 3 GB RAM under ESXi, but dedup for this zvol (1.2 T) was turned off and only 300 G was used. The pool does contain other datasets with dedup turned on, but they are small enough that I'm not hitting the memory limits (been there, tried that, never again withou
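For anyone sizing memory against an existing dedup table before a big destroy, zdb can summarize the DDT (pool name hypothetical):
  # print dedup table statistics and histogram
  zdb -DD tank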

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
> > > > My understanding is that if you wanted to use MS Cluster Server, you'd need > to use a LUN as an RDM for the quorum drive. VMDK files are locked when > open, so they can't typically be shared. VMware's Fault Tolerance gets > around this somehow, and I have a suspicion that their Lab Manager

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Saxon, Will
> -Original Message- > From: Tim Cook [mailto:t...@cook.ms] > Sent: Wednesday, August 11, 2010 10:42 PM > To: Saxon, Will > Cc: Edward Ned Harvey; ZFS Discussions > Subject: Re: [zfs-discuss] ZFS and VMware > > > I still think there are reasons why iSCSI would be > better than NFS

Re: [zfs-discuss] ZFS and VMware (and now, VirtualBox)

2010-08-11 Thread Erik Trimble
Actually, this brings up a related issue. Does anyone have experience with running VirtualBox on iSCSI volumes vs NFS shares, both of which would be backed by a ZFS server? -Erik On Wed, 2010-08-11 at 21:41 -0500, Tim Cook wrote: > > > > This is not entirely c

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
> > > > This is not entirely correct either. You're not forced to use VMFS. > It is entirely true. You absolutely cannot use ESX with a guest on a block device without formatting the LUN with VMFS. You are *FORCED* to use VMFS. You can format the LUN with VMFS, then put VM files inside the VMF

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Saxon, Will
> -Original Message- > From: zfs-discuss-boun...@opensolaris.org > [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tim Cook > Sent: Wednesday, August 11, 2010 8:46 PM > To: Edward Ned Harvey > Cc: ZFS Discussions > Subject: Re: [zfs-discuss] ZFS and VMware > > > > On Wed, Aug

[zfs-discuss] ZFS performance drop with new Xeon 55xx and 56xx CPUs

2010-08-11 Thread Steve Gonczi
Greetings, I am seeing some unexplained performance drop using the above CPUs, on a fairly up-to-date build (late 145). Basically, the system seems to be 98% idle, spending most of its time in this stack:
  unix`i86_mwait+0xd
  unix`cpu_idle_mwait+0xf1
  u
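A quick way to confirm where the cycles actually go is a short DTrace profiling run; this sketch (probe rate and duration arbitrary) aggregates kernel stacks sampled while on-CPU:
  dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30s { exit(0); }'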

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
On Wed, Aug 11, 2010 at 7:27 PM, Edward Ned Harvey wrote: > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > boun...@opensolaris.org] On Behalf Of Paul Kraus > > > >I am looking for references of folks using ZFS with either NFS > > or iSCSI as the backing store for VMwa

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Paul Kraus > >I am looking for references of folks using ZFS with either NFS > or iSCSI as the backing store for VMware (4.x) virtual machines. I'll try to clearly separate what I know
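For reference, both flavors of backing store come down to a few commands on the ZFS side. A hedged sketch with hypothetical names, assuming COMSTAR for the iSCSI case:
  # NFS datastore
  zfs create tank/vmware
  zfs set sharenfs=rw=esx-host,root=esx-host tank/vmware
  # iSCSI datastore backed by a zvol
  zfs create -V 200g tank/vmware-lun
  sbdadm create-lu /dev/zvol/rdsk/tank/vmware-lun
  stmfadm add-view <lu-guid-printed-by-sbdadm>
  itadm create-target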

[zfs-discuss] Backup zpool

2010-08-11 Thread Simone Caldana
Hello, I would like to back up my main zpool (originally called "data") inside an equally originally named "backup" zpool, which will also hold other kinds of backups. Basically I'd like to end up with:
  backup/data
  backup/data/dataset1
  backup/data/dataset2
  backup/otherthings/dataset1
  backup/othe
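One way to get that layout, sketched with a hypothetical snapshot name: take a recursive snapshot and receive the replication stream under backup/data. -R preserves the dataset hierarchy and properties, and -d derives the target paths from the sent ones:
  zfs snapshot -r data@migrate1
  zfs send -R data@migrate1 | zfs receive -d backup/data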

Re: [zfs-discuss] How to obtain vdev information for a zpool?

2010-08-11 Thread James C. McPherson
On 12/08/10 09:21 AM, Peter Taps wrote: Folks, When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2, etc. How do I get back this information for an existing pool? The status command does not reveal this information: # zpool status mypool When this command is run, I

[zfs-discuss] How to obtain vdev information for a zpool?

2010-08-11 Thread Peter Taps
Folks, When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2, etc. How do I get back this information for an existing pool? The status command does not reveal this information:
  # zpool status mypool
When this command is run, I can see the disks in use. However, I don't
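For what it's worth, zpool status does name the vdev type on the indented line between the pool and its disks; sample output for a hypothetical pool on a recent build (older builds omit the -0 suffix):
  # zpool status mypool
    NAME        STATE     READ WRITE CKSUM
    mypool      ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        c1t0d0  ONLINE       0     0     0
        c1t1d0  ONLINE       0     0     0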

Re: [zfs-discuss] Need a link on data corruption

2010-08-11 Thread David Magda
On Aug 11, 2010, at 04:05, Orvar Korvar wrote: Someone posted about CERN having a bad network card which injected faulty bits into the data stream. And ZFS detected it, because of end-to-end checksums. Does anyone have more information on this? CERN generally uses Linux AFAICT: http:

Re: [zfs-discuss] autoreplace not kicking in

2010-08-11 Thread Giovanni Tirloni
On Wed, Aug 11, 2010 at 4:06 PM, Cindy Swearingen wrote: > Hi Giovanni, > > The spare behavior and the autoreplace property behavior are separate > but they should work pretty well in recent builds. > > You should not need to perform a zpool replace operation if the > autoreplace property is set.

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Saxon, Will
> -Original Message- > From: zfs-discuss-boun...@opensolaris.org > [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus > Sent: Wednesday, August 11, 2010 3:53 PM > To: ZFS Discussions > Subject: [zfs-discuss] ZFS and VMware > >I am looking for references of folks

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Simone Caldana
Hi Paul, I am using ESXi 4.0 with an NFS-on-ZFS datastore running on OSOL b134. It previously ran on Solaris 10u7 with VMware Server 2.x. Disks are SATAs in a JBOD over FC. I'll try to summarize my experience here, though our system does not provide services to end users and thus is not very st

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
Thank you, Eric. Your explanation is easy to understand. Regards, Peter

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
I am running ZFS file system version 5 on Nexenta. Peter

Re: [zfs-discuss] Performance Testing

2010-08-11 Thread Marion Hakanson
p...@kraus-haus.org said: > Based on these results, and our capacity needs, I am planning to go with 5 > disk raidz2 vdevs. I did similar tests with a Thumper in 2008, with X4150/J4400 in 2009, and more recently comparing X4170/J4400 and X4170/MD1200: http://acc.ohsu.edu/~hakansom/thumper
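As a concrete sketch of that layout (device names hypothetical), a pool of 5-disk raidz2 vdevs is built by repeating the raidz2 keyword, one group per vdev:
  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz2 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0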

[zfs-discuss] BugID 6961707

2010-08-11 Thread John D Groenveld
I'm stumbling over BugID 6961707 on build 134 (OpenSolaris Development snv_134 X86). Via the b134 Live CD, when I try to "zpool import -f -F -n rpool" I get this helpful panic:
  panic[cpu4]/thread=ff006cd06c60: zfs: allocating allocated segment(offset=95698377728 size=16384)
  ff006cd06580

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-11 Thread Paul Kraus
On Wed, Aug 11, 2010 at 10:36 AM, David Dyer-Bennet wrote: > > On Tue, August 10, 2010 16:41, Dave Pacheco wrote: >> David Dyer-Bennet wrote: > >>> If that turns out to be the problem, that'll be annoying to work around >>> (I'm making snapshots every two hours and deleting them after a couple >>>

[zfs-discuss] ZFS and VMware

2010-08-11 Thread Paul Kraus
I am looking for references of folks using ZFS with either NFS or iSCSI as the backing store for VMware (4.x) virtual machines. We asked the local VMware folks and they had not even heard of ZFS. Part of what we are looking for is a recommendation for NFS or iSCSI, and all

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, seth keith wrote:
        NAME        STATE     READ WRITE CKSUM
        brick       DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c13d0   ONLINE       0     0     0
            c4d0

[zfs-discuss] Performance Testing

2010-08-11 Thread Paul Kraus
I know that performance has been discussed often here, but I have just gone through some testing in preparation for deploying a large configuration (120 drives is a large configuration for me) and I wanted to share my results, both for reference and to see if anyone sees anyth

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread seth keith
this is for newbies like myself: I was using 'zdb -l' wrong, just passing the drive name from 'zpool status' or format, like c6d1, which didn't work. I needed to add s0 to the end: zdb -l /dev/dsk/c6d1s0 gives me a good-looking label (I think). The pool_guid values are the same for all
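A quick way to sweep every slice-0 label at once, assuming whole-disk s0 layout (device glob hypothetical):
  for d in /dev/dsk/c*d*s0
  do
      echo "== $d"
      zdb -l $d | grep -E 'pool_guid|txg'
  done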

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Marty Scholes
Peter wrote: > One question though. Marty mentioned that raidz > parity is limited to 3. But in my experiment, it > seems I can get parity to any level. > > You create a raidz zpool as: > > # zpool create mypool raidzx disk1 disk2 > > Here, x in raidzx is a numeric value indicating the > d
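For the record, the accepted types top out at triple parity. A sketch of the three valid forms (disk names hypothetical; bare raidz is an alias for raidz1):
  zpool create p1 raidz1 disk1 disk2 disk3
  zpool create p2 raidz2 disk1 disk2 disk3 disk4
  zpool create p3 raidz3 disk1 disk2 disk3 disk4 disk5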

Re: [zfs-discuss] autoreplace not kicking in

2010-08-11 Thread Cindy Swearingen
Hi Giovanni, The spare behavior and the autoreplace property behavior are separate but they should work pretty well in recent builds. You should not need to perform a zpool replace operation if the autoreplace property is set. If autoreplace is set and a replacement disk is inserted into the sam
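A minimal sketch of the two paths (pool and device names hypothetical):
  # with autoreplace on, a new disk in the same slot resilvers automatically
  zpool set autoreplace=on tank
  zpool get autoreplace tank
  # otherwise, kick off the replacement by hand
  zpool replace tank c1t2d0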

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, Seth Keith wrote: When I do a zdb -l /dev/rdsk/ I get the same output for all my drives in the pool, but I don't think it looks right: # zdb -l /dev/rdsk/c4d0 What about /dev/rdsk/c4d0s0?

[zfs-discuss] strange permission problem with sharing over NFS

2010-08-11 Thread antst
I found a strange issue. Let's say I have a zfs filesystem export/test1, which is shared over NFSv3. Then I do:
  zfs create export/test1/test2
  chown myuser /export/test1/test2
  ls -l /export/test1/test2
(it will show that myuser is the owner). But if I do ls -l on the NFS client where /export/test1 is mounted, I
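Most likely this is the classic nested-dataset behavior under NFSv3: export/test1/test2 is a separate filesystem with its own share, so the client's mount of /export/test1 only shows the underlying (root-owned, empty) mountpoint directory. A hedged sketch of the usual workaround (Solaris client syntax shown):
  # share the child dataset explicitly...
  zfs set sharenfs=on export/test1/test2
  # ...and mount it separately on the client
  mount -F nfs server:/export/test1/test2 /mnt/test2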

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
Thank you all for your help. It appears my understanding of parity was rather limited. I kept thinking of parity in memory, where the extra bit is used to ensure that the total of all 9 bits is always even. In the case of zfs, that type of checking is actually moved into the checksum.

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Eric D. Mudama
On Tue, Aug 10 at 21:57, Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. The data for any given sector striped across all drives can be thought of as: A+B+C = P where A..C
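For single-parity raidz the parity column is the XOR of the data columns, so the relation can be checked directly with shell arithmetic (byte values arbitrary):
  $ A=0xA5 B=0xF0 C=0x0F
  $ printf 'P = 0x%02X\n' $(( A ^ B ^ C ))
  P = 0x5A
  $ # the disk holding B dies; rebuild B from the survivors and parity
  $ printf 'B = 0x%02X\n' $(( A ^ C ^ 0x5A ))
  B = 0xF0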

[zfs-discuss] autoreplace not kicking in

2010-08-11 Thread Giovanni Tirloni
Hello, In OpenSolaris b111 with autoreplace=on and a pool without spares, ZFS is not kicking off the resilver after a faulty disk is replaced and shows up with the same device name, even after waiting several minutes. The solution is to do a manual `zpool replace`, which returns the following: # zpoo

Re: [zfs-discuss] zpool 'stuck' after failed zvol destroy and reboot

2010-08-11 Thread Zachary Bedell
Just wanted to quickly post a bit of closure to this thread... Most of the "import taking too long" threads I've found on the list tend to fade out without any definitive answer as to what went wrong. I needed something a bit more concrete to make me happy. After zfs send'ing everything to a fre

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-11 Thread David Dyer-Bennet
On Tue, August 10, 2010 16:41, Dave Pacheco wrote: > David Dyer-Bennet wrote: >> If that turns out to be the problem, that'll be annoying to work around >> (I'm making snapshots every two hours and deleting them after a couple >> of >> weeks). Locks between admin scripts rarely end well, in my e

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-11 Thread David Dyer-Bennet
On Tue, August 10, 2010 23:13, Ian Collins wrote: > On 08/11/10 03:45 PM, David Dyer-Bennet wrote: >> cannot receive incremental stream: most recent snapshot of >> bup-wrack/fsfs/zp1/ddb does not >> match incremental source > That last error occurs if the snapshot exists but has changed; it has
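If the receive side really has diverged (something wrote to the received filesystem since the last snapshot), the usual fix is to let receive roll it back. A sketch with hypothetical snapshot names and a source path inferred from the target in this thread; setting readonly=on on the target is a common way to avoid the divergence in the first place:
  zfs send -I zp1/ddb@older zp1/ddb@newer | zfs receive -F bup-wrack/fsfs/zp1/ddb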

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Marty Scholes
Erik Trimble wrote: > On 8/10/2010 9:57 PM, Peter Taps wrote: > > Hi Eric, > > > > Thank you for your help. At least one part is clear > now. > > > > I still am confused about how the system is still > functional after one disk fails. > > > > Consider my earlier example of 3 disks zpool > configure

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Tue, 10 Aug 2010, seth keith wrote:
  # zpool status
    pool: brick
   state: UNAVAIL
  status: One or more devices could not be used because the label is
          missing or invalid. There are insufficient replicas for the pool
          to continue functioning.
  action: Destroy and re-create the pool fro

Re: [zfs-discuss] Need a link on data corruption

2010-08-11 Thread Thomas Burgess
On Wed, Aug 11, 2010 at 4:05 AM, Orvar Korvar <knatte_fnatte_tja...@yahoo.com> wrote: > Someone posted about CERN having a bad network card which injected faulty > bits into the data stream. And ZFS detected it, because of end-to-end > checksums. Does anyone have more information on this? > -- > Th

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Thomas Burgess
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps wrote: > Hi Eric, > > Thank you for your help. At least one part is clear now. > > I still am confused about how the system is still functional after one disk > fails. > > Consider my earlier example of 3 disks zpool configured for raidz-1. To > keep i

Re: [zfs-discuss] Need a link on data corruption

2010-08-11 Thread Lassi Tuura
Hi, > Someone posted about CERN having a bad network card which injected faulty > bits into the data stream. I've regularly heard people mention data corruption in the network layer here - and at other physics sites like FNAL and SLAC - but I don't have details of any specific incident. We do see our

[zfs-discuss] Need a link on data corruption

2010-08-11 Thread Orvar Korvar
Someone posted about CERN having a bad network card which injected faulty bits into the data stream. And ZFS detected it, because of end-to-end checksums. Does anyone have more information on this?