Re: [zfs-discuss] [Fwd: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128]

2009-11-17 Thread Daniel Carosone
I welcome the re-write. The deficiencies of the current snapshot cleanup implementation have been a source of constant background irritation to me for a while, and the subject of a few bugs. Regarding the issues in contention - the send hooks capability is useful and should remain, but the i

[zfs-discuss] Recovering FAULTED zpool

2009-11-17 Thread Peter Jeremy
I have a zpool on a JBOD SE3320 that I was using for data with Solaris 10 (the root/usr/var filesystems were all UFS). Unfortunately, we had a bit of a mixup with SCSI cabling and I believe that we created a SCSI target clash. The system was unloaded and nothing happened until I ran "zpool status

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison
Hi Bruno, Bruno Sousa wrote: Hi Ian, I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit that has: * Power Control Card Sorry to keep bugging you, but which card is this? I like the sound of your setup. Cheers, Ian. * SAS 846EL2/EL1 BP External

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ed Jobs
On Tuesday 17 November 2009 22:50, Ian Allison wrote: > I'm learning as I go here, but as far as I've been able to determine, > the basic choices for attaching drives seem to be > > 1) SATA Port multipliers > 2) SAS Multilane Enclosures > 3) SAS Expanders What about PCI(-X) cards? As stated in: ht

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Miles Nordin
> "d" == Dushyanth writes: d> Performance dropped for some reason the SSD's black-box-filesystem is fragmented? Do the slog-less test again and see if it's still fast. pgpQ5Pzv39hs6.pgp Description: PGP signature ___ zfs-discuss mailing li

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Richard Elling
On Nov 17, 2009, at 2:50 PM, Scott Meilicke wrote: I am sorry that I don't have any links, but here is what I observe on my system. dd does not do sync writes, so the ZIL is not used. iSCSI traffic does sync writes (as of 2009.06, but not 2008.05), so if you repeat your test using an iSCSI

Re: [zfs-discuss] building zpools on device aliases

2009-11-17 Thread Andrew Gabriel
sean walmsley wrote: We have a number of Sun J4200 SAS JBOD arrays which we have multipathed using Sun's MPxIO facility. While this is great for reliability, it results in the /dev/dsk device IDs changing from cXtYd0 to something virtually unreadable like "c4t5000C5000B21AC63d0s3". Since the

Re: [zfs-discuss] building zpools on device aliases

2009-11-17 Thread Cindy Swearingen
Hi Sean, I sympathize with your intentions but providing pseudo-names for these disks might cause more confusion than actual help. The "c4t5..." name isn't so bad. I've seen worse. :-) Here are the issues with using the aliases: - If a device fails on a J4200, an LED will indicate which disk ha
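A rough way to sanity-check which physical disk a long MPxIO name corresponds to, rather than inventing aliases (a sketch only; the output depends on how multipathing was enabled, and the pool name and second disk name below are hypothetical):

  # list the mapping between the original cXtY names and the MPxIO names
  stmsboot -L
  # then build the pool directly with the MPxIO names as reported
  zpool create tank mirror c4t5000C5000B21AC63d0 c4t5000C5000B21AC64d0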

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Scott Meilicke
I am sorry that I don't have any links, but here is what I observe on my system. dd does not do sync writes, so the ZIL is not used. iSCSI traffic does sync writes (as of 2009.06, but not 2008.05), so if you repeat your test using an iSCSI target from your system, you should see log activity. Sa
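A quick way to watch the difference (a sketch, assuming a pool named pool1 with a log device already attached):

  # show per-vdev activity, including the log device, every 2 seconds
  zpool iostat -v pool1 2
  # a local dd (async writes) should show little or no log activity;
  # writes arriving over iSCSI or NFS (sync writes) should hit the log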

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Dushyanth
I ran a quick test to confirm James' theory - and there is more weirdness # Mirror pool with two 500GB SATA disks - no log device r...@m1-sv-zfs-1:~# zpool create pool1 mirror c8t5d0 c8t2d0 r...@m1-sv-zfs-1:~# zfs create pool1/fs r...@m1-sv-zfs-1:~# cd /pool1/fs r...@m1-sv-zfs-1:/pool1/fs# time dd
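A sketch of how the comparison might continue from here (the SSD device name below is hypothetical):

  # attach the SSD as a dedicated log device to the same pool
  zpool add pool1 log c8t4d0
  # repeat the identical dd run and compare timings, watching the log vdev
  zpool iostat -v pool1 2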

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Dushyanth
Oops - most important info missed - It's OpenSolaris 2009.06 # uname -a SunOS m1-sv-ZFS-1 5.11 snv_111b i86pc i386 i86pc Solaris TIA Dushyanth

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Chris Du
You can get the E2 version of the chassis that supports multipathing, but you have to use dual-port SAS disks. Or you can use separate SAS HBAs to connect to separate JBOD chassis and mirror across the two chassis. The backplane is just a pass-through fabric, which is very unlikely to die. Then like ot

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Richard Elling
Which OS and release? The behaviour has changed over time. -- richard On Nov 17, 2009, at 1:33 PM, Dushyanth wrote: Hey guys, I am new to ZFS and have been playing around for a few days. I am trying to improve performance of an iSCSI storage backend by putting the ZIL/log on an SSD. Below

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Bruno Sousa
Hi Ian, I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit that has: * Power Control Card * SAS 846EL2/EL1 BP External Cascading Cable * SAS 846EL1 BP 1-Port Internal Cascading Cable I don't do any monitoring in the JBOD chassis. Bruno Ian Allison wrote: > H

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison
Hi Bruno, Bruno Sousa wrote: Hi, I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to a Supermicro JBOD chassis, each one with 24 disks, SATA 1TB, and so far so good. So I have 48 TB raw capacity, with a mirror configuration for NFS usage (Xen VMs), and I feel that for the pric

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread James Lever
On 18/11/2009, at 7:33 AM, Dushyanth wrote: > Now when I run dd and create a big file on /iftraid0/fs and watch `iostat > -xnz 2` I don't see any stats for c8t4d0, nor does the write performance > improve. > > I have not formatted either c9t9d0 or c8t4d0. What am I missing? Last I checked, i

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison
Hi Richard, Richard Elling wrote: Cases like the Supermicro 846E1-R900B have 24 hot swap bays accessible via a single (4u) LSI SASX36 SAS expander chip, but I'm worried about controller death and having the backplane as a single point of failure. There will be dozens of single point failure

[zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Dushyanth
Hey guys, I am new to ZFS and have been playing around for a few days. I am trying to improve performance of an iSCSI storage backend by putting the ZIL/log on an SSD. Below are the steps I followed # format < /dev/null Searching for disks... The device does not support mode page 3 or page 4, or t
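The general shape of the setup being attempted (a sketch with illustrative pool and device names; the exact commands in the original post may differ):

  # create a pool and attach the SSD as a separate intent-log (slog) device
  zpool create iftraid0 c9t9d0
  zpool add iftraid0 log c8t4d0
  # create a filesystem (or a zvol to export over iSCSI) on that pool
  zfs create iftraid0/fs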

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Angelo Rajadurai
Also if you are a startup, there are some ridiculously sweet deals on Sun hardware through the Sun Startup Essentials program. http://sun.com/startups This way you do not need to worry about compatibility and you get all the Enterprise RAS features at a pretty low price point. -Angelo On Nov

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Bruno Sousa
Hi, I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to a Supermicro JBOD chassis, each one with 24 disks, SATA 1TB, and so far so good. So I have 48 TB raw capacity, with a mirror configuration for NFS usage (Xen VMs), and I feel that for the price I paid I have a very nice sys

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Richard Elling
On Nov 17, 2009, at 12:50 PM, Ian Allison wrote: Hi, I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use zfs as a storage server in the 10-100TB range. I'm in the same boat, but I've found that hardware choice is the biggest issue.

[zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison
Hi, I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use zfs as a storage server in the 10-100TB range. I'm in the same boat, but I've found that hardware choice is the biggest issue. I'm struggling to find something which will work nicely

Re: [zfs-discuss] Old zfs version with OpenSolaris 2009.06 JeOS ??

2009-11-17 Thread Benoit Heroux
Hi Tim, You were right. I wasn't using the dev repository, so I was stuck on an old build of zfs. So I did these steps: - pkg set-publisher -P -O http://pkg.opensolaris.org/dev/ opensolaris.org - pkg image-update -v : download and upgrade. - reboot Then I now have version 4 of zfs and version
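A quick way to confirm what the new image supports after the reboot (a sketch):

  # show the ZFS filesystem and pool versions this build supports
  zfs upgrade -v
  zpool upgrade -v
  # and report any filesystems/pools still running older versions
  zfs upgrade
  zpool upgrade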

Re: [zfs-discuss] [Fwd: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128]

2009-11-17 Thread Craig S. Bell
I don't have any problem with a rewrite, but please allow a non-GUI-dependent solution for headless servers. Also please add rsync as an option, rather than replacing zfs send/recv. Thanks.

Re: [zfs-discuss] Comstar thin provisioning space reclamation

2009-11-17 Thread Ed Plese
You can reclaim this space with the SDelete utility from Microsoft. With the -c option it will zero any free space on the volume. For example: C:\>sdelete -c C: I've tested this with xVM and with compression enabled for the zvol, and it worked very well. Ed Plese On Tue, Nov 17, 2009 at 12:1
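One way to verify that the zeroing actually returned space on the ZFS side (a sketch, assuming the LUN is backed by a zvol named tank/iscsivol with compression enabled):

  # with compression on, the zeroed blocks are stored as (almost) nothing
  zfs set compression=on tank/iscsivol
  # compare these numbers before and after running sdelete -c in the guest
  zfs get used,referenced,compressratio tank/iscsivol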

[zfs-discuss] Comstar thin provisioning space reclamation

2009-11-17 Thread Brent Jones
I use several file-backed, thin-provisioned iSCSI volumes presented over COMSTAR. The initiators are Windows 2003/2008 systems with the MS MPIO initiator. The Windows systems only claim to be using about 4TB of space, but the ZFS volume says 7.12TB is used. Granted, I imagine ZFS allocates the bloc

Re: [zfs-discuss] updated zfs on disk format?

2009-11-17 Thread Richard Elling
On Nov 17, 2009, at 8:06 AM, Cindy Swearingen wrote: Hi Luca, We do not have an updated version of this document. One of the nice features of the ZFS design is that new capabilities can be added without changing the on disk format as described in ondiskformat0822.pdf. For instance, if a n

Re: [zfs-discuss] Disk I/O in RAID-Z as new disks are added/removed

2009-11-17 Thread Bob Friesenhahn
On Sun, 15 Nov 2009, Joe Auty wrote: I've seen several comparisons to existing RAID solutions, but I'm not finding whether the more disks you add, the more I/O you can get, unless I'm missing something? Perhaps that is because "it depends" and you may or may not get "more I/O", depending on w

Re: [zfs-discuss] updated zfs on disk format?

2009-11-17 Thread Cindy Swearingen
Hi Luca, We do not have an updated version of this document. Thanks, Cindy On 11/17/09 06:59, Luca Morettoni wrote: Hi, I see the "ondiskformat" PDF[1] is "quite" old; is there an updated version of that important document? Thanks!! [1]http://hub.opensolaris.org/bin/download/Community+Group+z

[zfs-discuss] updated zfs on disk format?

2009-11-17 Thread Luca Morettoni
Hi, I see the "ondiskformat" PDF[1] is "quite" old; is there an updated version of that important document? Thanks!! [1]http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf -- Luca Morettoni | OpenSolaris SCA #OS0344 Web/BLOG: http://www.morettoni.net/ | http://tw

Re: [zfs-discuss] Disk I/O in RAID-Z as new disks are added/removed

2009-11-17 Thread Joe Auty
Tim Cook wrote: > On Sun, Nov 15, 2009 at 2:57 AM, besson3c wrote: > > Anybody? > > I would truly appreciate some general, if not definitive, insight as > to what one can expect in terms of I/O performance after adding > new disks to ZFS pools.

[zfs-discuss] [Fwd: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128]

2009-11-17 Thread Tim Foster
Hi all, Just forwarding Niall's heads-up message about the impending removal of the existing zfs-auto-snapshot implementation in nv_128. I've not been involved in the rewrite, but from what I've read about the new code, it'll be a lot more efficient than the old ksh-based code, and will fix many of the

Re: [zfs-discuss] Best config for different sized disks

2009-11-17 Thread Erik Trimble
Tim Cook wrote: On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn wrote: On Sun, 15 Nov 2009, Tim Cook wrote: Once again I question why you're wasting your time with raid-z. You might as well just stripe across all the drives.