Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 8, 2010, at 11:46 PM, ольга крыжановская <anov...@gmail.com> wrote: tmpfs lacks features like quota and NFSv4 ACL support. May not be the best choice if such features are required. True, but if the OP is looking for those features they are more than unlikely looking for an in-memory fi

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais wrote: I think this is highlighting that there is extra CPU requirement to manage small blocks in ZFS. The table would probably turn over if you go to 16K zfs records and 16K reads/writes from the application. Next step for you is to figure
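
That next step in command form, as a hedged sketch (the dataset name tank/db is hypothetical; recordsize only applies to files written after the change, so existing data must be rewritten to pick it up):

  # match the ZFS record size to the application's 16K I/O size
  zfs set recordsize=16k tank/db
  # verify the setting
  zfs get recordsize tank/db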

Re: [zfs-discuss] ZFS - VMware ESX --> vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
On Mar 11, 2010, at 8:27 AM, Andrew wrote: Ok, the fault appears to have occurred regardless of the attempts to move to vSphere, as we've now moved the host back to ESX 3.5 from whence it came and the problem still exists. Looks to me like the fault occurred as a result of a reboot. Any

Re: [zfs-discuss] ZFS - VMware ESX --> vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
On Mar 11, 2010, at 12:31 PM, Andrew wrote: Hi Ross, Ok - as a Solaris newbie, I'm going to need your help. Format produces the following: c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) / p...@0,0/pci15ad,1...@10/s...@4,0 what dd command do I need to run to reference thi
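
The dd command being asked for would be along these lines (a read-only sanity check; the p0 whole-disk node is an assumption based on the x86 device path above):

  # read the first 128 sectors to confirm the disk is reachable
  dd if=/dev/rdsk/c8t4d0p0 of=/dev/null bs=512 count=128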

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon wrote: Hello, I'd like to check for any guidance about using zfs on iscsi storage appliances. Recently I had an unlucky situation where a storage machine froze. Once the storage was up again (rebooted) all other iscsi clients were

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 12:19 PM, Ware Adams wrote: On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote: Well, I actually don't know what implementation is inside this legacy machine. This machine is an AMI StoreTrends ITX, but maybe it has been built around IET, don't know. Well, maybe I s

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practices tell me that to be safe in case of corruption, pools should always be m

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 11:10 PM, Tim Cook wrote: On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker wrote: On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-17 Thread Ross Walker
On Mar 17, 2010, at 2:30 AM, Erik Ableson wrote: On 17 Mar 2010, at 00:25, Svein Skogen wrote: On 16.03.2010 22:31, erik.ableson wrote: On 16 Mar 2010, at 21:00, Marc Nicholas wrote: On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen
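
Pending that documentation, a minimal COMSTAR sharing sketch (volume name and size are hypothetical; the view is opened to all initiators):

  # enable the COMSTAR framework and iSCSI target service
  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  # back a LUN with a zvol
  zfs create -V 20g rpool/iscsi/vol0
  sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi/vol0
  # expose the LU (use the GUID printed by sbdadm)
  stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx
  # create a target
  itadm create-target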

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 10:18 AM, vikkr wrote: Hi, sorry for the bad English and the picture :). Is this design feasible? Three OpenFiler servers each export two 1 TB drives over iSCSI to an OpenSolaris server. On OpenSolaris a double-parity RAID-Z is assembled from them, and the OpenSolaris server provides NFS access to this array, and du

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 11:48 AM, vikkr wrote: THX Ross, I plan on exporting each drive individually over iSCSI. In this case, the writes, as well as reads, will go to all 6 disks at once, right? The only question - how to calculate the fault tolerance of such a system if the disks are all different

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
On Mar 31, 2010, at 5:39 AM, Robert Milkowski wrote: On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss wrote: Use something other than Open/Solaris with ZFS as an NFS server? :) I don't think you'll find the performance you paid for with ZFS and Solaris at this time. I've been trying to more tha

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
On Mar 31, 2010, at 10:25 PM, Richard Elling wrote: On Mar 31, 2010, at 7:11 PM, Ross Walker wrote: On Mar 31, 2010, at 5:39 AM, Robert Milkowski wrote: On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss wrote: Use something other than Open/Solaris with ZFS as an NFS server? :) I don'

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Mar 31, 2010, at 11:51 PM, Edward Ned Harvey wrote: A MegaRAID card with write-back cache? It should also be cheaper than the F20. I haven't posted results yet, but I just finished a few weeks of extensive benchmarking various configurations. I can say this: WriteBack cache is much

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Mar 31, 2010, at 11:58 PM, Edward Ned Harvey wrote: We ran into something similar with these drives in an X4170 that turned out to be an issue of the preconfigured logical volumes on the drives. Once we made sure all of our Sun PCI HBAs were running the exact same version of firmware a

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Apr 1, 2010, at 8:42 AM, casper@sun.com wrote: Is that what "sync" means in Linux? A sync write is one in which the application blocks until the OS acks that the write has been committed to disk. An async write is given to the OS, and the OS is permitted to buffer the write to di
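
A quick way to see this distinction from a Linux client, as a hedged sketch (the mount point /mnt/nfs is hypothetical; oflag=dsync is GNU dd):

  # async: dd returns as soon as the OS has buffered the writes
  dd if=/dev/zero of=/mnt/nfs/testfile bs=4k count=1000
  # sync: each 4k write must reach stable storage before the next is issued
  dd if=/dev/zero of=/mnt/nfs/testfile bs=4k count=1000 oflag=dsync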

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat wrote: > On 01/04/2010 14:49, Ross Walker wrote: >>> >>> We're talking about the "sync" for NFS exports in Linux; what do they >>> mean >>> with "sync" NFS exports? >> >>

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Ross Walker
On Fri, Apr 2, 2010 at 8:03 AM, Edward Ned Harvey wrote: >> > Seriously, all disks configured WriteThrough (spindle and SSD disks >> > alike) >> > using the dedicated ZIL SSD device, very noticeably faster than >> > enabling the >> > WriteBack. >> >> What do you get with both SSD ZIL and WriteBack

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Ross Walker
On Apr 19, 2010, at 12:50 PM, Don wrote: Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? The rpool should be in /etc/zfs/zpool.cache. The shared pool should be in /etc/cluster/zpool.cache (or wherever you p
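
A minimal sketch of the cachefile property in question (pool and device names hypothetical); a pool created with a non-default cachefile stays out of /etc/zfs/zpool.cache and so is not auto-imported at boot by either node:

  # create the shared pool with a cluster-private cachefile
  zpool create -o cachefile=/etc/cluster/zpool.cache tank mirror c0t0d0 c0t1d0
  # or point an existing pool's cache entry at it
  zpool set cachefile=/etc/cluster/zpool.cache tank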

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-21 Thread Ross Walker
On Apr 20, 2010, at 12:13 AM, Sunil wrote: Hi, I have a strange requirement. My pool consists of two 500GB disks in a stripe, which I am trying to convert into a RAIDZ setup without data loss, but I have only two additional disks: 750GB and 1TB. So, here is what I thought: 1. Carve a 500GB s
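
As a heavily hedged sketch of where step 1 heads (device names hypothetical; the 500GB slices would first be carved on the 750GB and 1TB disks with format(1M)), ZFS will accept a mix of slices and whole disks in one raidz vdev:

  # raidz built from two 500GB slices plus a 500GB whole disk
  zpool create newpool raidz c2t0d0s0 c3t0d0s0 c1t0d0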

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-22 Thread Ross Walker
On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote: From: matthew patton [mailto:patto...@yahoo.com] Sent: Tuesday, April 20, 2010 12:54 PM Geoff Nordli wrote: With our particular use case we are going to do a "save state" on their virtual machines, which is going to write 100-400 MB per VM v

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-23 Thread Ross Walker
On Apr 22, 2010, at 11:03 AM, Geoff Nordli wrote: From: Ross Walker [mailto:rswwal...@gmail.com] Sent: Thursday, April 22, 2010 6:34 AM On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote: If you combine the hypervisor and storage server and have students connect to the VMs via RDP or VNC

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Ross Walker
On May 6, 2010, at 8:34 AM, Edward Ned Harvey wrote: From: Pasi Kärkkäinen [mailto:pa...@iki.fi] In neither case do you have data or filesystem corruption. ZFS probably is still OK, since it's designed to handle this (?), but the data can't be OK if you lose 30 secs of writes.. 30 secs o

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
On May 12, 2010, at 1:17 AM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a shared pool that is imported during a failover. The problem is that we use Z

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
On May 12, 2010, at 3:06 PM, Manoj Joseph wrote: Ross Walker wrote: On May 12, 2010, at 1:17 AM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access t

Re: [zfs-discuss] ZFS High Availability

2010-05-13 Thread Ross Walker
On May 12, 2010, at 7:12 PM, Richard Elling wrote: On May 11, 2010, at 10:17 PM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a shared pool that is impo

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Ross Walker
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I was going. There only seems to be one SSD that is considered "working", the Zeus IOPS. Even if I had the money, I can't buy it. As my ap
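
For context, disabling the ZIL on builds of that era was a system-wide tunable rather than a per-dataset property (the sync property came later); a sketch, assuming an /etc/system edit:

  # /etc/system -- disables the ZIL pool-wide; takes effect after reboot
  set zfs:zil_disable = 1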

Re: [zfs-discuss] New SSD options

2010-05-21 Thread Ross Walker
On May 20, 2010, at 7:17 PM, Ragnar Sundblad wrote: On May 21, 2010, at 00:53, Ross Walker wrote: On May 20, 2010, at 6:25 PM, Travis Tabbal wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I was going. There only seems

Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Ross Walker
On Jun 2, 2010, at 12:03 PM, zfsnoob4 wrote: Wow thank you very much for the clear instructions. And Yes, I have another 120GB drive for the OS, separate from A, B and C. I will repartition the drive and install Solaris. Then maybe at some point I'll delete the entire drive and just instal

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Ross Walker
On Jun 7, 2010, at 2:10 AM, Erik Trimble wrote: Comments in-line. On 6/6/2010 9:16 PM, Ken wrote: I'm looking at VMWare, ESXi 4, but I'll take any advice offered. On Sun, Jun 6, 2010 at 19:40, Erik Trimble wrote: On 6/6/2010 6:22 PM, Ken wrote: Hi, I'm looking to build a virtualiz

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Ross Walker
On Jun 8, 2010, at 1:33 PM, besson3c wrote: Sure! The pool consists of 6 SATA drives configured as RAID-Z. There are no special read or write cache drives. This pool is shared to several VMs via NFS, these VMs manage email, web, and a Quickbooks server running on FreeBSD, Linux, and Wind

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Ross Walker
On Jun 10, 2010, at 5:54 PM, Richard Elling wrote: On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote: Andrey Kuzmin wrote: Well, I'm more accustomed to "sequential vs. random", but YMMV. As to 67000 512-byte writes (this sounds suspiciously close to 32 MB fitting into cache), did you have w

Re: [zfs-discuss] Please trim posts

2010-06-11 Thread Ross Walker
On Jun 11, 2010, at 2:07 AM, Dave Koelmeyer wrote: I trimmed, and then got complained at by a mailing list user that the context of what I was replying to was missing. Can't win :P If at a minimum one trims the disclaimers, footers and signatures, that's better than nothing. On long th

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving ba

2010-06-14 Thread Ross Walker
On Jun 13, 2010, at 2:14 PM, Jan Hellevik wrote: Well, for me it was a cure. Nothing else I tried got the pool back. As far as I can tell, the way to get it back should be to use symlinks to the fdisk partitions on my SSD, but that did not work for me. Using -V got the pool back. What is

Re: [zfs-discuss] Dedup... still in beta status

2010-06-16 Thread Ross Walker
On Jun 16, 2010, at 9:02 AM, Carlos Varela wrote: Does the machine respond to ping? Yes If there is a gui does the mouse pointer move? There is no GUI (nexentastor) Does the keyboard numlock key respond at all ? Yes I just find it very hard to believe that such a situation cou

Re: [zfs-discuss] SLOG striping? (Bob Friesenhahn)

2010-06-22 Thread Ross Walker
On Jun 22, 2010, at 8:40 AM, Jeff Bacon wrote: >> The term 'stripe' has been so outrageously severely abused in this >> forum that it is impossible to know what someone is talking about when >> they use the term. Seemingly intelligent people continue to use wrong >> terminology because they thin

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Ross Walker
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote: > > 128GB. > > Does it mean that for dataset used for databases and similar environments > where basically all blocks have fixed size and there is no other data all > parity information will end-up on one (z1) or two (z2) specific disks? W

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: > On 23/06/2010 18:50, Adam Leventhal wrote: >>> Does it mean that for dataset used for databases and similar environments >>> where basically all blocks have fixed size and there is no other data all >>> parity information will end-up on one

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote: > On 24/06/2010 14:32, Ross Walker wrote: >> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: >> >> >>> On 23/06/2010 18:50, Adam Leventhal wrote: >>> >>>>> Does i

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-10 Thread Ross Walker
On Jul 10, 2010, at 5:46 AM, Erik Trimble wrote: > On 7/10/2010 1:14 AM, Graham McArdle wrote: >>> Instead, create "Single Disk" arrays for each disk. >>> >> I have a question related to this but with a different controller: If I'm >> using a RAID controller to provide non-RAID single-disk

Re: [zfs-discuss] Encryption?

2010-07-11 Thread Ross Walker
On Jul 11, 2010, at 5:11 PM, Freddie Cash wrote: > ZFS-FUSE is horribly unstable, although that's more an indication of > the stability of the storage stack on Linux. Not really, more an indication of the pseudo-VFS layer implemented in fuse. Remember fuse provides its own VFS API separate fro

Re: [zfs-discuss] Need ZFS master!

2010-07-13 Thread Ross Walker
The whole disk layout should be copied from disk 1 to 2, then the slice on disk 2 that corresponds to the slice on disk 1 should be attached to the rpool which forms an rpool mirror (attached not added). Then you need to add the grub bootloader to disk 2. When it finishes resilvering then you
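
Sketched with hypothetical device names (SMI-labeled boot disks assumed), the sequence would look like:

  # copy disk 1's slice layout to disk 2
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
  # attach (not add) the matching slice to form the rpool mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # make disk 2 bootable once resilvering completes
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0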

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-20 Thread Ross Walker
On Jul 20, 2010, at 6:12 AM, v wrote: > Hi, > for zfs raidz1, I know that for random io, the iops of a raidz1 vdev equal the iops of one > physical disk; since raidz1 is like raid5, does raid5 have the same > performance as raidz1? i.e. random iops equal to one physical disk's iops. On reads, no, any part of

Re: [zfs-discuss] File cloning

2010-07-22 Thread Ross Walker
On Jul 22, 2010, at 2:41 PM, Miles Nordin wrote: >> "sw" == Saxon, Will writes: > >sw> 'clone' vs. a 'copy' would be very easy since we have >sw> deduplication now > > dedup doesn't replace the snapshot/clone feature for the > NFS-share-full-of-vmdk use case because there's no equi

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-25 Thread Ross Walker
On Jul 23, 2010, at 10:14 PM, Edward Ned Harvey wrote: >> From: Arne Jansen [mailto:sensi...@gmx.net] >>> >>> Can anyone else confirm or deny the correctness of this statement? >> >> As I understand it that's the whole point of raidz. Each block is its >> own >> stripe. > > Nope, that doesn't

Re: [zfs-discuss] Mirrored raidz

2010-07-26 Thread Ross Walker
On Jul 26, 2010, at 2:51 PM, Dav Banks wrote: > I wanted to test it as a backup solution. Maybe that's crazy in itself but I > want to try it. > > Basically, once a week detach the 'backup' pool from the mirror, replace the > drives, add the new raidz to the mirror and let it resilver and sit

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Ross Walker
On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais wrote: > > On May 27, 2010, at 07:03, Brent Jones wrote: > >> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly >> wrote: >>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands: >>> >>> sh-4.0# zfs create rpool/iscsi >>> sh-4.0#

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Ross Walker
On Aug 3, 2010, at 5:56 PM, Robert Milkowski wrote: > On 03/08/2010 22:49, Ross Walker wrote: >> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais >> wrote: >> >>> On May 27, 2010, at 07:03, Brent Jones wrote: >>>

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 3:52 AM, Roch wrote: > > Ross Walker writes: > >> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais >> wrote: >> >>> >>> On May 27, 2010, at 07:03, Brent Jones wrote: >>> >>>> On Wed, May 26, 2010 at 5:08 A

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 9:20 AM, Roch wrote: > > > Ross Asks: > So on that note, ZFS should disable the disks' write cache, > not enable them despite ZFS's COW properties because it > should be resilient. > > No, because ZFS builds resiliency on top of unreliable parts. It's able to > deal

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 12:04 PM, Roch wrote: > > Ross Walker writes: >> On Aug 4, 2010, at 9:20 AM, Roch wrote: >> >>> >>> >>> Ross Asks: >>> So on that note, ZFS should disable the disks' write cache, >>> not enable t

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 11:10 AM, Roch wrote: > > Ross Walker writes: >> On Aug 4, 2010, at 12:04 PM, Roch wrote: >> >>> >>> Ross Walker writes: >>>> On Aug 4, 2010, at 9:20 AM, Roch wrote: >>>> >>>>> >>>&g

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 2:24 PM, Roch Bourbonnais wrote: > > On Aug 5, 2010, at 19:49, Ross Walker wrote: > >> On Aug 5, 2010, at 11:10 AM, Roch wrote: >> >>> >>> Ross Walker writes: >>>> On Aug 4, 2010, at 12:04 PM, Roch wrote: >>>

Re: [zfs-discuss] ZFS and VMware

2010-08-14 Thread Ross Walker
On Aug 14, 2010, at 8:26 AM, "Edward Ned Harvey" wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey >> >> #3 I previously believed that vmfs3 was able to handle sparse files >> amazingly well, like, when you create

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
On Aug 16, 2010, at 9:06 AM, "Edward Ned Harvey" wrote: > ZFS does raid, and mirroring, and resilvering, and partitioning, and NFS, and > CIFS, and iSCSI, and device management via vdev's, and so on. So ZFS steps > on a lot of linux peoples' toes. They already have code to do this, or that,

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
On Aug 15, 2010, at 9:44 PM, Peter Jeremy wrote: > Given that both provide similar features, it's difficult to see why > Oracle would continue to invest in both. Given that ZFS is the more > mature product, it would seem more logical to transfer all the effort > to ZFS and leave btrfs to die.

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Ross Walker
On Sep 27, 2009, at 3:19 AM, Paul Archer wrote: So, after *much* wrangling, I managed to take one of my drives offline, relabel/repartition it (because I saw that the first sector was 34, not 256, and realized there could be an alignment issue), and get it back into the pool. Problem is t

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Ross Walker
On Sep 27, 2009, at 11:49 AM, Paul Archer wrote: Problem is that while it's back, the performance is horrible. It's resilvering at about (according to iostat) 3.5MB/sec. And at some point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'), and iostat showed me that the

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Ross Walker
On Sep 27, 2009, at 1:44 PM, Paul Archer wrote: My controller, while normally a full RAID controller, has had its BIOS turned off, so it's acting as a simple SATA controller. Plus, I'm seeing this same slow performance with dd, not just with ZFS. And I wouldn't think that write caching wou

Re: [zfs-discuss] OS install question

2009-09-27 Thread Ross Walker
On Sep 27, 2009, at 8:41 PM, Ron Watkins wrote: I have a box with 4 disks. It was my intent to place a mirrored root partition on 2 disks on different controllers, then use the remaining space and the other 2 disks to create a raid-5 configuration from which to export iscsi luns for use by

Re: [zfs-discuss] OS install question

2009-09-27 Thread Ross Walker
On Sep 27, 2009, at 10:05 PM, Ron Watkins wrote: My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another mirrored app fs on c1t0d0s1/c2t0d0s1 and then a 3+1 Raid-5 accross c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7. There is no need for the 2 mirrors both on c1t0 and c2t0 one mirrored
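
In command form the proposed layout would look roughly like this (the root mirror is normally created by the installer, or attached afterwards):

  # mirror the root slices across controllers
  zpool attach rpool c1t0d0s0 c2t0d0s0
  # 3+1 raidz across slice 7 of all four disks
  zpool create data raidz c1t0d0s7 c1t1d0s7 c2t0d0s7 c2t1d0s7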

Re: [zfs-discuss] NFS/ZFS slow on parallel writes

2009-09-29 Thread Ross Walker
On Tue, Sep 29, 2009 at 10:35 AM, Richard Elling wrote: > > On Sep 29, 2009, at 2:03 AM, Bernd Nies wrote: > >> Hi, >> >> We have a Sun Storage 7410 with the latest release (which is based upon >> opensolaris). The system uses a hybrid storage pool (23 1TB SATA disks in >> RAIDZ2 and 1 18GB SSD as

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread Ross Walker
On Tue, Sep 29, 2009 at 5:30 PM, David Stewart wrote: > Before I try these options you outlined I do have a question.  I went in to > VMWare Fusion and removed one of the drives from the virtual machine that was > used to create a RAIDZ pool (there were five drives, one for the OS, and four > f

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Ross Walker
On Sep 30, 2009, at 10:40 AM, Brian Hubbleday wrote: Just realised I missed a rather important word out there, that could confuse. So the conclusion I draw from this is that the --incremental-- snapshot simply contains every written block since the last snapshot regardless of whether the
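
That block-level behavior is easy to observe directly; a sketch with a hypothetical dataset tank/fs:

  # size of the incremental stream between two snapshots
  zfs send -i tank/fs@snap1 tank/fs@snap2 | wc -c
  # the same stream applied to a backup dataset
  zfs send -i tank/fs@snap1 tank/fs@snap2 | zfs receive backup/fs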

Re: [zfs-discuss] Slow reads with ZFS+NFS

2009-10-20 Thread Ross Walker
But this is concerning reads not writes. -Ross On Oct 20, 2009, at 4:43 PM, Trevor Pretty wrote: Gary Were you measuring the Linux NFS write performance? It's well known that Linux can use NFS in a very "unsafe" mode and report the write complete when it is not all the way to safe s

Re: [zfs-discuss] Slow reads with ZFS+NFS

2009-10-20 Thread Ross Walker
' which is unsafe. -Ross Ross Walker wrote: But this is concerning reads not writes. -Ross On Oct 20, 2009, at 4:43 PM, Trevor Pretty wrote: Gary Were you measuring the Linux NFS write performance? It's well known that Linux can use NFS in a very "unsafe" mo

Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-11-03 Thread Ross Walker
On Nov 2, 2009, at 2:38 PM, "Paul B. Henson" wrote: On Sat, 31 Oct 2009, Al Hopper wrote: Kudos to you - nice technical analysis and presentation, Keep lobbying your point of view - I think interoperability should win out if it comes down to an arbitrary decision. Thanks; but so far tha

Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-11-06 Thread Ross Walker
On Nov 6, 2009, at 11:23 PM, "Paul B. Henson" wrote: NFSv3 gss: damien cfservd # mount -o sec=krb5p ike.unx.csupomona.edu:/export/ user/henson /mnt hen...@damien /mnt/sgid_test $ ls -ld drwx--s--x+ 2 henson iit 2 Nov 6 20:14 . hen...@damien /mnt/sgid_test $ mkdir gss hen...@damien /mnt/

Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Ross Walker
On Nov 8, 2009, at 12:09 PM, Tim Cook wrote: Why not just convert the VM's to run in virtualbox and run Solaris directly on the hardware? Or use OpenSolaris xVM (Xen) with either qemu img files on zpools for the VMs or zvols. -Ross ___ zfs-dis

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-27 Thread Ross Walker
On Nov 27, 2009, at 12:55 PM, Carsten Aulbert wrote: On Friday 27 November 2009 18:45:36 Carsten Aulbert wrote: I was too fast, now it looks completely different: scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27 18:46:33 2009 [...] s13:~# zpool status pool: atlashome state

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Ross Walker
On Dec 2, 2009, at 6:57 AM, Brian McKerr wrote: Hi all, I have a home server based on SNV_127 with 8 disks; 2 x 500GB mirrored root pool 6 x 1TB raidz2 data pool This server performs a few functions; NFS : for several 'lab' ESX virtual machines NFS : mythtv storage (videos, music, recordi

Re: [zfs-discuss] Doing ZFS rollback with preserving later created clones/snapshot?

2009-12-11 Thread Ross Walker
On Dec 11, 2009, at 4:17 AM, Alexander Skwar wrote: Hello Jeff! Could you (or anyone else, of course *G*) please show me how? Situation: There shall be 2 snapshots of a ZFS called rpool/rb-test. Let's call those snapshots "01" and "02". $ sudo zfs create rpool/rb-test $ zfs list rpool/rb-t

Re: [zfs-discuss] Doing ZFS rollback with preserving later created clones/snapshot?

2009-12-11 Thread Ross Walker
On Dec 11, 2009, at 3:26 PM, Alexander Skwar wrote: Hi! On Fri, Dec 11, 2009 at 15:55, Fajar A. Nugraha wrote: On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar wrote: $ sudo zfs create rpool/rb-test $ zfs list rpool/rb-test NAME USED AVAIL REFER MOUNTPOINT rpool/rb-test
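
The usual answer to the subject line, sketched against the rpool/rb-test example (the clone name is hypothetical): rather than zfs rollback, which destroys snapshots taken after the rollback target, clone the old snapshot and promote the clone.

  # clone the snapshot you would otherwise roll back to
  zfs clone rpool/rb-test@01 rpool/rb-test-01
  # swap parent/clone roles so the clone owns the history
  zfs promote rpool/rb-test-01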

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Ross Walker
On Dec 21, 2009, at 4:09 PM, Michael Herf wrote: Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for months? Scrubbing on a routine basis is good for detecting problems early, but it doesn't so

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Ross Walker
On Dec 21, 2009, at 11:56 PM, Roman Naumenko wrote: On Dec 21, 2009, at 4:09 PM, Michael Herf wrote: Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for months? Scrubbing on a routine basis i

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread Ross Walker
On Dec 22, 2009, at 11:46 AM, Bob Friesenhahn wrote: On Tue, 22 Dec 2009, Ross Walker wrote: Raid10 provides excellent performance and if performance is a priority then I recommend it, but I was under the impression that resiliency was the priority, as raidz2/raidz3 provide grea

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Ross Walker
On Dec 22, 2009, at 8:40 PM, Charles Hedrick wrote: It turns out that our storage is currently being used for * backups of various kinds, run daily by cron jobs * saving old log files from our production application * saving old versions of java files from our production application Most of

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Ross Walker
On Dec 22, 2009, at 8:58 PM, Richard Elling wrote: On Dec 22, 2009, at 5:40 PM, Charles Hedrick wrote: It turns out that our storage is currently being used for * backups of various kinds, run daily by cron jobs * saving old log files from our production application * saving old versions o

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Ross Walker
On Dec 22, 2009, at 9:08 PM, Bob Friesenhahn wrote: On Tue, 22 Dec 2009, Ross Walker wrote: I think zil_disable may actually make sense. How about a zil comprised of two mirrored iSCSI vdevs formed from a SSD on each box? I would not have believed that this is a useful idea except t

Re: [zfs-discuss] Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL)

2009-12-25 Thread Ross Walker
On Dec 25, 2009, at 6:01 PM, Jeroen Roodhart wrote: Hi Freddie, list, Option 4 is to re-do your pool, using fewer disks per raidz2 vdev, giving more vdevs to the pool, and thus increasing the IOps for the whole pool. 14 disks in a single
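
Option 4 in command form, with hypothetical 8-disk names; random IOPS scale with the number of vdevs, since each raidz2 vdev delivers roughly one disk's worth of random reads:

  # one wide raidz2 vdev: ~1 disk of random IOPS
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
  # two narrower raidz2 vdevs from the same disks: ~2x the random IOPS
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
                    raidz2 c0t4d0 c0t5d0 c0t6d0 c0t7d0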

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 7:55 AM, Brad wrote: Thanks for the suggestion! I have heard mirrored vdevs configuration are preferred for Oracle but whats the difference between a raidz mirrored vdev vs a raid10 setup? A mirrored raidz provides redundancy at a steep cost to performance and might

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn wrote: On Tue, 29 Dec 2009, Ross Walker wrote: A mirrored raidz provides redundancy at a steep cost to performance and might I add a high monetary cost. I am not sure what a "mirrored raidz" is. I have never heard of such a

Re: [zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 5:37 PM, Brad wrote: Hi! I'm attempting to understand the pros/cons between raid5 and raidz after running into a performance issue with Oracle on zfs (http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0). I would appreciate some feedback on what I've und

Re: [zfs-discuss] repost - high read iops

2009-12-30 Thread Ross Walker
On Wed, Dec 30, 2009 at 12:35 PM, Bob Friesenhahn wrote: > On Tue, 29 Dec 2009, Ross Walker wrote: >> >>> Some important points to consider are that every write to a raidz vdev >>> must be synchronous.  In other words, the write needs to complete on all the >>

Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130

2009-12-30 Thread Ross Walker
On Dec 30, 2009, at 11:55 PM, "Steffen Plotner" wrote: Hello, I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow, ~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs file system with the same te
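
A minimal way to reproduce that comparison, assuming a hypothetical pool tank (the zvol is written through its raw device node):

  # 1MB writes to a file on a ZFS filesystem
  dd if=/dev/zero of=/tank/fs/testfile bs=1024k count=1000
  # the same writes to a zvol
  zfs create -V 2g tank/testvol
  dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=1000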

Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130

2010-01-04 Thread Ross Walker
On Sun, Jan 3, 2010 at 1:59 AM, Brent Jones wrote: > On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker wrote: >> On Dec 30, 2009, at 11:55 PM, "Steffen Plotner" >> wrote: >> >> Hello, >> >> I was doing performance testing, validating zvol performa

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-04 Thread Ross Walker
On Mon, Jan 4, 2010 at 2:27 AM, matthew patton wrote: > I find it baffling that RaidZ(2,3) was designed to split a record-size block > into N (N=# of member devices) pieces and send the uselessly tiny requests to > spinning rust when we know the massive delays entailed in head seeks and > rotat

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-06 Thread Ross Walker
On Wed, Jan 6, 2010 at 4:30 PM, Wes Felter wrote: > Michael Herf wrote: > >> I agree that RAID-DP is much more scalable for reads than RAIDZx, and >> this basically turns into a cost concern at scale. >> >> The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be >> used instead of n

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Ross Walker
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn wrote: On Mon, 11 Jan 2010, bank kus wrote: Are we still trying to solve the starvation problem? I would argue the disk I/O model is fundamentally broken on Solaris if there is no fair I/O scheduling between multiple read sources until that

Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Ross Walker
On Jan 14, 2010, at 10:44 AM, "Mr. T Doodle" wrote: Hello, I have played with ZFS but not deployed any production systems using ZFS, and would like some opinions. I have a T-series box with 4 internal drives and would like to deploy ZFS with availability and performance in mind ;

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Ross Walker
On Jan 21, 2010, at 6:47 PM, Daniel Carosone wrote: On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote: + support file systems larger than 2GiB + include 32-bit UIDs and GIDs file systems, but what about individual files within? I think the original author meant files bigger than 2

Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-30 Thread Ross Walker
On Jan 30, 2010, at 2:53 PM, Mark wrote: I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install opensolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the remaining part of disk 1

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 9:53 AM, Henu wrote: Okay, so first of all, it's true that send is always fast and 100% reliable because it uses blocks to see differences. Good, and thanks for this information. If everything else fails, I can parse the information I want from send stream :) But am I

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 12:35 PM, Frank Cusack <z...@linetwo.net> wrote: On February 3, 2010 12:19:50 PM -0500 Frank Cusack wrote: If you do need to know about deleted files, the find method still may be faster depending on how ddiff determines whether or not to do a file diff. The docs don't expla

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 8:59 PM, Frank Cusack wrote: On February 3, 2010 6:46:57 PM -0500 Ross Walker wrote: So was there a final consensus on the best way to find the difference between two snapshots (files/directories added, files/directories deleted and file/directories changed)? Find

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ross Walker
system functions offered by OS. I scan every byte in every file manually and it On February 3, 2010 10:11:01 AM -0500 Ross Walker wrote: Not a ZFS method, but you could use rsync with the dry run option to list all changed fi
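
Concretely, the rsync dry-run idea can be pointed at the hidden .zfs snapshot directories (paths hypothetical):

  # list files added/changed in snap2, and (with --delete) files removed since snap1
  rsync -avn --delete /tank/fs/.zfs/snapshot/snap2/ /tank/fs/.zfs/snapshot/snap1/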

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ross Walker
Interesting, can you explain what zdb is dumping exactly? I suppose you would be looking for blocks referenced in the snapshot that have a single reference and print out the associated file/directory name? -Ross On Feb 4, 2010, at 7:29 AM, Darren Mackay wrote: Hi Ross, zdb - f..

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Ross Walker
On Feb 5, 2010, at 10:49 AM, Robert Milkowski wrote: Actually, there is. One difference is that when writing to a raid-z{1|2} pool compared to raid-10 pool you should get better throughput if at least 4 drives are used. Basically it is due to the fact that in RAID-10 the maximum you can g

Re: [zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)

2010-02-09 Thread Ross Walker
On Feb 8, 2010, at 4:58 PM, Edward Ned Harvey wrote: How are you managing UID's on the NFS server? If user eharvey connects to server from client Mac A, or Mac B, or Windows 1, or Windows 2, or any of the linux machines ... the server has to know it's eharvey, and assign the correct UID'
