Re: [zfs-discuss] ZFS read performance terrible

2010-08-01 Thread Karol
I can achieve 140MB/s to individual disks until I hit a 1GB/s system ceiling, which I suspect may be all that the 4x SAS HBA connection on a 3Gb/s SAS expander can handle (just a guess). Anyway, with ZFS or SVM I can't do much beyond single-disk performance in total (if that). I am thinking
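
Back-of-the-envelope check on that ceiling, assuming a 4-lane 3Gb/s wide port with 8b/10b line coding (the coding overhead is an assumption about the link, not something stated in the thread):

    # 4 lanes x 3 Gbit/s x 8/10 (8b/10b coding) / 8 bits per byte
    echo '4 * 3 * 8 / 10 / 8' | bc -l    # ~1.2 GB/s payload ceiling, before SAS/SCSI protocol overhead

So an observed ~1GB/s aggregate is plausible as a wide-port limit rather than a ZFS limit.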

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Richard Elling
> 140MBps.
> What do you think?

That is about right per disk. I usually SWAG 100 +/- 50 MB/sec for HDD media speed.
 -- richard

> Thank you again for your help!
> --- On Thu, 7/29/10, Richard Elling wrote:
> From: Richard Elling
> Subject: Re: [zfs-discuss]

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Brent Jones
On Thu, Jul 29, 2010 at 6:04 PM, Carol wrote:
> Richard,
>
> I disconnected all but one path and disabled mpxio via stmsboot -d and my
> read performance doubled. I saw about 100MBps average from the pool.
>
> BTW, single harddrive performance (single disk in a pool) is about 140MBps.
>
> What d

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Carol
Thu, 7/29/10, Richard Elling wrote:
From: Richard Elling
Subject: Re: [zfs-discuss] ZFS read performance terrible
To: "Carol"
Cc: "zfs-discuss@opensolaris.org"
Date: Thursday, July 29, 2010, 2:03 PM

On Jul 29, 2010, at 9:57 AM, Carol wrote:
> Yes I noticed that threa

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Carol
> Yep. With round robin it's about 80ms asvc_t for each disk.

Any ideas?

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
I'm about to do some testing with that dtrace script. However, in the meantime I've disabled primarycache (set primarycache=none), since I noticed it was easily caching the /dev/zero test data and I wanted to do some tests within the OS rather than over FC. I am getting the same results through dd. Vi
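
For reference, the kind of test being described - a minimal sketch, with a placeholder pool name and file size:

    # bypass the ARC so reads actually hit the disks (placeholder pool name "tank")
    zfs set primarycache=none tank
    # write a large test file, then read it back while watching iostat/zpool iostat
    dd if=/dev/zero of=/tank/testfile bs=1M count=8192
    dd if=/tank/testfile of=/dev/null bs=1M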

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Eff Norwood
Yes, because the author was too smart for his own good and ssd is for SPARC; you use sd. Delete all the ssd lines. Here's that script, which will work for you provided it doesn't get wrapped or otherwise maligned by this HTML interface:

#!/usr/sbin/dtrace -s
#pragma D option quiet
fbt:sd:sdstrat
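
If that script does get mangled, a generic alternative (not the author's script) is to quantize block I/O latency per device with the stock io provider - a minimal sketch:

    # histogram of I/O completion latency per device, keyed on the buf pointer
    dtrace -n '
        io:::start { start[arg0] = timestamp; }
        io:::done /start[arg0]/ {
            @[args[1]->dev_statname] = quantize(timestamp - start[arg0]);
            start[arg0] = 0;
        }'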

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
> You should look at your disk IO patterns which will likely lead you to find
> unset IO queues in sd.conf. Look at
> http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io as a place to start.

Any idea why I would get this message from the dtrace script? (I'm new to dtrace / open

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
Good idea. I will keep this test in mind - I'd do it immediately except that it would be somewhat difficult to connect power to the drives given the design of my chassis, but I'm sure I can figure something out if it comes to it...

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Eff Norwood
You should look at your disk IO patterns, which will likely lead you to find unset IO queues in sd.conf. Look at http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io as a place to start. The parameter you can try to set globally (bad idea) is set by doing echo zfs_vdev_max_pending/W
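
For context, a sketch of what those two knobs usually look like - the values, the vendor/product string and the 0t10 figure below are placeholders, not recommendations from this thread:

    # global (the "bad idea" route): cap ZFS's per-vdev queue depth on the live kernel
    echo "zfs_vdev_max_pending/W0t10" | mdb -kw

    # per-target route: an sd-config-list entry in /kernel/drv/sd.conf, e.g.
    sd-config-list = "ATA     ExampleDisk     ", "throttle-max:32, disksort:false";
    # then reboot, or force sd to re-read its config with: update_drv -f sd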

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Alexander Lesle
Hello Karol, you wrote on 29 July 2010 02:23:
> I appear to be getting between 2-9MB/s reads from individual disks

It sounds to me like you have a hardware failure, because 2-9 MB/s is abysmal.

> 2x LSI 9200-8e SAS HBAs (2008 chipset)
> Supermicro 846e2 enclosure with LSI sasx36 e
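
One quick way to separate a hardware problem from a ZFS/multipath problem is to read straight from the raw device, bypassing ZFS entirely - a sketch, with a placeholder device path (pick a disk from format or iostat -xn):

    # sequential read of 4GB from the raw whole-disk device; a healthy 7200rpm
    # SAS/SATA disk should sustain roughly 100MB/s or more here
    dd if=/dev/rdsk/c0t0d0p0 of=/dev/null bs=1M count=4096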

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Hi Robert - I tried all of your suggestions but unfortunately my performance did not improve. I tested single-disk performance and I get 120-140MB/s read/write to a single disk. As soon as I add an additional disk (mirror, stripe, raidz), my performance drops significantly. I'm using 8Gbit F
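
A sketch of that single-disk vs. multi-disk comparison, with placeholder pool and device names, reading through ZFS with the ARC out of the way:

    zpool create single c1t0d0
    zpool create mirr mirror c1t1d0 c1t2d0
    zfs set primarycache=none single
    zfs set primarycache=none mirr
    dd if=/dev/zero of=/single/f bs=1M count=4096
    dd if=/dev/zero of=/mirr/f bs=1M count=4096
    # read back while watching per-disk throughput in another terminal (zpool iostat -v 1)
    dd if=/single/f of=/dev/null bs=1M
    dd if=/mirr/f of=/dev/null bs=1M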

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread StorageConcepts
Actually, writes faster than reads are typical for a copy-on-write FS (or Write Anywhere). I usually describe it like this: CoW in ZFS works like when you come home after a long day and you just want to go to bed. You take off one piece of clothing after another and drop it on the floor just where

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Richard Elling
On Jul 29, 2010, at 9:57 AM, Carol wrote:
> Yes I noticed that thread a while back and have been doing a great deal of
> testing with various scsi_vhci options.
> I am disappointed that the thread hasn't moved further since I also suspect
> that it is related to mpt-sas or multipath or expande

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Yes, I noticed that thread a while back and have been doing a great deal of testing with various scsi_vhci options. I am disappointed that the thread hasn't moved further, since I also suspect that it is mpt-sas, multipath, or expander related. I was able to get aggregate writes up t
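
For reference, the scsi_vhci options being tested are typically the MPxIO path policy in /kernel/drv/scsi_vhci.conf - a sketch, the chosen value is only an example:

    # /kernel/drv/scsi_vhci.conf: path selection policy (default is round-robin)
    load-balance="none";        # alternatives: "round-robin", "logical-block"

    # or take MPxIO out of the picture entirely, as tried later in the thread
    stmsboot -d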

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Richard Elling
This sounds very similar to another post last month:
http://opensolaris.org/jive/thread.jspa?messageID=487453
The trouble appears to be below ZFS, so you might try asking on the storage-discuss forum.
 -- richard

On Jul 28, 2010, at 5:23 PM, Karol wrote:
> I appear to be getting between 2-9MB/s

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Richard Jahnel
>Hi r2ch
>The operations column shows about 370 operations for read - per spindle
>(Between 400-900 for writes)
>How should I be measuring iops?

It seems to me then that your spindles are going about as fast as they can and you're just moving small block sizes. There are lots of ways to test for
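
Rough arithmetic behind that reading (the block sizes are illustrative assumptions, not figures from the thread):

    # 370 IOPS at small vs. large I/O sizes
    echo '370 * 8 / 1024' | bc -l      # ~2.9 MB/s at 8 KB per I/O - in the reported 2-9 MB/s range
    echo '370 * 128 / 1024' | bc -l    # ~46 MB/s at 128 KB per I/O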

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
> Update to my own post. Further tests more consistently resulted in closer to 150MB/s.
>
> When I took one disk offline, it was just shy of 100MB/s on the single disk.
> There is both an obvious improvement with the mirror, and a trade-off
> (perhaps the latter is controller related?).

Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Karol
Hi r2ch
The operations column shows about 370 operations for read - per spindle (between 400-900 for writes).
How should I be measuring iops?
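
One way to see per-device IOPS directly, rather than reading them off the zpool iostat operations column - a minimal sketch:

    # r/s and w/s are per-device read/write IOPS; asvc_t is average service time
    # in ms and actv is the number of commands outstanding on the device
    iostat -xn 1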

Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Richard Jahnel
How many iops per spindle are you getting? A rule of thumb I use is to expect no more than 125 iops per spindle for regular HDDs. SSDs are a different story of course. :)
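
Where a figure like that comes from, assuming roughly 8 ms of combined average seek plus rotational latency per random I/O (an assumption for illustration, not a number from the thread):

    # ~8 ms per random I/O -> roughly 125 of them per second
    echo '1000 / 8' | bc -l    # = 125 IOPS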

[zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Karol
I appear to be getting between 2-9MB/s reads from individual disks in my zpool, as shown in iostat -v. I expect upwards of 100MB/s per disk, or at least aggregate performance on par with the number of disks that I have. My configuration is as follows:
Two Quad-core 5520 processors
48GB ECC/REG ra
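
For anyone reproducing this, the per-disk numbers described come from watching the pool during a large sequential read - a minimal sketch, with a placeholder pool name:

    # per-vdev and per-disk bandwidth and operations, sampled every second
    zpool iostat -v tank 1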