On Jul 29, 2010, at 6:04 PM, Carol wrote:
> Richard,
>  
> I disconnected all but one path and disabled mpxio via stmsboot -d and my 
> read performance doubled.  I saw about 100MBps average from the pool. 

This is a start.  Something is certainly fishy in the data paths, but
it is proving to be difficult to pinpoint.  The only common factor I see
at this time is the SuperMicro JBOD chassis. It would be worthwhile
checking to see if there are firmware updates available for the
chassis or expanders.

>  
> BTW, single hard drive performance (a single disk in a pool) is about 140MBps.
> What do you think? 

That is about right per disk.  I usually SWAG 100 +/- 50 MB/sec for HDD
media speed.
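For a quick sanity check of raw media speed, a sequential read straight
off one raw device bypasses ZFS and the ARC entirely (a sketch; the
device name is hypothetical):

  # Read 2 GB sequentially from the raw disk; divide 2048 MB by the
  # elapsed time to get MB/sec
  time dd if=/dev/rdsk/c7t2d0s0 of=/dev/null bs=1024k count=2048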
 -- richard

>  
> Thank you again for your help!
> 
> --- On Thu, 7/29/10, Richard Elling <rich...@nexenta.com> wrote:
> 
> From: Richard Elling <rich...@nexenta.com>
> Subject: Re: [zfs-discuss] ZFS read performance terrible
> To: "Carol" <holaaqu...@yahoo.com>
> Cc: "zfs-discuss@opensolaris.org" <zfs-discuss@opensolaris.org>
> Date: Thursday, July 29, 2010, 2:03 PM
> 
> On Jul 29, 2010, at 9:57 AM, Carol wrote:
> 
> > Yes, I noticed that thread a while back and have been doing a great
> > deal of testing with various scsi_vhci options.
> > I am disappointed that the thread hasn't moved further, since I also
> > suspect the problem is related to mpt_sas, multipathing, or the
> > expanders.
> 
> The thread is in the ZFS forum, but the problem is not a ZFS problem.
> 
> > I was able to get aggregate writes up to 500MBps out to the disks, but
> > reads have not improved beyond an aggregate average of about 50-70MBps
> > for the pool.
> 
> I find "zpool iostat" to be only marginally useful.  You need to look
> at the output of "iostat -zxCn", which shows the latency of the I/Os.
> Check whether the latency (asvc_t) is similar to the previous thread.
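> For example (a sketch; the sampling interval is illustrative):
> 
>   # Extended per-device statistics: -z skips idle devices, -x extends
>   # the output, -C adds per-controller totals, -n uses descriptive
>   # device names; 5-second samples
>   iostat -zxCn 5
> 
>   # asvc_t is the average service time in milliseconds; tens of ms
>   # during a streaming read points at the path rather than the media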
> 
> > I did not look much at read speeds during a lot of my previous testing
> > because I thought write speeds were my issue... And I've since realized
> > that my userland write-speed problem from zpool <-> zpool was actually
> > read-limited.
> 
> Writes are cached in RAM, so looking at iostat or zpool iostat doesn't offer
> the observation point you'd expect.
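> You can see this by watching the pool during a large write (a sketch;
> "tank" is a placeholder pool name):
> 
>   # Userland sees RAM speed; the disks see periodic transaction-group
>   # flushes as bursts in the write columns
>   zpool iostat -v tank 1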
> 
> > Since then I've tried mirrors, stripes, raidz, checked my drive caches,
> > tested recordsizes, volblocksizes, clustersizes, and combinations
> > thereof, and tried vol-backed LUNs, file-backed LUNs, wcd=false - etc.
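> > For the record, those knobs look roughly like this (a sketch; pool,
> > dataset, and sizes are placeholders):
> > 
> >   # Filesystem record size
> >   zfs set recordsize=128K tank/fs
> > 
> >   # A zvol's block size is fixed at creation time
> >   zfs create -V 100G -o volblocksize=64K tank/vol
> > 
> >   # COMSTAR LU backed by the zvol; wcd=false leaves the write cache
> >   # enabled on the logical unit
> >   stmfadm create-lu -p wcd=false /dev/zvol/rdsk/tank/vol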
> > 
> > Reads from disk are slow no matter what.  Of course - once the arc cache is 
> > populated, the userland experience is blazing - because the disks are not 
> > being read.
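> > (The ARC hit ratio confirms it; a sketch using kstat:)
> > 
> >   # ARC hits vs. misses; a high hit ratio means the disks sit idle
> >   kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses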
> 
> Yep, classic case of slow disk I/O.
> 
> > Seeing write speeds so much faster than reads strikes me as quite
> > strange from a hardware perspective, though, since writes also invoke a
> > read operation - do they not?
> 
> In many cases, writes do not invoke a read.  ZFS is copy-on-write:
> asynchronous writes are aggregated in RAM and written out as full blocks
> to fresh locations, so there is no read-modify-write cycle unless a
> partial block must first be fetched from disk.
> -- richard
> 
> 

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com


