> I'm using iozone to get some performance numbers and I/O hangs when
> it's doing the writing phase.
>
> This pool has:
>
> 18 x 2TB SAS disks as 9 data mirrors
> 2 x 32GB X-25E as log mirror
> 1 x 160GB X-160M as cache
>
> iostat shows "2" I/O operations active and SSDs at 100% busy when
> it's stuck.
Interesting. I have an SM 847E2 chassis with 33 Constellation 2TB SAS drives and 3 Vertex LE 100G SSDs, dual-connected across a pair of 9211-8is, Sol 10u8 with the May patchset, and it runs like a champ - I left several bonnie++ processes thrashing the pool for three days straight, not even a blip. (The rear and front backplanes are separately cabled to the controllers.)

(That's with load-balance="none", in deference to Josh Simon's observations - I'm not really willing to lock the paths because I want the auto-failover. I'm going to be dropping in another pair of 9211-4is and connecting the back 12 drives to them since I have the PCIe slots, though it's probably not especially necessary.)

I wonder if the expander chassis work better if you're running with the dual-expander-chip backplane? So far all of my testing with the 2TB SAS drives has been with single-expander-chip backplanes. Hm, might have to give that a try; it never came up simply because both of my dual-expander-chip-backplane JBODs were filled and in use, which just recently changed.

> My plan is to use the newest SC846E26 chassis with 2 cables but right
> now what I have available for testing is the SC846E1.

Agreed. I just got my first 847E2 chassis in today - I'd been waiting months for them to be available, and I'm not entirely sure there's any real stock (sorta like SM's quad-socket Magny-Cours boards - a month ago they didn't even have any boards in the USA available for RMA; they got one batch in and sold it in a week or so).

> >> Swapping the 9211-4i for a MegaRAID 8888ELP (mega_sas) improves
> >> performance by 30-40% instantly and there are no hangs anymore so I'm
> >> guessing it's something related to the mpt_sas driver.

Wait. The mpt_sas driver by default uses scsi_vhci, and scsi_vhci by default does round-robin load balancing. Have you tried setting load-balance="none" in scsi_vhci.conf?
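For reference, the change is a one-liner (a sketch - the path below assumes a stock Solaris 10 install; the setting takes effect after a reboot):

    # /kernel/drv/scsi_vhci.conf
    # valid values are "round-robin" (the default), "logical-block", and "none"
    load-balance="none";

With "none" all I/O goes down a single active path, but scsi_vhci still fails over to the surviving path if the active one drops, so you keep the redundancy without the round-robin behavior.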
-bacon
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss