On Feb 16, 2010, at 9:44 AM, Brian E. Imhoff wrote:

> Some more back story. I initially started with Solaris 10 u8, and was
> getting 40ish MB/s reads and 65-70 MB/s writes, which was still a far cry
> from the performance I was getting with OpenFiler. I decided to try
> OpenSolaris 2009.06, thinking that since it was more "state of the art &
> up to date" than mainline Solaris, perhaps there would be some performance
> tweaks or bug fixes that might bring performance closer to what I saw with
> OpenFiler. But then, on an untouched clean install of OpenSolaris 2009.06,
> I ran into something else apparently causing far, far worse performance.
You thought a release dated 2009.06 was further along than a release dated
2009.10? :-)

CR 6794730 was fixed in April, 2009, after the freeze for the 2009.06
release, but before the freeze for 2009.10.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6794730

The schedule is published here, so you can see that there is a freeze now
for the 2010.03 OpenSolaris release.
http://hub.opensolaris.org/bin/view/Community+Group+on/schedule

As they say in comedy, timing is everything :-(

> But, at the end of the day, this is quite a bomb: "A single raidz2 vdev
> has about as many IOs per second as a single disk, which could really
> hurt iSCSI performance."

The context for this statement is small, random reads. 40 MB/sec of 8 KB
reads is 5,000 IOPS, or about 50 HDDs' worth of small random reads at
100 IOPS/disk, or one decent SSD.

> If I have to break 24 disks up into multiple vdevs to get the expected
> performance, that might be a deal breaker. To keep raidz2 redundancy, I
> would have to lose almost half of the available storage to get reasonable
> IO speeds.

Are your requirements for bandwidth or IOPS?

> Now knowing about vdev IO limitations, I believe the speeds I saw with
> Solaris 10u8 are in line with those limitations, and instead of fighting
> with whatever issue I have with this clean install of OpenSolaris, I
> reverted back to 10u8. I guess I'll just have to see if the speeds that
> Solaris iSCSI w/ZFS is capable of are workable for what I want to do, and
> where the size-sacrifice/performance-acceptability point is.

In Solaris 10 you are stuck with the legacy iSCSI target code. In
OpenSolaris, you have the option of using COMSTAR, which performs and
scales better, as Roch describes here:
http://blogs.sun.com/roch/entry/iscsi_unleashed

> Thanks for all the responses and help. First time posting here, and this
> looks like an excellent community.

We try hard, and welcome the challenges :-)
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 15-17, 2010)
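
To make the capacity-versus-IOPS tradeoff above concrete, here is a rough
back-of-the-envelope sketch in Python. It assumes ~100 small random-read
IOPS per 7,200 RPM disk, roughly one disk's worth of random-read IOPS per
raidz2 vdev (the rule of thumb quoted above), that a 2-way mirror can serve
reads from either side, and 1 TB disks; the specific layouts and numbers are
illustrative assumptions, not figures from this thread.

    # Back-of-the-envelope sketch of the raidz2 vdev tradeoff for 24 disks.
    # Assumptions (illustrative, not from the thread):
    #   - ~100 small random-read IOPS per 7,200 RPM disk
    #   - a raidz2 vdev delivers roughly one disk's worth of random-read IOPS
    #   - a 2-way mirror vdev can read from either side, so ~2x one disk
    #   - 1 TB disks, 24 disks total

    DISKS = 24
    DISK_TB = 1.0
    DISK_IOPS = 100

    def raidz2(vdevs):
        """Split the disks evenly into `vdevs` raidz2 groups (2 parity disks each)."""
        per_vdev = DISKS // vdevs
        usable_tb = vdevs * (per_vdev - 2) * DISK_TB
        read_iops = vdevs * DISK_IOPS      # ~1 disk of random-read IOPS per vdev
        return usable_tb, read_iops

    def mirrors():
        """Twelve 2-way mirror vdevs."""
        usable_tb = (DISKS // 2) * DISK_TB
        read_iops = (DISKS // 2) * 2 * DISK_IOPS
        return usable_tb, read_iops

    for label, (tb, iops) in [
        ("1 x 24-disk raidz2", raidz2(1)),
        ("2 x 12-disk raidz2", raidz2(2)),
        ("4 x  6-disk raidz2", raidz2(4)),
        ("12 x 2-way mirrors", mirrors()),
    ]:
        print(f"{label}: ~{tb:.0f} TB usable, ~{iops} random-read IOPS")

Under these assumptions, one wide raidz2 keeps the most usable space but is
stuck near single-disk random-read IOPS, while narrower raidz2 vdevs or
mirrors trade usable capacity for IOPS, which is exactly the "lose almost
half the storage for reasonable IO speeds" tradeoff described above.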