On Mon, Jul 19, 2010 at 06:00:04PM -0700, Brent Jones wrote:
> On Mon, Jul 19, 2010 at 5:40 PM, Chad Cantwell <c...@iomail.org> wrote:
> > FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
> > correctly (fast) on my hardware, while both my compilations (snv 143, snv
> > 144) and also the Nexenta 3 RC2 kernel (134 with backports) are horribly
> > slow.
> >
> > I finally got around to trying Rich Lowe's snv 142 compilation in place of
> > my own compilation of 143 (and later 144, not mentioned below), and unlike
> > my own two compilations, his works very fast again on the same zpool (the
> > scrub average increased from the low 100s to over 400 MB/s within a few
> > minutes of booting into this copy of 142). I should note that since my
> > original message I also tried booting from a Nexenta Core 3.0 RC2 ISO,
> > after realizing it had zpool version 26 support backported into 134, and
> > it was in fact able to read my zpool despite the version upgrade. Running
> > a scrub from the F2 shell on the Nexenta CD was also slow, just like the
> > 143 and 144 that I compiled. So there seem to be two possibilities.
> > Either (and this seems unlikely) there is a problem introduced post-142
> > which slows things down, and it occurred in 143 and 144 and was brought
> > back to 134 with Nexenta's backports, or else (more likely) there is
> > something different or wrong with how I'm compiling the kernel that makes
> > the hardware not perform up to its specifications with a zpool, and
> > possibly the Nexenta 3 RC2 ISO has the same problem as my own compilations.
> >
> > Chad
> >
> > On Tue, Jul 06, 2010 at 03:08:50PM -0700, Chad Cantwell wrote:
> >> Hi all,
> >>
> >> I've noticed something strange in the throughput of my zpool between
> >> different snv builds, and I'm not sure if it's an inherent difference
> >> in the builds or a kernel parameter that differs between them. I've
> >> set up two similar machines and this happens with both of them. Each
> >> system has 16 2TB Samsung HD203WI drives (total) directly connected
> >> to two LSI 3081E-R 1068e cards with IT firmware, in one raidz3 vdev.
> >>
> >> In both computers, after a fresh installation of snv 134, the throughput
> >> is a maximum of about 300 MB/s during a scrub or something like
> >> "dd if=/dev/zero bs=1024k of=bigfile".
> >>
> >> If I bfu to snv 138, I then get throughput of about 700 MB/s with both
> >> a scrub and a single-threaded dd.
> >>
> >> I assumed at first this was some sort of bug or regression in 134 that
> >> made it slow. However, I've now also tested, from the fresh 134
> >> installation, compiling the OS/Net build 143 from the Mercurial
> >> repository and booting into it, after which the dd throughput is still
> >> only about 300 MB/s, just like snv 134. The scrub throughput in 143
> >> is even slower, rarely surpassing 150 MB/s. I wonder if the scrubbing
> >> being extra slow here is related to the additional statistics displayed
> >> during the scrub that didn't used to be shown.
> >>
> >> Is there some kind of debug option that might be enabled in the 134 build
> >> and persist if I compile snv 143, which would be off if I installed 138
> >> through bfu? If not, it makes me think that the bfu to 138 is changing
> >> the configuration somewhere to make it faster, rather than fixing a bug
> >> or toggling a debug flag. Does anyone have any idea what might be
> >> happening?
> >> One thing I haven't tried is bfu'ing to 138 and then, from that faster
> >> working snv 138, installing the snv 143 build, which might produce a 143
> >> that performs well if this is simply a configuration parameter. I'm not
> >> sure offhand whether installing source-compiled ON builds onto a bfu'd
> >> rpool is supported, although I suppose it's simple enough to try.
> >>
> >> Thanks,
> >> Chad Cantwell
>
> I'm surprised you're even getting 400 MB/s on the "fast" configurations
> with only 16 drives in a raidz3 configuration. To me, 16 drives in raidz3
> (a single vdev) would do about 150 MB/sec, as your "slow" speeds suggest.
>
> --
> Brent Jones
> br...@servuhome.net
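For context, the comparison discussed above boils down to a test along these
lines. This is only a rough sketch: the pool name "tank" and the file size are
placeholders, not taken from the setup described in the thread.

  # single-threaded sequential write, as in the dd test quoted above (~64 GB)
  dd if=/dev/zero of=/tank/bigfile bs=1024k count=65536

  # start a scrub and watch the reported rate settle over a few minutes
  zpool scrub tank
  zpool status tank      # re-run periodically; newer builds print scan statistics

  # check whether all 16 spindles are actually busy during the test
  iostat -xn 5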
With which drives and controllers? For a single dd thread writing a large file
to a new zpool from /dev/zero, in this configuration I can sustain over
700 MB/s for the duration of the process and can fill the ~26 TB of usable
space overnight. This is with two 8-port LSI 1068e controllers and no
expanders.

RAIDZ operates similarly to regular RAID: for sequential access you should get
striped speeds, minus any inefficiencies and the processing time for parity.
16 disks in raidz3 is 13 disks' worth of striping, so at ~700 MB/s I'm getting
about 50% efficiency after the parity calculations etc., which is fine with me
(rough arithmetic below). I understand that some people need higher-performance
random I/O to many places at once, and I think that is where more vdevs have an
advantage. Sequential read/write over a single vdev is actually quite good in
ZFS in my experience, and on par with or better than most hardware RAID cards,
so if you have hardware that works well in OpenSolaris and no bottlenecks in
the bus or CPU (I'm not sure how much CPU is needed for good ZFS performance,
but most of my OpenSolaris machines are Harpertown Xeons or better), you really
should be getting better performance than 150 MB/s.

Chad
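P.S. A back-of-the-envelope check on that ~50% figure. The per-disk rate below
is an assumption (roughly what a 2TB 7200rpm drive sustains sequentially), not
a number measured in this thread:

  disks=16; parity=3; per_disk=110                   # per_disk (MB/s) is an assumed value
  echo "$(( (disks - parity) * per_disk )) MB/s"     # ideal stripe ceiling: 13 * 110 = 1430
  echo "scale=2; 700 / $(( (disks - parity) * per_disk ))" | bc   # observed/ceiling, about half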