On Mon, Jul 19, 2010 at 5:40 PM, Chad Cantwell <c...@iomail.org> wrote:
> FYI, everyone, I have some more info here.  In short, Rich Lowe's 142 works
> correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
> and the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
>
> I finally got around to trying Rich Lowe's snv 142 compilation in place of
> my own compilation of 143 (and later 144, not mentioned below), and unlike
> my own two compilations, his works very fast again on the same zpool (the
> scrub average increased from the low 100s to over 400 MB/s within a few
> minutes of booting into this copy of 142).  I should note that since my
> original message I also tried booting from a Nexenta Core 3.0 RC2 ISO,
> after realizing it had zpool version 26 support backported into 134, and it
> was in fact able to read my zpool despite the version upgrade.  A scrub run
> from the F2 shell on the Nexenta CD was also slow, just like under the 143
> and 144 that I compiled.  So, there seem to be two possibilities.
> Either (and this seems unlikely) there is a problem introduced post-142 that
> slows things down, and it occurred in 143 and 144 and was brought back to
> 134 with Nexenta's backports, or else (more likely) there is something
> different or wrong in how I'm compiling the kernel that keeps the hardware
> from performing up to its specifications with a zpool, and possibly the
> Nexenta 3 RC2 ISO has the same problem as my own compilations.
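>
> For what it's worth, this is roughly how I'm comparing boot environments
> (the pool name "tank" below is just a stand-in for mine):
>
>   uname -v                 # confirm which bits are actually booted, e.g. snv_142
>   zpool get version tank   # confirm the pool really is at version 26
>   zpool scrub tank
>   zpool status tank        # the scrub rate settles after a few minutes
>
> The MB/s figures I'm quoting are that settled scrub rate.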
>
> Chad
>
> On Tue, Jul 06, 2010 at 03:08:50PM -0700, Chad Cantwell wrote:
>> Hi all,
>>
>> I've noticed something strange about the throughput of my zpool across
>> different snv builds, and I'm not sure if it's an inherent difference
>> between the builds or a kernel parameter that differs between them.
>> I've set up two similar machines and this happens with both of them.
>> Each system has sixteen 2TB Samsung HD203WI drives in a single raidz3
>> vdev, connected directly to two LSI 3081E-R (1068E) cards with IT firmware.
>>
>> In both computers, after a fresh installation of snv 134, throughput tops
>> out at about 300 MB/s during a scrub or a sequential write such as
>> "dd if=/dev/zero bs=1024k of=bigfile".
>>
>> If I bfu to snv 138, I then get throughput of about 700 MB/s with both a
>> scrub and a single-threaded dd.
>>
>> I assumed at first this was some sort of bug or regression in 134 that
>> made it slow.  However, starting from the same fresh 134 installation, I
>> have now also compiled OS/Net build 143 from the Mercurial repository and
>> booted into it, and the dd throughput is still only about 300 MB/s, just
>> like snv 134.  The scrub throughput in 143 is even slower, rarely
>> surpassing 150 MB/s.  I wonder if the extra-slow scrubbing is related to
>> the additional statistics displayed during the scrub that weren't shown
>> before.
>>
>> Is there some kind of debug option that might be enabled in the 134 build,
>> persist when I compile snv 143 on top of it, and be off if I installed 138
>> through bfu?  If not, it makes me think the bfu to 138 is changing the
>> configuration somewhere to make it faster, rather than fixing a bug or
>> toggling a debug flag.  Does anyone have any idea what might be happening?
>> One thing I haven't tried is bfu'ing to 138 and then installing the snv 143
>> build from that faster-working snv 138, which might produce a 143 that
>> performs well if this is simply a configuration parameter.  I'm not sure
>> offhand whether installing source-compiled ON builds from a bfu'd rpool is
>> supported, although I suppose it's simple enough to try.
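>>
>> In case it is a debug-vs-non-debug issue, these are the checks I'm planning
>> to run; I'm not certain of the exact strings, so treat them as guesses:
>>
>>   # DEBUG kernels should identify themselves in the boot banner
>>   grep DEBUG /var/adm/messages
>>
>>   # DEBUG kernels also enable kmem auditing by default (kmem_flags=0xf),
>>   # which by itself costs a lot of throughput; non-DEBUG should show 0
>>   echo "kmem_flags/X" | mdb -k
>>
>> On the build side, if I understand the nightly flags correctly, the stock
>> env file carries both -D (build DEBUG bits) and -F (skip the non-DEBUG
>> build) in NIGHTLY_OPTIONS, so a default source build produces only DEBUG
>> bits, whereas the bfu archives I've been using are presumably non-DEBUG.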
>>
>> Thanks,
>> Chad Cantwell

I'm surprised you're even getting 400 MB/s on the "fast" configurations with
only 16 drives in a raidz3 configuration.  I would expect 16 drives in raidz3
(a single vdev) to do about 150 MB/s, as your "slow" speeds suggest.

-- 
Brent Jones
br...@servuhome.net