Hi Steve,
Thanks for the thoughts - I think that everything you asked about is in
the original email - but for reference again, it's 151a (s11 express).
Are you really suggesting that, for a single-user system, I need 16GB of
memory just to get ZFS to write while it's reading? (And even then, that
would be contingent on getting repeat, cached hits in the ARC.) That's
hardly sensible, and anything but enterprise. I know I'm only talking
about my little baby box at the moment, but extend that to a large
database application and I'm seeing badness all round.
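For anyone who wants to check, something along these lines (the standard kstat arcstats counters, as far as I know) shows whether those repeat, cached hits are actually landing while the workload runs:

   # ARC hit/miss counters during the workload
   kstat -p zfs:0:arcstats:hits
   kstat -p zfs:0:arcstats:misses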
Worse - if I'm reading a 45GB contiguous file (say, HD video), the only
way the ARC will help me is if I have 64GB of memory and have read the
file before... especially if I'm reading it sequentially. That's
inconceivable!! (cue reference to The Princess Bride :). I'd also add
that for the most part 8GB is plenty for ZFS, and there are a lot of
Sun/Oracle customers using it now in LDOM environments where 8GB is just
great in the control/IO domain.
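As an aside, the dataset cache policy can at least be pointed away from those big sequential streams so they don't churn the ARC; a rough sketch of that (the dataset name is just an example):

   # keep only metadata in the ARC for the filesystem holding big sequential files
   zfs set primarycache=metadata tank/video
   zfs get primarycache tank/video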
I don't think trying to blame the system in this case is the right
answer. ZFS schedules the read/write activity itself, and to me it seems
it's simply not doing that job fairly.
I was suspicious of the impact the HP RAID controller was having - and
how it might be reacting to what's being pushed at it - so I re-created
exactly this problem on a different system with native, non-cached SATA
controllers. The issue is identical. (Though I have since determined
that my HP RAID controller is actually *slowing* my reads and writes to
disk! ;)
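For reference, the reproduction on the plain SATA box was nothing exotic; roughly this shape (paths and sizes are illustrative only), with iostat alongside to watch write throughput fall away while the read is in flight:

   # big sequential read and a sequential write at the same time
   dd if=/tank/video/big.ts of=/dev/null bs=1024k &
   dd if=/dev/zero of=/tank/scratch/out.dat bs=1024k count=20480 &
   iostat -xn 5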
Cheers!
Nathan.
On 14/02/2011 4:08 AM, gon...@comcast.net wrote:
Hi Nathan,
Maybe it is buried somewhere in your email, but I did not see what
zfs version you are using.
This is rather important, because the 145+ kernels work a lot better in
many ways than the early ones (say, 134-ish).
So whenever you are reporting ZFS issues, including something like
`uname -a` output to show the kernel rev is most useful.
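Something along these lines is plenty (pool/dataset names are just examples):

   uname -a                  # kernel rev
   cat /etc/release          # build, e.g. snv_151a
   zpool get version tank    # pool version
   zfs get version tank/fs   # filesystem version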
Writes starved by reads was a complaint against early ZFS; I certainly
do not see any evidence of this in the 145+ kernels.
There is a fair amount of tuning and configuration that can be done
(adding SSDs to your pool, ZIL vs. no ZIL, how caching is configured,
i.e. what to cache...).
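For example, roughly (device and dataset names are placeholders):

   # separate log device (slog) for synchronous writes
   zpool add tank log c4t1d0
   # L2ARC cache device to extend the read cache onto SSD
   zpool add tank cache c4t2d0
   # per-dataset control over what gets cached on the L2ARC
   zfs set secondarycache=metadata tank/db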
8 GB is not a lot of memory for ZFS; I would recommend double that.
If all goes well, most reads would then be satisfied from the ARC and
not interfere with writes.
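You can check how much of that memory the ARC is actually using, and cap it explicitly if need be; a rough sketch (the size is just an example, and /etc/system changes take effect after a reboot):

   # current ARC size and ceiling
   kstat -p zfs:0:arcstats:size
   kstat -p zfs:0:arcstats:c_max
   # in /etc/system, e.g. to allow the ARC up to 12 GB:
   # set zfs:zfs_arc_max = 0x300000000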
Steve