Hail, Caesar.

I've got a 10-disk RAID-Z2 backed by the 1.5 TB Seagate drives
everyone's so fond of. They've all received a firmware upgrade (the
sane one, not the one that caused your drives to brick if the internal
event log hit the wrong number on boot).
They're attached to an ARC-1280ML, a reasonably good SATA controller,
which has 1 GB of ECC DDR2 for caching purposes.

The machine hosting all of this is built around an nForce 4 chipset
[hahaha], with a dual-core 1.8 GHz Opteron and 4 GB of ECC RAM
onboard. No swap is enabled.

I'm running snv_112.

I expect to get very nice performance out of the above configuration,
both locally and remotely. And I do, sometimes. I get ~70 MB/s
sustained random reads and something to the tune of 40-50 MB/s random
writes.

But a lot of the time, small random reads and writes will block for
several seconds (!), while zpool iostat shows no particularly
exciting throughput (~20 MB/s of reads on any given disk).

I tried attaching a 4 GB USB flash device as a dedicated ZIL, which
improved performance a fair bit in the cases where it was already
performing well, and didn't help at all in the cases where it was
performing poorly.

I then tried attaching a 2 GB flash device as a cache device for the
pool, and it rapidly filled (obviously), but didn't appear to give any
interesting performance wins.

I then tried attaching a 500 GB 7200 RPM HD as a cache device
instead, and despite giving it several days to fill, the result was
the same as with the ZIL: marginal improvement in the cases that were
already fine, and no apparent help in the poor cases.

It's not that the system is under heavy CPU load (the load average
never exceeds 0.5) or memory pressure (600 MB free, +/- 50 MB).

I presume I'm missing something, but I have no idea what. Halp?

Thanks,
- Rich
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
