Robert Milkowski wrote:
Hello Matthew,

Thursday, August 10, 2006, 4:50:31 PM, you wrote:

MA> On Thu, Aug 10, 2006 at 11:48:09AM +0200, Robert Milkowski wrote:

MA> This test fundamentally requires waiting for lots of synchronous writes.
MA> Assuming no other activity on the system, the performance of synchronous
MA> writes does not scale with the number of drives; it scales with the
MA> drive's write latency.
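The point above can be seen outside ZFS with a small experiment. This is a hedged, generic sketch (not ZFS-specific): each write is followed by fsync(), so the caller waits on the drive's write latency for every operation, no matter how much raw bandwidth the storage has. The file name and sizes are arbitrary.

```python
# Sketch: measure per-write latency of synchronous (fsync'd) appends.
# Every write blocks until the drive acknowledges it, so throughput is
# bounded by write latency, not by the number of spindles.
import os
import time
import tempfile

def timed_sync_writes(path, n=50, size=4096):
    """Append n blocks of `size` bytes, fsync after each, return latencies."""
    buf = b"\0" * size
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        for _ in range(n):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # wait for stable storage, like a synchronous write
            latencies.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
    return latencies

with tempfile.TemporaryDirectory() as d:
    lats = timed_sync_writes(os.path.join(d, "sync_test.dat"))
    print("writes: %d, avg latency: %.6fs" % (len(lats), sum(lats) / len(lats)))
```

Total elapsed time is roughly n times the average latency; adding drives does not shorten the wait for any single fsync.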

MA> If you were to alter the test to not require everything to be done
MA> synchronously, then you would see much different behavior.

Does that mean that if, instead of creating one pool, I create two pools
with the same total number of disks, and run two tests at the same time,
each on its own pool, I should observe better scaling than with one pool
containing all the disks?


MA> Yes, but a better solution would be to use one pool with multiple
MA> filesystems.

MA> The intent log is per filesystem, so if you can break up your workload
MA> into sub-loads that don't depend on the others being on disk
MA> synchronously, then you can put each sub-load on a different filesystem,
MA> and it will scale approximately with the number of filesystems.
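In concrete terms, that setup might look like the following. This is an illustrative sketch only: the pool name (tank), device names, and filesystem names are made up, and the exact vdev layout is whatever suits your hardware.

```shell
# One pool, several filesystems -- each filesystem has its own intent
# log, so independent synchronous sub-loads can scale with the number
# of filesystems. All names below are examples.
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
zfs create tank/load1
zfs create tank/load2
zfs create tank/load3
# Then point each independent synchronous workload at its own
# filesystem: /tank/load1, /tank/load2, /tank/load3
```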

That is great news, as that is exactly what I'm doing right now (many
filesystems in a single pool) and plan to continue doing.

Thank you for the info.


btw: wouldn't it be possible to write the block only once (for synchronous
I/O) and then just point to that block instead of copying it again?

We already do that if the block is sufficiently large (currently >32KB).
We still have to write a log record as well, but many of those records
can be aggregated into the same log block if there are a lot of parallel
write threads.
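The aggregation idea can be sketched in miniature. This is a hedged illustration of group commit in general, not the actual ZIL code: several writer threads queue log records, and a single committer drains whatever is pending and "flushes" the whole batch as one log block, so concurrent records share a single device write.

```python
# Sketch of group commit: writers queue records and block until flushed;
# the committer flushes whole batches, so many records share one
# "log block" write. Names (log_q, flushes) are illustrative.
import queue
import threading

log_q = queue.Queue()
flushes = []  # each entry models one log block holding a batch of records

def writer(i):
    done = threading.Event()
    log_q.put((b"record-%d" % i, done))
    done.wait()  # synchronous semantics: return only after the flush

def committer(expected):
    seen = 0
    while seen < expected:
        batch = [log_q.get()]        # block for the first pending record
        while not log_q.empty():     # then drain everything else waiting
            batch.append(log_q.get())
        flushes.append([rec for rec, _ in batch])  # one block, many records
        for _, done in batch:
            done.set()               # wake every writer in the batch
        seen += len(batch)

writers = [threading.Thread(target=writer, args=(i,)) for i in range(8)]
commit_thread = threading.Thread(target=committer, args=(8,))
commit_thread.start()
for t in writers:
    t.start()
for t in writers:
    t.join()
commit_thread.join()

total = sum(len(b) for b in flushes)
print("records:", total, "flushes:", len(flushes))
```

With eight parallel writers the committer typically needs far fewer than eight flushes, which is the latency win the aggregation buys.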


IIRC the ReiserFS folks are trying to implement something like this.



--

Neil
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss