Hi,

On Dec 17, 2007 10:37 AM, Roch - PAE <[EMAIL PROTECTED]> wrote:
>
>
> dd uses a default block size of 512B. Does this map to your
> expected usage? When I quickly tested the CPU cost of small
> reads from cache, I did see that ZFS was more costly than UFS
> up to a crossover between 8K and 16K. We might need a more
> comprehensive study of that (data in/out of cache, different
> recordsize & alignment constraints). But for small syscalls,
> I think we might need some work in ZFS to make it CPU
> efficient.
>
> So first, does small sequential write to a large file
> match an interesting use case?

The pool holds home directories, so small sequential writes to one
large file are indeed one of several interesting use cases.
Performance is equally disappointing for workloads with many small
files, such as compiling projects checked out from svn repositories.
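For what it's worth, a quick way to probe the per-syscall crossover you
mention is to write the same total amount of data at several dd block
sizes and compare the reported rates. This is only a sketch; the 8 MiB
total, the block sizes, and the temp-file location are my assumptions,
not anything measured in this thread.

```shell
#!/bin/sh
# Sketch: write 8 MiB at several block sizes so the data volume is
# constant and only the syscall count varies (16384 writes at 512B
# vs 512 writes at 16K). Sizes and the temp file are illustrative.
OUT=$(mktemp)
for bs in 512 8192 16384; do
  count=$((8 * 1024 * 1024 / bs))   # syscalls needed for 8 MiB at this bs
  echo "bs=$bs count=$count"
  # dd reports records copied (and, on GNU dd, a throughput line) on stderr
  dd if=/dev/zero of="$OUT" bs="$bs" count="$count" 2>&1 | tail -1
done
rm -f "$OUT"
```

Running this against a file in the pool (rather than mktemp's default
location) and wrapping each dd in time(1) would show whether CPU cost,
not bandwidth, dominates at the small block sizes.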

Cheers,
  Frank
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
