long topic, it was discussed in a previous thread.
in relation to this, there is
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647
which may be of interest.
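
as a rough illustration of the application-level large-block I/O Ryan
describes below: this is only a sketch (file names and buffer handling
are placeholders, the 1 MB figure just matches his description), and
note that ZFS today will still split each write into recordsize blocks
of at most 128k, which is exactly why a larger recordsize is being
asked for.

    /*
     * sketch of large-block staging I/O: copy data in 1 MB chunks so
     * the staging process issues few large requests instead of many
     * small ones.  file names are placeholders; error handling is
     * minimal.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define CHUNK (1024 * 1024)   /* 1 MB per read/write, per the thread */

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
            return 1;
        }

        int in = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            return 1;
        }

        char *buf = malloc(CHUNK);
        if (buf == NULL) {
            perror("malloc");
            return 1;
        }

        ssize_t n;
        while ((n = read(in, buf, CHUNK)) > 0) {
            /* regular-file writes normally complete fully; treat a
             * short write as an error to keep the sketch simple */
            if (write(out, buf, (size_t)n) != n) {
                perror("write");
                return 1;
            }
        }
        if (n < 0)
            perror("read");

        free(buf);
        close(in);
        close(out);
        return (n < 0) ? 1 : 0;
    }

the point is only that one 1 MB read/write pair replaces many small
syscalls; whether the array actually sees 1 MB I/Os still depends on
the recordsize cap, hence the question.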

selim

On 9/4/07, Matty <[EMAIL PROTECTED]> wrote:
> Are there any plans to support record sizes larger than 128k? We use
> ZFS file systems for disk staging on our backup servers (compression
> is a nice feature here), and we typically configure the disk staging
> process to read and write large blocks (typically 1MB or so). This
> reduces the number of I/Os that take place to our storage arrays, and
> our testing has shown that we can push considerably more I/O with 1MB+
> block sizes.
>
> Thanks for any insight,
> - Ryan
> --
> UNIX Administrator
> http://prefetch.net
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
