On Tue, Jun 8, 2010 at 12:04 PM, Joe Auty <j...@netmusician.org> wrote:
>
>   Cool, so maybe this guy was going off of earlier information? Was there
> a time when there was no way to enable cache flushing in Virtualbox?
>

The default is to ignore cache flushes, so he was correct for the default
setting. The IgnoreFlush setting has existed since 2.0 at least.
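
If you want the guest's flushes honored instead, it can be turned off
per virtual disk. A sketch, assuming an IDE-attached disk and a
placeholder VM name ("MyVM"); SATA disks use the ahci device path
instead of piix3ide:

  # Setting IgnoreFlush to 0 passes cache flushes through to the host:
  VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0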

> My mistake, yes I see pretty significant iowait times on the host... Right
> now "iostat" is showing 9.30% wait times.
>

That's not too bad, but not great. Here's the output from a system at work:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.99    0.00    3.98   92.54    0.50    0.00

The problem is that I/O gets bursty, so you'll have good speeds for the
most part, followed by some large waits. Small writes to the vmdk will
have the worst performance, since the whole 128k record has to be read,
modified, and written back out for each small change. Because your guest
has /var on the vmdk, there are constant small writes going to the pool.
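
If you want to watch the burstiness yourself, sampling the pool at a
short interval makes it obvious. A sketch, assuming your pool is named
"tank":

  # Print pool-wide bandwidth and operation counts every 5 seconds;
  # bursts show up as quiet samples alternating with big write spikes.
  zpool iostat tank 5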


> Do you have a recommendation for a good size to start with for the dataset
> hosting VMDKs? Half of 128K? A third?
>

There are inherent tradeoffs to using smaller blocks, notably more
overhead for checksums.

zvols use an 8k volblocksize by default, which is probably a decent size.
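
Something like this would do it ("tank/vmdk" is a made-up dataset name).
Keep in mind recordsize only applies to blocks written after the change,
so an existing vmdk keeps its 128k records until the file is rewritten:

  # Use a smaller recordsize on the dataset holding the vmdk files:
  zfs set recordsize=8k tank/vmdk

  # Rewrite an existing vmdk (with the guest shut down) so it picks
  # up the new recordsize:
  cp /tank/vmdk/guest.vmdk /tank/vmdk/guest.vmdk.new
  mv /tank/vmdk/guest.vmdk.new /tank/vmdk/guest.vmdk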


> In general large files are better served with smaller recordsizes, whereas
> small files are better served with the 128k default?
>

Files that take random small writes in the middle of their data, such as
database files and vmdk files, will have poor performance with a large
recordsize. Other than specific cases like the one you've run into, you
shouldn't ever need to adjust the recordsize.
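
For database files, the usual trick is to match the recordsize to the
database's page size. A hypothetical example (InnoDB's default page
size is 16k; the dataset name is made up):

  # Create a dataset sized to the database's native page size:
  zfs create -o recordsize=16k tank/mysql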

-B

-- 
Brandon High : bh...@freaks.com
