Brandon High wrote:
On Tue, Jun 8, 2010 at 10:33 AM, besson3c <j...@netmusician.org> wrote:

On heavy reads or writes (writes seem to be more problematic), my load averages on my VM host shoot up and overall performance is bogged down. I suspect that I do need a mirrored SLOG, but I'm wondering what the best way is

The load that you're seeing is probably iowait. If that's the case,
it's almost certainly the write speed of your pool. A raidz will be
slow for your purposes, and adding a separate log device (slog) may
help. There's been lots of discussion in the archives about how to
determine whether a log device will help, such as using zilstat or
disabling the ZIL and testing.
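
For example, a rough sketch of what I mean (the zilstat invocation and
the tunable name here are from memory, so double-check both):

  # watch ZIL activity while your workload runs; sustained non-zero
  # ops/s and bytes/s suggest a slog would help
  ./zilstat.ksh 1 10

  # for a throwaway comparison only; running without the ZIL risks
  # losing recent synchronous writes on a crash, so never leave this
  # set in production. Add to /etc/system, reboot, rerun the workload:
  set zfs:zil_disable = 1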

You may want to set the recordsize smaller for the datasets that
contain vmdk files as well. With the default recordsize of 128k, a 4k
write by the VM host can result in 128k being read from and written to
the dataset.
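
For instance (the dataset name is just an example, and the change only
affects files written after you set it):

  # check the current value, then try a smaller recordsize
  zfs get recordsize tank/vmdk
  zfs set recordsize=8K tank/vmdk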

What VM software are you using? There are a few knobs you can turn in
VBox which will help with slow storage. See
http://www.virtualbox.org/manual/ch12.html#id2662300 for instructions
on reducing the flush interval.
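
If I'm remembering the syntax from that chapter correctly, it's an
extradata key along these lines (the VM name, controller, and interval
are just examples):

  # flush after roughly every 1 MB written to IDE controller 0, disk 0
  VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/FlushInterval" 1000000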

-B

I'd love to use VirtualBox, but right now it's giving me kernel panics on the host while starting up VMs (this is 3.2.2 commercial, which I'm evaluating; I haven't been able to compile OSE on the CentOS 5.5 host yet). Those panics are obviously bothersome, so I'm exploring continuing to use VMware Server and seeing what I can do on the Solaris/ZFS side of things. I've also read the following on a VMware forum, although I don't know if it's correct? It was in response to my asking why I don't seem to have these same load average problems running VirtualBox:

The problem with the VirtualBox comparison is that caching is known to be broken in VirtualBox (it ignores cache flushes, which, by continuing to cache, can "speed up" I/O at the expense of data integrity or loss). This could be playing in your favor from a performance perspective, but it puts your data at risk. Disabling disk caching altogether would be a big hit on the VirtualBox side... Neither solution is ideal.
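
If that quote is accurate, the same ch12 page you linked appears to
have a knob for making VirtualBox honor flushes rather than ignore
them; assuming I'm reading it correctly, something like this (the VM
name and disk are just examples):

  # pass guest flush requests for IDE controller 0, disk 0 through to
  # the host instead of ignoring them (safer, presumably slower)
  VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0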

If that's incorrect and I can get VirtualBox working stably, I'm happy to switch to it. It definitely performed better prior to my panics, and others on the internet seem to agree that it outperforms VMware products in general. I'm definitely not opposed to this idea.

I've actually never seen much, if any, iowait (that's %w in iostat output, right?). I've run the zilstat script and am happy to share its output with you if you wouldn't mind taking a look? I'm not sure I'm understanding the output correctly...
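
For reference, this is how I've been watching for waits (assuming %w
is even the right column to be watching):

  # Solaris extended device stats, one line per device, 5s samples
  iostat -xn 5
  # %w = percent of time the queue had transactions waiting
  # %b = percent of time the device was busy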

As far as the recordsizes, the evil tuning guide says this:

Depending on workloads, the current ZFS implementation can, at times, cause much more I/O to be requested than other page-based file systems. If the throughput flowing toward the storage, as observed by iostat, nears the capacity of the channel linking the storage and the host, tuning down the zfs recordsize should improve performance. This tuning is dynamic, but only impacts new file creations. Existing files keep their old recordsize.

Will this tuning have any impact on my existing VMDK files? Can you kindly tell me more about this: how can I check my current recordsize and experiment with the setting if it will help? Would enabling ZFS compression on the share hosting my VMDKs help too? It is currently disabled there.
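
Is it just a matter of something like the following? I'm guessing at
my dataset name here, and since the guide says existing files keep
their old recordsize, I assume I'd have to copy the VMDKs afterward to
pick up the new value:

  # see what I have now
  zfs get recordsize,compression tank/vms

  # smaller records to better match guest I/O, light compression
  zfs set recordsize=8K tank/vms
  zfs set compression=lzjb tank/vms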

This ZFS server hosts regular data shares in addition to the VMDKs. All user data on my VM guests that is subject to change lives on a ZFS share; only the OS and basic OS applications are stored in the VMDKs.



--
Joe Auty, NetMusician
NetMusician helps musicians, bands and artists create beautiful, professional, custom designed, career-essential websites that are easy to maintain and to integrate with popular social networks.
www.netmusician.org
j...@netmusician.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
