On 5/2/2011 5:09 AM, Adam Vande More wrote:
On Mon, May 2, 2011 at 12:54 AM, John <aqq...@earthlink.net> wrote:
On both the FreeBSD host and the CentOS host, the copy takes only 1
second, as tested earlier. In fact, the classic "dd" test is slightly
faster on the FreeBSD host than on the CentOS host.
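For reference, the "classic dd test" mentioned here is usually something along
these lines; the size, block size and path are arbitrary examples:

    # Sequential write: stream 1 GB of zeroes to a test file;
    # dd reports throughput when it finishes.
    dd if=/dev/zero of=/tmp/ddtest bs=1m count=1024
    # Sequential read of the same file (use bs=1M on Linux instead of bs=1m):
    dd if=/tmp/ddtest of=/dev/null bs=1m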
The storage controller I chose for the VirtualBox guest is a SAS controller. I
found that "Use Host I/O Cache" was disabled by default, so I enabled it and
rebooted the guest. Now the copy on the guest takes 3 seconds. Still,
that's clearly slower than 1 second.
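For what it's worth, the same setting can also be toggled from the host with
VBoxManage while the VM is powered off; the VM and controller names below are
placeholders for whatever your configuration uses:

    # Enable host I/O caching on the guest's SAS controller.
    # "centos-guest" and "SAS Controller" are example names.
    VBoxManage storagectl "centos-guest" --name "SAS Controller" --hostiocache on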
Any other things I can try? I love FreeBSD and hope we can sort this out.
Your FreeBSD host/guest results seem roughly consistent with what I would
expect, since VM block I/O isn't really that great yet; however, the results in
your Linux VM seem too good to be true.
We know that Linux likes to run with the condom off on the filesystem
(async writes), just because it helps them win all the know-nothing
benchmark contests in the ragazines out there, and FreeBSD does not
because its users want to have an intact filesystem in case the
system crashes or loses power. I'm guessing this is the central issue
here.
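A quick way to take write-back caching out of the comparison on both guests is
to include an explicit flush in the timed run; the size and path here are
arbitrary:

    # Time a 1 GB write including the final flush, so buffered (async)
    # completion can't hide how long the data really takes to reach the disk.
    # (bs=1m is the FreeBSD spelling; use bs=1M inside the Linux guest.)
    /usr/bin/time sh -c 'dd if=/dev/zero of=/tmp/synctest bs=1m count=1024 && sync'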
Have you tried powering off the
Linux VM immediately after the cp exits and md5'ing the two files? This
will confirm that your writes completed successfully.
That isn't going to do anything, because the VM will take longer than 3
seconds to close, and if it's shut down gracefully the VM won't close
until the writes are all complete.
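For completeness, the check Adam describes amounts to roughly the following;
the file paths and VM name are made up:

    # Inside the Linux guest: copy the file...
    cp /data/bigfile /data/bigfile.copy
    # ...then, from the host, cut the power before the guest can flush its caches:
    VBoxManage controlvm "centos-guest" poweroff
    # After booting the guest again, compare checksums:
    md5sum /data/bigfile /data/bigfile.copy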
Also, it may be that this type of operation really is faster on your Linux
setup, but is it representative of the primary workload? If not, you'll
probably want to arrange some type of benchmark that mimics real-world I/O
flows, as hypervisor I/O performance varies across workloads (for example, it
used to be true that KVM wasn't very good at concurrent I/O; I'm not sure
whether it's better now).
You should also ensure caches are not affecting the outcome of consecutive
benchmark runs. Unmounting and remounting the filesystem on your SAS
controller is the quickest way to do this, but depending on your setup you
may need to reboot. If you do want to test with caching involved, you
should discard the initial run from your results. Use something like
/usr/bin/time to get consistent measurements of how long operations take.
Ensure each setup is running in a minimal configuration so there is less
resource contention.
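A minimal sketch of that procedure on the FreeBSD side, assuming the filesystem
on the SAS disk is mounted at /mnt/sas (the device and paths are examples):

    # Flush the buffer cache for that filesystem between runs:
    umount /mnt/sas && mount /dev/da1p1 /mnt/sas
    # Time the copy itself rather than eyeballing wall-clock time:
    /usr/bin/time -h cp /mnt/sas/bigfile /mnt/sas/bigfile.copy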
Also, single runs are a terrible way to benchmark things. You *need* multiple
runs to ensure accuracy. ministat(1) is a tool in the base system that helps
with this.
Here is more detail:
http://ivoras.sharanet.org/blog/tree/2009-12-02.using-ministat.html
http://lists.freebsd.org/pipermail/freebsd-current/2011-March/023435.html
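A rough example of feeding several timed runs to ministat(1), reusing the
placeholder paths from the sketch above:

    # Five cold-cache copies; keep only the elapsed seconds for ministat.
    for i in 1 2 3 4 5; do
        umount /mnt/sas && mount /dev/da1p1 /mnt/sas
        /usr/bin/time -p cp /mnt/sas/bigfile /mnt/sas/bigfile.copy 2>&1 \
            | awk '/^real/ { print $2 }' >> runs.txt
    done
    ministat runs.txt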
However, that tool doesn't mimic real-world behavior either. The only
real way to test is to run both systems in production and see what
happens.
I would not choose one system over the other based on a 2-second
difference in a single large-file write. We would have to assume he has an
application that makes hundreds to thousands of large-file writes before
this discrepancy would actually make a difference.
Ted