On 5/2/2011 7:39 PM, Adam Vande More wrote:
On Mon, May 2, 2011 at 4:30 PM, Ted Mittelstaedt <t...@mittelstaedt.us> wrote:

    that's sync within the VM.  Where is the bottleneck taking place?  If
    the bottleneck is hypervisor to host, then the guest's write may put
    all its data into a memory buffer in the hypervisor, which then
    writes it out to the filesystem more slowly.  In that case, killing
    the guest without killing the VM manager will let the buffer finish
    emptying, since the hypervisor isn't actually being shut down.


No, the bottleneck is the emulated hardware inside the VM process
container.  This is easy to observe: just start a bound process in the
VM and watch top on the host side.  Also, the hypervisor uses the
native host IO drivers, so there's no reason for it to be slow.  Since
it's the emulated NIC that is the bottleneck, there is nothing left to
issue the write.  Further empirical evidence for this can be seen by
watching gstat on a VM running with md- or ZVOL-backed storage.  I
already use ZVOLs for this, so it was pretty easy to confirm that no IO
occurs when the VM is paused or shut down.

    Is his app ever going to face the extremely bad scenario, though?


The point is that it should be relatively easy to induce the patterns
you expect to see in production.  If you can't, I would consider that a
problem.  Testing out theories (performance-based or otherwise) on a
production system is not a good way to keep the continued faith of your
clients when that system is a mission-critical one.  Maybe throwing
more hardware at a problem is the first line of defense for some
companies; unfortunately, I don't work for them.  Are they hiring? ;)
I understand the logic of such an approach and have even argued for it
occasionally.  Unfortunately, payroll is already in the budget; extra
hardware is not, even if it would be a net savings.
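On inducing those patterns: even something crude run inside the test
guest, watched with gstat on the host, tells you a lot.  A throwaway
loop like this one works (the file count and sizes are made up; tune
them toward whatever the real application actually does):

  # kick off a handful of parallel sequential writers in the guest
  i=1
  while [ $i -le 8 ]; do
      dd if=/dev/zero of=/var/tmp/load.$i bs=64k count=2000 &
      i=$((i + 1))
  done
  wait
  rm -f /var/tmp/load.*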


Most, if not all, sites I've ever been in that run Windows servers
behave in this manner.  At most of these sites the SOP is to "prove"
that the existing hardware is inadequate by loading whatever Windows
software management wants loaded, then letting the users on the network
scream about it.  Then money magically frees itself up where there
wasn't any before.  Of course, management will never blame the OS for
the slowness, always the hardware.

Understand I'm not advocating this, just making an observation.

Understand that I'm not against testing, but I've seen people get so
engrossed in constructing test suites that they end up wasting a lot
of money.  I would have to ask: how much time did the OP who started
this thread spend building two systems, one Linux and one BSD?  How
much time has he spent trying to get the BSD system to "work as well
as" the Linux system?  Wouldn't it have been cheaper for him not to
spend that time and just put the Linux system into production?

Ted
_______________________________________________
freebsd-emulation@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-emulation
To unsubscribe, send any mail to "freebsd-emulation-unsubscr...@freebsd.org"
