> Welcome to the world of RAID and fuse based file systems.

Yeah, I don't expect too much from FUSE, especially the 2.6
version that I'm running.  That's why I want to try out Solaris :)

> You don't expect a guest to go faster than the host, are you??

My thinking was this: to access a file through FUSE, you visit a
directory, your program makes a kernel call to stat/open the file, the
kernel calls out to the FUSE application to get the info, FUSE calls
back to the OS to look at the disk (all 8 of them, I guess), then FUSE
returns data to the OS, and the OS returns data to the caller.
zfs-fuse doesn't have terribly good caching yet (it's gotten a lot
better recently, but I'm not running those builds right now), so
accessing a file involves a lot of context switching.  Under a VM, the
software using the disks (VirtualBox) has exclusive access to them.
When the guest OS needs to access the drives, it calls into kernel
space to do that, but Solaris has caching and I have tons of RAM, so
disk accesses should be minimized.  I wouldn't expect streaming writes
through the VM to be much worse than streaming writes through any
other userspace application.

I can write a few lines of C that write data to the drives at a
combined speed of 800MB/s.  ZFS is a lot more complicated than that,
and OS virtualization obviously has overhead, but I don't see why
VirtualBox can't achieve streaming write speeds within an order of
magnitude of the raw drive performance.

> Welcome to virtualization. If performance is an issue, you shouldn't be
> virtualizing. I think you're doing reasonably well at 1/3 of host
> performance.

I don't think it's really fair to call FUSE performance "host
performance".  I'd say the host performance of accessing the raw
drives is 100MB/s for each drive, simultaneously.  FUSE is slow for
its own reasons, so using it as a benchmark isn't very productive.
There is a huge difference between the raw disk capability (800MB/s)
and what VirtualBox is getting (30MB/s).

> The only VM with reasonable disk I/O performance I've found is KVM. I
> did a basic test with XP guest load times (fresh, bare install on each
> VM), 4GB RAM host 1GB guest (so the entirety of what is accessed at
> startup stick in host's cache). The first time they start they'll hit
> all the files in the image required to start, and the second time it'll
> all come from the host's cache. Thus, the 2nd pass should give a
> reasonable indication of relative overheads of the VM's disk I/O
> performance. On the 2nd startup, times to login screen:

I'd love to use KVM, but qemu's SCSI controller isn't compatible with
Solaris, and there isn't a balloon driver for Solaris either.  IDE only
gives 4 devices, so there doesn't appear to be any way for KVM to
expose 8 devices to Solaris.  I did give that path a try, though :)

> VMware Player/Workstation: 40s
> VirtualBox: 20s
> KVM: 6s (yes, that's six seconds, not a typo)

Those are pretty impressive numbers for vbox and kvm.

> A virtualized solution will always go slower than the underlying host.

Understood.  It's the order of the slowdown that concerns me.

_______________________________________________
vbox-users mailing list
[email protected]
http://vbox.innotek.de/mailman/listinfo/vbox-users
