> I could argue they are unimpressive numbers for vmware and vbox.

Sure, either way.  I was impressed that the open-source upstarts were
beating the established leader so handily.

> Well, assuming kvm has zero overhead (which would be optimistic at best, but
> humour me for the sake of the argument), that would put vbox overhead at
> 3.5x over bare metal, in the best possible case imaginable. So, let's say
> 30MB/s vs 100MB/s. The difference on top of that is a question of fuse
> overhead vs. kvm overhead. Also consider that ZFS is fairly complex, which
> is likely to affect performance.
>
> For example, if you are using the RAID feature, that'll slow things down
> substantially. On my home-grown storage server I can get about 600-700MB/s
> combined raw, but with software RAID6 and optimally aligned ext3 I only see
> about 110MB/s on linear reads. The CPU isn't bottlenecking it, either (low
> CPU usage, and checksumming benchmarks at over 6GB/s). Some of it at least
> is likely down to controller switching latencies.
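The back-of-envelope arithmetic above can be sketched out explicitly. The numbers below are just the illustrative figures from the quoted message (100MB/s for kvm assumed at zero overhead, 30MB/s for vbox), not fresh measurements:

```python
# Illustrative only: figures taken from the discussion above, not measured here.

def slowdown_factor(baseline_mbps, observed_mbps):
    """How many times slower the observed throughput is vs. the baseline."""
    return baseline_mbps / observed_mbps

# Optimistic assumption: kvm has zero overhead, so it stands in for bare metal.
kvm_mbps = 100
vbox_mbps = 30

print(slowdown_factor(kvm_mbps, vbox_mbps))  # ~3.3x, roughly the "3.5x" cited
```

Any gap beyond that factor would then come down to fuse overhead vs. kvm overhead, as the quoted message says.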

I suppose that makes sense.  I have never tried anything other than
zfs (RAID6) on my server.  Once btrfs RAID6 is ready, I'll probably
reformat to try that out.  I have about 1.5TB on my server, though,
so restoring to the drives after a reformat is a bit of a pain.

> I'm not saying improving on that is impossible, but your figures seem to
> already be in the right ballpark.

Ok, sounds fair.

_______________________________________________
vbox-users mailing list
[email protected]
http://vbox.innotek.de/mailman/listinfo/vbox-users
