Ivan Voras wrote:
> Hi,

Thanks for the reply.

> Sebastiaan van Erk wrote:
>> Hi,
>>
>> [snip]
>>
>> n (clients)           1               2               4
>> FreeBSD         12.0009         13.6348         12.9402         (MB/s)
>> Linux           376.145         651.314         634.649         (MB/s)
>>
>> Both virtual machines run dbench 3.04, and the results are extremely
>> stable over repeated runs.
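
For reference, with dbench 3.04 the number of client processes is the
positional argument, so the three columns above correspond to runs
along these lines:

  # number of dbench client processes matches the column headers
  dbench 1
  dbench 2
  dbench 4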

> VMware has many optimizations for Linux that are not used with
> FreeBSD. VMI, for example, makes the Linux guest paravirtualized, and
> then there are special drivers for networking, the VMotion driver
> (which probably doesn't contribute much to performance), and so on;
> Linux is in any case much better tested and supported.

VMI/paravirtualization is not enabled for this Linux guest, and neither
is VMotion. Networking is performing extremely well (see below).

> If VMware allows, you may try changing the type of the controller (I
> don't know about ESXi, but VMware Server supports LSI or BusLogic
> SCSI emulation) or switch to ATA emulation and try again.
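
For reference, on VMware products that expose a plain .vmx file the
controller type is selected roughly like this (key names assumed to
carry over to ESXi; check the docs for your version):

  # .vmx snippet; "buslogic" and "lsilogic" are the two SCSI choices
  scsi0.present    = "TRUE"
  scsi0.virtualDev = "buslogic"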

I tried this, and it has no significant effect. Just for completeness, here's the relevant output of dmesg:

bt0: <Buslogic Multi-Master SCSI Host Adapter> port 0x1060-0x107f mem 0xf4810000-0xf481001f irq 17 at device 16.0 on pci0
bt0: BT-958 FW Rev. 5.07B Ultra Wide SCSI Host Adapter, SCSI ID 7, 192 CCBs
bt0: [GIANT-LOCKED]
bt0: [ITHREAD]

da0 at bt0 bus 0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 40.000MB/s transfers (20.000MHz DT, offset 15, 16bit)
da0: 8192MB (16777216 512 byte sectors: 255H 63S/T 1044C)

The transfer rate for dbench 1 is 15.0118 MB/s.

> A generic optimization is to reduce kern.hz to something like 50, but
> it probably won't help your disk performance.

I already had kern.hz reduced (to 100 rather than 50), but it does
nothing for the disk performance.
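
For completeness, kern.hz is a boot-time tunable, so on this guest it
is set in /boot/loader.conf:

  # /boot/loader.conf
  kern.hz="100"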

> As for unixbench, you need to examine and compare each microbenchmark
> result individually before drawing a conclusion.

Yes, I realize that. However, the dbench result is my first priority;
when (if) that is fixed, I'll run unixbench again and see what my next
priority is.

(However, just to give you an idea, I attached the basic unixbench
5.1.2 outputs. The CPU info for FreeBSD is "fake": unixbench does a
cat /proc/cpuinfo, so I removed the /proc/ part from the path and
copied the output obtained under Linux into the resulting file.)
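
In shell terms, the workaround was roughly:

  # on the Linux guest: capture the CPU info once
  cat /proc/cpuinfo > cpuinfo
  # then copy that file next to the unixbench scripts on the FreeBSD
  # guest and point the script at ./cpuinfo instead of /proc/cpuinfo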

Finally, I also ran some network benchmarks, such as netio, and tested
VM-to-VM communication between *different* ESXi machines connected via
gigabit Ethernet; it achieved more than 100 MB/s of throughput.
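
From memory, the netio runs were along these lines (exact flags may
differ; check netio's usage output):

  # on one VM: start the server side in TCP mode
  netio -s -t
  # on the other VM: run the TCP test against it (address is an example)
  netio -t 192.168.1.10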

Since CPU speed and network I/O are doing just fine, I'm guessing this
is a pure disk (driver?) issue. However, to go into production with
FreeBSD I *must* be able to fix it.

Note also the discrepancy: 12 MB/s vs. 350 MB/s on disk access! Even my
lousy home machine (running FreeBSD) is 5 times faster, at 60 MB/s,
while the ESXi machine has extremely fast disks in a RAID10
configuration.
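
For anyone who wants to reproduce this without dbench, a plain
sequential dd run isolates raw disk throughput (the file path is just
an example):

  # write and then read back a 1 GB test file in 1 MB blocks
  dd if=/dev/zero of=/var/tmp/ddtest bs=1m count=1024
  dd if=/var/tmp/ddtest of=/dev/null bs=1m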

Any ideas are welcome!

Regards,
Sebastiaan
