Hi,

I want to deploy a production FreeBSD web site (database cluster, Apache cluster, IP failover using CARP, etc.); however, I'm running into painful disk I/O throughput problems that currently make the project unviable. I've done some rudimentary benchmarking of two identically configured virtual machines (2 vCPUs, 512 MB memory, 8 GB disk), one installed with FreeBSD 7.1-amd64 and one with Ubuntu Linux 8.10-amd64. These are the results I'm getting with dbench <n>:

<n>             1               2               4
freebsd         12.0009         13.6348         12.9402         (MB/s)
linux           376.145         651.314         634.649         (MB/s)

Both virtual machines run dbench 3.04 and the results are extremely stable over repeated runs.
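
For reference, here is roughly how each run was done on both machines (dbench installed from the FreeBSD ports tree and the Ubuntu package respectively; the ports path is from memory, everything else was left at defaults):

# FreeBSD: install dbench from ports, then run with <n> client processes
cd /usr/ports/benchmarks/dbench && make install clean
dbench 1
dbench 2
dbench 4
# Ubuntu: apt-get install dbench, then the same invocations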

The virtual hardware detected by the FreeBSD machine is as follows:

mpt0: <LSILogic 1030 Ultra4 Adapter> port 0x1080-0x10ff mem 0xf4810000-0xf4810fff irq 17 at device 16.0 on pci0
mpt0: [ITHREAD]
mpt0: MPI Version=1.2.0.0

And:

da0 at mpt0 bus 0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 3.300MB/s transfers
da0: 8192MB (16777216 512 byte sectors: 255H 63S/T 1044C)

I've also run UnixBench (4.1 and 5.1.2), and on many of the tests the FreeBSD machine performs horribly compared to Linux, though my first guess is that it all comes back down to disk performance (on the CPU-only tests the results are about the same).
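
In case it matters, this is roughly how UnixBench was run on each VM (built from the unpacked source tarball with the stock configuration, nothing tuned):

# from the unpacked unixbench directory (may need gmake on FreeBSD)
make
./Run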

When I look at dmesg logs of da0 posted online via Google, they look more like this (much higher transfer rate, and SCSI-n with n > 2):

da0: <ATA GB0500C8046 HPG1> Fixed Direct Access SCSI-5 device
da0: 300.000MB/s transfers

Does anybody know how I can get proper performance for the drive under ESXi?

Regards,
Sebastiaan
