I need to benchmark the volume-read performance of an application running
in an instance, assuming extremely fast storage.

To simulate fast storage, I have an all-in-one (AIO) install of OpenStack
with local flash disks. Cinder LVM volumes are striped across three flash
drives (what I have in the present setup).
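(For reference, Cinder's LVM driver creates the logical volumes itself;
the plain-LVM equivalent of the striped layout described above would be
something like the following, with hypothetical volume-group and LV names
and an assumed 64 KiB stripe size:)

```shell
# Hypothetical names: "cinder-volumes" is the VG spanning the three
# flash PVs. -i 3 stripes the LV across three PVs; -I 64 sets a
# 64 KiB stripe size.
lvcreate -i 3 -I 64 -L 100G -n bench-vol cinder-volumes
```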

Since I am only interested in sequential-read performance, the "dd" utility
is sufficient as a measure.
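(For concreteness, the measurement is along these lines. A scratch file
is used here only so the example runs anywhere; for a real run, point IN
at the volume under test, e.g. /dev/mapper/&lt;vg&gt;-&lt;lv&gt; on the host or
/dev/vdb in the instance -- those paths are illustrative.)

```shell
# Scratch file stands in for the volume under test.
IN=./dd-scratch.img
dd if=/dev/zero of="$IN" bs=1M count=64 conv=fsync 2>/dev/null

# Large sequential reads. Against a real block device, add
# iflag=direct so the number reflects the storage rather than the
# page cache.
dd if="$IN" of=/dev/null bs=1M

rm -f "$IN"
```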

Running "dd" on the physical host against the Cinder-allocated volumes nets
~1.2 GB/s (roughly in line with expectations for the striped flash volume).

Running "dd" in an instance against the same volume (now attached to the
instance) nets ~300 MB/s, which is pathetic. (I was expecting 80-90% of the
raw host volume numbers, or better.) Upping read-ahead in the instance via
"hdparm" boosted throughput to ~450 MB/s. Much better, but still sad.
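(The read-ahead tweak was along these lines; /dev/vdb and the value 8192
are illustrative, not my exact device or setting. hdparm takes the value
in 512-byte sectors, and "blockdev --setra" is an equivalent knob:)

```shell
# Inside the instance; /dev/vdb is the attached Cinder volume here.
hdparm -a /dev/vdb             # show current read-ahead (sectors)
hdparm -a 8192 /dev/vdb        # raise it to 8192 sectors (4 MiB)
blockdev --setra 8192 /dev/vdb # equivalent via blockdev
```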

In the second measurement, the volume data passes through iSCSI and then
the QEMU hypervisor. I expected to lose *some* performance, but not more
than half!

Note that as this is an all-in-one OpenStack node, iSCSI is strictly local
and not crossing a network. (I did not want network latency or throughput
to be a factor in this first measurement.)

I do not see any prior mention of performance problems of this sort on the
web or in the mailing list archives. Possibly I missed something.

What sort of numbers are you seeing out of high-performance storage?

Is the *huge* drop in read-rate within an instance something others have
seen?
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev