Hi,

We currently have 4x QDR InfiniBand cards and switches as the interconnect
between the storage servers and the "workhorses". The storage servers have a
mixture of hard disks and SSDs. The arrays are split into volumes, which
are exported via SRP and/or iSER.
Both sides run Linux, kernel 3.2.8.
Now, I have run several (more like hundreds of) tests like this:

On the storage server:
dd if=/dev/zero of=volume bs=1M : ~1,600 MByte/s
cp file1 file2 : ~300 MByte/s (both files on the same volume)

On a workhorse:
dd if=/dev/zero of=volume bs=1M : ~1,100 MByte/s
cp file1 file2 : ~30 MByte/s (both files on the same volume)

I'm happy with the dd numbers, but throughput collapses to a fraction of
that whenever I copy with cp.
Even worse, the more cp processes (on different volumes) I start on
each workhorse, the lower the performance of each individual cp process.
For example:
5x cp on the storage server: >1,000 MByte/s (aggregated)
5x cp on a workhorse: 150 MByte/s (aggregated)
10x cp on a workhorse: still 150 MByte/s (aggregated)
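To rule out cp's small default copy buffer as the culprit, I compared cp against a dd copy at 1 MiB block size on the same filesystem. This is only a sketch: it creates a small temp file so it runs anywhere, but the real test should point SRC/DST at files on the SRP/iSER-backed mount.

```shell
#!/bin/sh
# Sketch: cp vs. dd-with-large-blocks on the same filesystem.
# Temp files are used here only so the sketch is self-contained;
# on the real setup, set SRC/DST to paths on the exported volume.
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/zero of="$SRC" bs=1M count=16 2>/dev/null  # 16 MiB test file

time cp "$SRC" "$DST"                                 # cp's own copy loop
time dd if="$SRC" of="$DST" bs=1M conv=fsync 2>/dev/null  # 1 MiB blocks, flushed

cmp "$SRC" "$DST" && echo "copy OK"
rm -f "$SRC" "$DST"
```

If dd at bs=1M copies much faster than cp on the workhorse, the problem is the request size/interleaving rather than the raw path.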

The filesystem on the volume is ext4.
I suspect cp generates a rather nasty interleaved read/write pattern
that breaks down under the additional latency introduced by InfiniBand
and the second block layer on the workhorse.
Am I just expecting too much here? What have other people seen in this
particular test (copying a file onto the same volume)?
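One way I try to test the interleaving hypothesis is to split the copy into a pure-read phase and a pure-write phase and see which half degrades over the extra block layer. Again a sketch: it uses temp files so it runs anywhere, but the interesting run is against a large file on the exported volume.

```shell
#!/bin/sh
# Sketch: isolate the read half and the write half of a copy.
# On the real setup, point F at a large file on the SRP/iSER volume.
F=$(mktemp); OUT=$(mktemp)
dd if=/dev/zero of="$F" bs=1M count=16 2>/dev/null    # seed test file

# Pure read path: file -> /dev/null
dd if="$F" of=/dev/null bs=1M 2>&1 | tail -n1

# Pure write path: /dev/zero -> file
dd if=/dev/zero of="$OUT" bs=1M count=16 conv=fsync 2>&1 | tail -n1

# Interleaved read+write on the same filesystem, as cp does
dd if="$F" of="$OUT" bs=1M conv=fsync 2>&1 | tail -n1

rm -f "$F" "$OUT"
```

If read-only and write-only each run near the dd numbers above but the interleaved copy collapses, that would point at the round-trip latency of alternating reads and writes rather than raw bandwidth.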


Conrad


_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/