Hi,

I am seeing horrible write latency with the following configuration:

 * protocol C (a minimal sketch of the resource config follows this list)
 * RAID5 on a 3ware 9750-4i SAS2 RAID controller
 * dedicated gigabit link between the two machines, no switch in between
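
For reference, the resource config is nothing exotic; here is a minimal
sketch of its shape (hostnames, devices, and addresses are placeholders,
not my exact values):

resource r0 {
  protocol C;                       # fully synchronous replication
  on nodeA {                        # placeholder hostname
    device    /dev/drbd1;
    disk      /dev/sda3;            # partition on the 3ware RAID5
    address   192.168.10.1:7789;    # dedicated gigabit link (placeholder IP)
    meta-disk internal;
  }
  on nodeB {                        # placeholder hostname
    device    /dev/drbd1;
    disk      /dev/sda3;
    address   192.168.10.2:7789;
    meta-disk internal;
  }
}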

This is what I can tell you:

 * iperf shows around 950 Mbit/sec -- sounds okay for gigabit ;)
 * throughput through the DRBD device is 70-80 MByte/sec -- sounds good to
me too (rough commands below).
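
For reference, those numbers came from commands along these lines (block
size, count, and the peer address here are approximate placeholders, not
the exact invocations):

iperf -s                                                      # on the peer
iperf -c 192.168.10.2                                         # on this node -> ~950 Mbit/sec
dd if=/dev/zero of=/dev/drbd1 bs=1M count=1000 oflag=direct   # -> 70-80 MByte/sec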

Now on to the latency tests. oflag=direct bypasses the page cache, so each of the 1000 512-byte writes is issued synchronously. First the backing device:

dd if=/dev/zero of=/dev/sda3 bs=512 count=1000 oflag=direct
512000 bytes (512 kB) copied, 0.0365779 seconds, 14.0 MB/s
(about 37 µs per write)

and on to the DRBD device:

dd if=/dev/zero of=/dev/drbd1 bs=512 count=1000 oflag=direct
512000 bytes (512 kB) copied, 9.651 seconds, 53.1 kB/s
(about 9.7 ms per write)

Sounded awfully slow to me, so I compared with a second DRBD device backed by a ramdisk:

dd if=/dev/zero of=/dev/drbd2 bs=512 count=1000 oflag=direct
512000 bytes (512 kB) copied, 0.166689 seconds, 3.1 MB/s
(about 0.17 ms per write)

Sounds waaaay better, though I can't tell whether that is as fast as it should be.

The RTT for the link is:
rtt min/avg/max/mdev = 0.096/0.166/0.207/0.043 ms
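
That is the summary line of a plain ping over the dedicated link, i.e.
something like (count and address are placeholders):

ping -c 100 192.168.10.2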

So let's summarize:

-> Harddisk speed shouldn't be a problem: ~37 µs per direct write to the backing device.
-> Network speed shouldn't be a problem either, as indicated by iperf and the RTT; the ramdisk-backed DRBD device does ~0.17 ms per write, which is almost exactly one average RTT, i.e. the protocol C floor.

-> Network and harddisk combined -> ~9.7 ms per write, more than 50x the RTT -> problems.

Any hints on how to debug this? I am grasping at straws here :/

Thanks in advance and regards,
Florian Apolloner
