> Dedicated replication link?
>
> Maybe the additional latency is all that kills you.
> Do you have non-volatile write cache on your IO backend?
> Did you post your drbd configuration settings already?
There is a dedicated 10GbE replication link between the two nodes, and the IO backend does have a write cache.

I have run some additional measurements with dd and oflag=direct (a sketch of the dd invocation is at the end of this mail).

From a remote host:
- drbd link enabled:  3 MBytes/s
- drbd link disabled: 9 MBytes/s

Locally on one of the machines:
- drbd link enabled:  24 MBytes/s
- drbd link disabled: 74 MBytes/s

Same machine, but on a partition without DRBD and LVM:
- 90 MBytes/s

This is our current drbd.conf:

global {
    usage-count yes;
}

common {
    syncer {
        rate 500M;
    }
}

resource lfs {
    protocol C;

    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }

    disk {
        on-io-error detach;
        fencing resource-only;
    }

    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }

    net {
        max-buffers 8000;
        max-epoch-size 8000;
    }

    on d1106i06 {
        device    /dev/drbd0;
        disk      /dev/sda4;
        address   192.168.2.1:7788;
        meta-disk internal;
    }

    on d1106i07 {
        device    /dev/drbd0;
        disk      /dev/sda4;
        address   192.168.2.2:7788;
        meta-disk internal;
    }
}

Thanks
Christoph
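P.S. For reference, the throughput numbers above come from dd runs along these lines; block size, count and target path are only illustrative here, not the exact values used:

    # write 1 GiB with direct IO (oflag=direct bypasses the page cache)
    # onto the drbd-backed filesystem
    dd if=/dev/zero of=/mnt/lfs/ddtest bs=1M count=1024 oflag=direct

    # remove the test file afterwards
    rm /mnt/lfs/ddtest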