On 2009-12-16 03:48, Jakov Sosic wrote:
> Hello to all!
> 
> I've found the bottlenecks, so let me start over.
> 
> I used 3ware controllers and RAID-10 over 4 SATA Barracudas. I've
> googled around and found that 3ware RAID performance really sucks for a
> controller of that type. I did have the write cache turned on, although
> I didn't have a BBU.
> 
> So first things first - I've switched from hardware RAID-10 to Linux
> software RAID-10. My performance increased noticeably - by almost 30%
> on all bonnie++ tests, and in one test even 400%. That was it - I was
> convinced. I've reconfigured both my drbd nodes to software RAID-10.

Neil, someone owes you a drink of your favorite beverage. :)
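For anyone wanting to reproduce a setup like the one quoted above, a
minimal mdadm sketch follows. The device names, layout, and chunk size
are assumptions for illustration, not from the original post:

```shell
# Sketch only -- device names (/dev/sd[a-d]1) and --chunk are hypothetical.
# Creates a 4-disk Linux software RAID-10 like the one described.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      --layout=n2 --chunk=256 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial resync progress:
cat /proc/mdstat
```

The `n2` (near-2) layout is the mdadm default for RAID-10 and keeps two
copies of each chunk on adjacent devices.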

> Next, I've noticed that when drbd is disconnected, my performance
> doubles. So I started to investigate why that was. I found that the
> problem was that drbd replication was going through the same 4
> round-robin bonded interfaces that the iSCSI export was using. So I've
> split those 4 interfaces into two bonds, both round-robin: one for
> iSCSI, the other for DRBD replication only. Now I get the same
> performance as when drbd was disconnected, with only a minor decrease
> (~5-10%).

That most likely has nothing to do with the fact that you previously
shared the same bonded link between DRBD and iSCSI and no longer do.
Instead, it's most probably because your DRBD link is now bonded over
just 2 NICs instead of 4. See my earlier posts in this thread about
this issue.
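The effect Florian describes - round-robin striping over more links
causing more packet reordering, which hurts a single TCP stream like
DRBD's - can be illustrated with a toy model. This is purely
illustrative; the latencies and send interval are made up and say
nothing about real NIC timing:

```python
def reorder_count(n_links, n_packets=100, send_interval=1.0):
    """Count out-of-order packet pairs when striping round-robin over
    n_links, each link having a slightly different fixed latency.
    All numbers are invented for illustration."""
    latency = [10.0 + 3.0 * j for j in range(n_links)]
    arrival = [i * send_interval + latency[i % n_links]
               for i in range(n_packets)]
    # A pair (i, j) is reordered if packet j was sent after packet i
    # but arrives earlier.
    return sum(1 for i in range(n_packets)
                 for j in range(i + 1, n_packets)
                 if arrival[i] > arrival[j])

print("2-link bond reordered pairs:", reorder_count(2))
print("4-link bond reordered pairs:", reorder_count(4))
```

In this toy model the 4-link bond produces noticeably more reordered
pairs than the 2-link bond, which is the mechanism behind the TCP
throughput penalty on wider balance-rr bonds.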

Florian


_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user