Hi,

I'm trying to use DRBD with the backing storage on a FusionIO device (NAND
flash without disk-controller overhead). Testing with fio shows I'm taking a
significant performance hit.

IOPS
`fio --directory=/fio/test --direct=1 --rw=randread --bs=4k --size=5G 
--numjobs=64 --runtime=10 --group_reporting --name=file1`
raw device: 104k
drbd device: 75k

Bandwidth
`fio --directory=/fio/test --direct=1 --rw=randread --bs=1m --size=5G 
--numjobs=4 --runtime=100 --group_reporting --name=file1`
raw device: 750 MB/s
drbd device: 550-600 MB/s

I've tried DRBD 8.3.9 and 8.3.10; 8.3.10 might be slightly better, but the
difference is mostly insignificant (1-2%).

The replication link is 2 x 1 GbE dedicated interfaces bonded in round-robin
mode. Since I'm seeing the hit on reads, I'm not concerned with this
potential write bottleneck at the moment, unless it's somehow related. I plan
to move to 10 GbE cards in the near future if this works out.

The DRBD process seems to be CPU-bound, but performance is worse if I change
the CPU affinity to span more than one core. This is on a Xeon E5430 @ 2.66 GHz.
I imagine a faster CPU could improve performance, but I'm looking for other
suggestions, of course.
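For reference, I've been controlling affinity through DRBD's own `cpu-mask`
option in the `syncer` section of drbd.conf (it takes a hex mask, so "1" pins
the DRBD threads to core 0). A minimal sketch of what I have; the resource
name is just a placeholder:

```
resource r0 {
  syncer {
    # hex CPU mask: "1" = core 0 only, "3" would allow cores 0-1
    cpu-mask "1";
  }
}
```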

Other relevant details:
- CentOS 5.5
- Kernel 2.6.18-194
- Filesystem: XFS
- DRBD 8.3.9 and 8.3.10, built into RPMs from source (are there any
  compile-time tweaks I could use, such as CPU type?)

Thanks,
Mark

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
