I apologise for a slightly off-topic question, but this controller is on the Inktank
recommended hardware list, so someone here might have an idea.

I have 3 different controllers behind my OSDs in the cluster:

1) LSI SAS 2308 aka 9207-8i in IT mode (the goal is to use this one
everywhere)
2) a few Intel integrated C606 SAS controllers
3) 1-2 Intel integrated SATA controllers

What I am seeing is that any SSD that normally does ~30K IOPS on the Intel
HBA achieves at most 100 IOPS on the LSI SAS 2308 (maybe 200, since I'm testing
fio on top of a filesystem). The writes are synchronous, tested with fio
--direct=1 --sync=1.
This seems abysmal and I have no idea what is causing it. There is no
configuration utility that I know of, and no settings for the HBA when it is in
IT (target/initiator) mode; the drive write cache is enabled (disabling it
makes no difference).
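For reference, the sync-write test can be reproduced with a fio job file along
these lines (the filename, size, and runtime are placeholders for my setup,
adjust as needed):

```ini
; Synchronous 4k random-write test against a file on the
; filesystem backed by the HBA under test.
[global]
filename=/mnt/testfs/fio.dat   ; placeholder path on the target filesystem
size=1G
direct=1        ; O_DIRECT: bypass the page cache
sync=1          ; O_SYNC: each write must reach stable storage
rw=randwrite
bs=4k
ioengine=sync
runtime=60
time_based

[syncwrite]
numjobs=1
iodepth=1       ; the sync engine is effectively depth 1 anyway
```

The reported write IOPS from this job is the number I'm comparing between the
LSI and Intel controllers.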

I am seeing this on all hosts that have this card, with different firmware
versions (17 and 19) and with different drives. The only things in common are
the mpt2sas driver, which is the same everywhere, and the brand of SSDs; those
same SSDs perform much better when put in a different HBA.

Has anybody seen this? Real workloads seem to be unaffected so far, but it
could quickly become a bottleneck once we upgrade to Giant.

Thanks

Jan


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com