Hello,

A few questions about Ceph's current support for InfiniBand:

(A) Can Ceph use InfiniBand's native protocol stack, or must it use
IP-over-IB?  Google turns up a couple of entries in the Ceph wiki
related to native IB support (see [1], [2]), but neither seems
finished and there is no timeline.

[1]: 
https://wiki.ceph.com/Planning/Blueprints/Emperor/msgr%3A_implement_infiniband_support_via_rsockets
[2]: http://wiki.ceph.com/Planning/Blueprints/Giant/Accelio_RDMA_Messenger


(B) Can we connect to the same Ceph cluster from InfiniBand *and*
Ethernet?  Some clients only have Ethernet and will not be upgraded,
while others would have QDR InfiniBand -- we would like both sets to
access the same storage cluster.
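
For context, I imagine this would mean running the cluster over
IP-over-IB and putting the monitors' public network somewhere both
client sets can route to -- a hypothetical ceph.conf sketch (subnet
values are made up, not our actual setup):

```ini
# Hypothetical ceph.conf fragment -- the subnets below are invented
# for illustration only.
[global]
# Front-end traffic: must be reachable from both the Ethernet-only
# clients and the IPoIB clients (possibly via routing between the two).
public network = 192.168.1.0/24
# Back-end replication traffic: could stay on the IPoIB fabric alone.
cluster network = 10.10.10.0/24
```

Is that roughly the intended way to mix the two fabrics, or is there a
better approach?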


(C) I found this old thread about Ceph's performance on 10GbE and
InfiniBand: are the issues reported there still current?

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/6816


Thanks for any hint!

Riccardo

--
Riccardo Murri
http://www.s3it.uzh.ch/about/team/

S3IT: Services and Support for Science IT
University of Zurich
Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)
Tel: +41 44 635 4222
Fax: +41 44 635 6888
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
