Hi cephers,

Is there anyone out there using Ceph (any version) over an Infiniband FDR
fabric (for both the public and cluster networks) who could share some
performance results? To be more specific, the output of running something
like this against an RBD volume mapped on an IB host:

# fio --rw=randread --bs=4m --numjobs=4 --iodepth=32 --runtime=22 \
    --time_based --size=16777216k --loops=1 --ioengine=libaio --direct=1 \
    --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap \
    --group_reporting --exitall \
    --name=dev-ceph-randread-4m-4thr-libaio-32iodepth-22sec \
    --filename=/mnt/rbdtest/test1

# fio --rw=randread --bs=1m --numjobs=4 --iodepth=32 --runtime=22 \
    --time_based --size=16777216k --loops=1 --ioengine=libaio --direct=1 \
    --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap \
    --group_reporting --exitall \
    --name=dev-ceph-randread-1m-4thr-libaio-32iodepth-22sec \
    --filename=/mnt/rbdtest/test2

# fio --rw=randwrite --bs=1m --numjobs=4 --iodepth=32 --runtime=22 \
    --time_based --size=16777216k --loops=1 --ioengine=libaio --direct=1 \
    --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap \
    --group_reporting --exitall \
    --name=dev-ceph-randwrite-1m-4thr-libaio-32iodepth-22sec \
    --filename=/mnt/rbdtest/test3

# fio --rw=randwrite --bs=4m --numjobs=4 --iodepth=32 --runtime=22 \
    --time_based --size=16777216k --loops=1 --ioengine=libaio --direct=1 \
    --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap \
    --group_reporting --exitall \
    --name=dev-ceph-randwrite-4m-4thr-libaio-32iodepth-22sec \
    --filename=/mnt/rbdtest/test4
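The four invocations differ only in the I/O pattern (--rw), the block size
(--bs), and the job name/test file. If it helps anyone reproduce the runs, a
small hypothetical helper like this can generate all four command lines (the
function name is mine; flags and paths match the commands above):

```shell
# Hypothetical helper: print the four fio command lines used above.
# Only --rw, --bs, the job name, and the test file vary between runs.
gen_fio_cmds() {
  i=1
  for spec in randread:4m randread:1m randwrite:1m randwrite:4m; do
    rw=${spec%%:*}   # pattern before the colon, e.g. randread
    bs=${spec##*:}   # block size after the colon, e.g. 4m
    echo "fio --rw=$rw --bs=$bs --numjobs=4 --iodepth=32 --runtime=22" \
         "--time_based --size=16777216k --loops=1 --ioengine=libaio --direct=1" \
         "--invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap" \
         "--group_reporting --exitall" \
         "--name=dev-ceph-$rw-$bs-4thr-libaio-32iodepth-22sec" \
         "--filename=/mnt/rbdtest/test$i"
    i=$((i+1))
  done
}

gen_fio_cmds
```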

I would really appreciate the outputs.

Thanks in advance,

*German*
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
