Hello,
I was wondering what your experience has been with using Ceph over RDMA?
- How did you set it up?
- What documentation did you use to set it up?
- What known issues did you run into when using it?
- Do you still use it?
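For context, the kind of configuration I have in mind is roughly the snippet below; the messenger options are just my reading of the docs and the device name is only an example, not a tested setup:

  [global]
  # switch the async messenger to the RDMA transport
  ms_type = async+rdma
  # RDMA-capable NIC to bind to; device name is only an example
  ms_async_rdma_device_name = mlx5_0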
Kind regards
Gabryel Mason-Williams
___
10G should be fine on bluestore. The smallest size you can have is about 2GB, since LVM takes up about 1GB of space at that size, so at that point most of the disk is taken up by LVM. I have seen/recorded performance benefits in some cases when using small OSD sizes on bluestore instead of larger ones.
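In case it helps, an OSD that size can be stood up on an LVM volume along the lines below; the VG/LV names are only examples, not my exact setup:

  # carve out a 10G logical volume for the OSD (names are illustrative)
  lvcreate -L 10G -n osd-data-0 cephvg
  # hand the LV to ceph-volume as a bluestore OSD
  ceph-volume lvm create --bluestore --data cephvg/osd-data-0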
Have you tried stepping up in smaller increments instead of jumping straight from 8 to 128? That is quite a big leap.
___
We have been benchmarking CephFS and comparing it with RADOS to see the performance difference and how much overhead CephFS adds. However, we are getting odd results with CephFS when using more than one OSD server (each OSD server has only one disk), whereas with RADOS everything appears normal. These tests are
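As a rough sketch of the kind of comparison, the RADOS side can be driven with something like the command below; the pool name, runtime, and concurrency are placeholders rather than our exact tooling:

  # 60-second write benchmark straight against a pool, 16 concurrent ops
  rados bench -p testpool 60 write -t 16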
___
Hi Mark,
Sorry for the delay, I didn't see your response.
Yes, the pools are all using 1x replication. I have tried changing numjobs and iodepth to no avail. This is using the kernel CephFS client.
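For reference, the jobs are along the lines of the command below; the mount point, file size, and the exact numjobs/iodepth values are illustrative rather than my precise parameters:

  # sequential 4M writes against the kernel CephFS mount
  fio --name=cephfs-write --directory=/mnt/cephfs --rw=write --bs=4M --size=10G \
      --ioengine=libaio --direct=1 --numjobs=4 --iodepth=32 --group_reporting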
Gabryel
___