Hi Darren & Anthony,
>>How many PG’s have you got configured for the Ceph pool that you are testing
>>against?
I have created the CloudStack pools with a pg_num of 64.
ceph osd pool get cloudstack-BRK pg_autoscale_mode
pg_autoscale_mode: on
ceph osd pool ls
.mgr
cloudstack-GUL
cloudstack-BRK
.nfs
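For what it's worth, 64 PGs may be on the low side for an NVMe-backed pool. The usual rule of thumb from the Ceph docs targets roughly 100 PGs per OSD, divided by the pool's replica size and rounded up to a power of two. A quick sketch of that calculation (the OSD count of 12 and replica size of 3 are assumptions here; substitute your cluster's actual values):

```shell
# Rule-of-thumb pg_num estimate: ~100 PGs per OSD / replica size,
# rounded up to the next power of two. Values below are assumed.
OSDS=12   # assumed total number of OSDs across the cluster
SIZE=3    # assumed pool replica size
raw=$(( OSDS * 100 / SIZE ))
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num: $pg"
```

With the autoscaler on, `ceph osd pool autoscale-status` will show whether it already wants to grow the pool past 64.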
Hi Team,
I have set up two Ceph clusters, 3 nodes each, with two-way RBD mirroring.
In this setup, Ceph 1 mirrors to Ceph 2, and vice versa.
The RBD pools are integrated with CloudStack.
The Ceph clusters use NVMe drives, but I am experiencing very low IOPS
performance.
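One thing worth checking in a setup like this: journal-based RBD mirroring records every write to an image journal before the data object, which roughly doubles the write path and can badly hurt small-write IOPS even on NVMe; snapshot-based mirroring avoids the journal. A sketch of how to check which mode is in play (the pool name is taken from the thread; the image name is a placeholder):

```shell
# Show the mirroring mode configured for the pool (journal vs. snapshot)
rbd mirror pool info cloudstack-BRK

# If an image lists the 'journaling' feature, its writes are double-written
rbd info cloudstack-BRK/<image-name> | grep features
```

If the images show the `journaling` feature, comparing a benchmark against a non-mirrored image of the same pool would isolate the mirroring overhead from other causes.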
Best Regards,
Vignesh Varma G
Cloud Engineer
www.stackbill.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io