Hi All,
Which metadata pools in an RGW deployment should sit on the fastest
storage medium to improve the client access experience?
Also, is there an easy way to migrate these pools in a production
scenario with minimal to no outage, if possible?
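For example, would repointing each pool at an SSD-backed CRUSH rule do
this online? Something like the below is what I had in mind (pool and
rule names are placeholders, and on older releases the pool property is
crush_ruleset rather than crush_rule):

ceph osd pool set .rgw.buckets.index crush_rule ssd-rule
ceph osd pool set .rgw.root crush_rule ssd-rule

My understanding is that this backfills the data onto the new OSDs while
the pools stay online, but I would appreciate confirmation before trying
it in production.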
Regards,
Nikhil
__
I believe you can use ceph tell to inject these settings into a running cluster.
From your admin node you should be able to run:
ceph tell osd.* injectargs '--osd_recovery_max_active 1 --osd_max_backfills 1'
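To confirm the values took effect, you can query a daemon's running
configuration over its admin socket (osd.0 here is just an example):

ceph daemon osd.0 config show | grep -e osd_recovery_max_active -e osd_max_backfills

Note that injectargs only changes the running processes; add the settings
to ceph.conf as well if you want them to persist across restarts.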
Regards,
Nikhil Mitra
__
Hi Paul,
Did you try to stop the OSD first before marking it down and out?
stop ceph-osd id=21 (Upstart) or /etc/init.d/ceph stop osd.21 (SysVinit)
ceph osd crush remove osd.21
ceph auth del osd.21
ceph osd rm osd.21
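If osd.21 still holds data, it is usually safer to mark it out first and
let recovery finish before removing it, e.g.:

ceph osd out 21
ceph -w    # wait until the cluster reports HEALTH_OK again

then stop the daemon and run the crush remove / auth del / rm steps above.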
Regards,
Nikhil Mitra
__
We tried iSCSI with RBD to connect to VMware but ran into stability issues
(it could have been the target software we were using); we have found NFS
to be pretty reliable.
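In case it is useful, the rough shape of the NFS approach is below (a
sketch rather than our exact config; image, device, and path names are
placeholders):

rbd create vmware-ds --size 1024000    # ~1 TB image, size is in MB
rbd map vmware-ds                      # shows up as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /export/vmware-ds
mount /dev/rbd0 /export/vmware-ds
echo '/export/vmware-ds *(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

ESXi then mounts /export/vmware-ds as an NFS datastore.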
From: "Nikhil Mitra (nikmitra)" mailto:nikmi...@cisco.com>>
To: ceph-users@lists.
Hi,
Has anyone implemented Ceph RBD with the VMware ESXi hypervisor? We are
just looking to use it as a native VMFS datastore to host VMDKs. Please
let me know if there are any documents out there that might point me in
the right direction to get started on this. Thank you.
Regards,
Nikhil Mitra