We are using Ceph+RBD+NFS under pacemaker for VMware. We are doing iSCSI using
SCST but have not used it against VMware, just Solaris and Hyper-V.
It generally works and performs well enough – the biggest issues are
clustering for iSCSI ALUA support and NFS failover, most of which we have
Hi all,
Today I wrote some code to get the usage of a Ceph cluster per CRUSH
root. I missed / could not find an existing way to do it the way I wanted,
so I wrote it myself.
./ceph_usage.py
Crush root    OSDs    GB    GB used    GB available    Average utilization
--
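
A minimal sketch of this approach, assuming the JSON layout produced by
"ceph osd df tree -f json" (the nodes/children field names can vary between
Ceph releases), might look like the following; it prints one line per CRUSH
root in the same column order as the header above.

#!/usr/bin/env python
# Sketch: aggregate OSD usage per CRUSH root from "ceph osd df tree -f json".
# Assumes each node carries "id", "name", "type", "children", "kb",
# "kb_used" and "kb_avail" fields.
import json
import subprocess
from collections import deque

def get_df_tree():
    # Ask the cluster for the OSD df tree in JSON form.
    out = subprocess.check_output(['ceph', 'osd', 'df', 'tree', '-f', 'json'])
    return json.loads(out)

def usage_per_root(tree):
    nodes = {n['id']: n for n in tree['nodes']}
    results = []
    for node in tree['nodes']:
        if node['type'] != 'root':
            continue
        # Walk the bucket hierarchy below this root and collect its OSDs.
        osds, todo = [], deque(node.get('children', []))
        while todo:
            child = nodes[todo.popleft()]
            if child['type'] == 'osd':
                osds.append(child)
            else:
                todo.extend(child.get('children', []))
        kb = sum(o['kb'] for o in osds)
        kb_used = sum(o['kb_used'] for o in osds)
        kb_avail = sum(o['kb_avail'] for o in osds)
        results.append({
            'root': node['name'],
            'osds': len(osds),
            'gb': kb / 1024.0 / 1024.0,
            'gb_used': kb_used / 1024.0 / 1024.0,
            'gb_avail': kb_avail / 1024.0 / 1024.0,
            'avg_util': 100.0 * kb_used / kb if kb else 0.0,
        })
    return results

if __name__ == '__main__':
    for r in usage_per_root(get_df_tree()):
        print('{root:<12} {osds:>5} {gb:>10.0f} {gb_used:>10.0f} '
              '{gb_avail:>12.0f} {avg_util:>8.2f}%'.format(**r))
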
On Tue, Feb 27, 2018 at 2:29 PM, Jan Pekař - Imatic wrote:
> I think I hit the same issue.
> I have corrupted data on cephfs and I don't remember the same issue before
> Luminous (I did the same tests before).
>
> It is on my test 1-node cluster with lower memory than recommended (so
> server is s
Ceph Users,
My question is: if all mons are down (I know it's a terrible situation to be in),
does an existing RBD volume which is mapped to a host and being
used (read/written to) continue to work?
I understand that it won't get notifications about osdmap changes, etc., but assuming
nothing fails, does the read