The problem with mounting an RBD or CephFS on an OSD node only arises
if you do so with the kernel client. In a previous message on the ML,
John Spray explained this wonderfully:
"This is not a Ceph-specific thing -- it can also affect similar systems
like Lustre. The classic case is when under so
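
Roughly, the kernel client is what a plain "mount -t ceph" gives you,
while ceph-fuse keeps the client entirely in userspace and so sidesteps
the kernel writeback path described above. A minimal sketch, with
placeholder monitor addresses, mount points, and keyring paths:

    # kernel client -- the case the warning above applies to
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # userspace client -- ceph-fuse, run on the same host instead
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs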
Hi Marc,
We mount CephFS using FUSE on all 10 nodes of our cluster and, provided
that we limit BlueStore memory use, find it to be reliable*.
bluestore_cache_size = 209715200
bluestore_cache_kv_max = 134217728
Without the above tuning, we get OOM errors.
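
For anyone wanting to try the same limits: these are OSD-side options,
so a minimal sketch of a ceph.conf fragment, assuming they are set
cluster-wide under [osd] and the OSDs are restarted afterwards:

    [osd]
    # cap the total BlueStore cache at 200 MiB per OSD
    bluestore_cache_size = 209715200
    # limit the RocksDB (key/value) share of that cache to 128 MiB
    bluestore_cache_kv_max = 134217728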
As others will confirm, the FUSE client [...]
I have a 3-node test cluster and I would like to expand it with a 4th
node that currently mounts the CephFS and rsyncs backups to it. I
remember reading something about how you could create a deadlock
situation by doing this.
What are the risks I would be taking if I did this?