Hi,
I have set up a Ceph cluster (Octopus) and installed the rbd
plugins/provisioner in my Kubernetes cluster.
I can dynamically create FS and Block volumes, which is fine. For that I
have created the following StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
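
For reference, a minimal ceph-csi RBD StorageClass looks roughly like the
sketch below; the name, clusterID, pool and secret references are
placeholders and have to match your own cluster and ceph-csi deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc                 # placeholder name
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-fsid>           # output of `ceph fsid`
  pool: kubernetes                 # RBD pool backing the volumes
  imageFeatures: layering          # stick to features the kernel rbd client supports
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true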
hi,
I figured it out.
1) The image created in Ceph should have only the 'layering' feature
enabled. It can be created with the command:
$ rbd create test-image --size=1024 --pool=kubernetes --image-feature layering
2) now the deployment should look like this:
---
apiVersion: v1
kind
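
As a rough sketch, a Pod that consumes the pre-created image through the
in-tree rbd volume plugin could look like the following; the monitor
address, secret name and pod name are placeholders, and note that the
in-tree rbd plugin is deprecated in newer Kubernetes releases in favour of
ceph-csi:

---
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test                   # placeholder name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: rbd-vol
          mountPath: /mnt/rbd
  volumes:
    - name: rbd-vol
      rbd:
        monitors:
          - "10.0.0.1:6789"        # placeholder Ceph monitor address
        pool: kubernetes
        image: test-image          # the image created in step 1
        user: admin
        secretRef:
          name: ceph-secret        # secret holding the client key
        fsType: ext4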
Hi. I'm having a performance issue with ceph rbd. The performance is not
what I expected according to my node metrics. Here are the metrics.
I've used Calico as CNI.
version: Rook-ceph 1.6
I've used the stock yaml files and Rook is not running on the host network.
CentOS 8 Stream
[root@node4 ~]# uname -a
Linux n
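
To put a number on the rbd performance independent of the application, a
fio run inside a pod with an rbd-backed volume mounted gives figures that
are easier to compare (the pod name, mount path and presence of fio in the
image are assumptions here):

$ kubectl exec -it rbd-test -- fio --name=randwrite \
    --filename=/mnt/data/fio.test --rw=randwrite --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --size=1G --runtime=60 --time_based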
I doubt it. The problem is that the CephFS MDS must perform
distributed metadata transactions with ordering and locking,
whereas a filesystem on rbd runs locally and doesn't have to worry
about other computers writing to the same block device.
Our bottleneck in production is usually the MDS CPU load.
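
A quick way to see that difference is a metadata-heavy micro-test against
both mounts; the paths below are placeholders for a CephFS mount and for
an ext4 filesystem on a mapped rbd device:

$ mkdir /mnt/cephfs/mdtest /mnt/rbd-ext4/mdtest
# every file create below is a round trip to the CephFS MDS
$ time bash -c 'for i in $(seq 1 10000); do touch /mnt/cephfs/mdtest/f$i; done'
# the same loop on ext4-on-rbd touches only local metadata, no MDS involved
$ time bash -c 'for i in $(seq 1 10000); do touch /mnt/rbd-ext4/mdtest/f$i; done'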
Hi all,
The cluster here is running v14.2.20 and is used for RBD images.
We have a PG in recovery_unfound state and since this is the first
time we've had this occur, we wanted to get your advice on the best
course of action.
PG 4.1904 went into state active+recovery_unfound+degraded+repair [1]
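
For context, the commands below are the standard way to see which objects
are unfound and which OSDs the PG is still waiting to probe (the PG id
matches the one above):

$ ceph health detail
$ ceph pg 4.1904 query          # recovery_state shows which OSDs are being probed
$ ceph pg 4.1904 list_unfound   # lists the unfound objects and where copies might still exist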