I created a cephfs using the mgr dashboard, which created two pools: cephfs.fs.meta
and cephfs.fs.data.
We are using custom provisioning for user-defined volumes (users provide YAML
manifests with a definition of what they want), which creates dedicated data pools
for them, so cephfs.fs.data is never used.
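For context, a minimal sketch of how such a dedicated data pool can be attached to an existing filesystem and pinned to a directory; the filesystem name "fs", the pool name cephfs.fs.userdata and the mount path are only placeholders, not anything from the actual setup:

    # create a dedicated data pool for one user volume (example name)
    ceph osd pool create cephfs.fs.userdata
    # attach it to the existing filesystem (assuming the fs is named "fs")
    ceph fs add_data_pool fs cephfs.fs.userdata
    # on a mounted client, pin the user's directory to the new pool via its file layout
    setfattr -n ceph.dir.layout.pool -v cephfs.fs.userdata /mnt/cephfs/volumes/user-a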
Hello,
In https://docs.ceph.com/en/latest/cephfs/client-auth/ we can find that

    ceph fs authorize cephfs_a client.foo / r /bar rw

results in

    client.foo
        key: *key*
        caps: [mds] allow r, allow rw path=/bar
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=cephfs_a
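A quick way to double-check what actually gets stored on a live cluster, using the client name from the docs example:

    # print the key and caps the monitors stored for this client
    ceph auth get client.foo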
Wha
Most likely it wasn't; the ceph help and documentation are not very clear about
this:

    osd deep-scrub <who>    initiate deep scrub on osd <who>,
                            or use <all|any> to deep scrub all
It doesn't say anything like "initiate dee
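In practice, both forms can be issued directly; a short sketch, where the OSD id and PG id are only placeholders:

    # deep scrub every PG for which this OSD is primary
    ceph osd deep-scrub 4
    # or deep scrub one specific placement group
    ceph pg deep-scrub 4.1d
    # afterwards, check when that PG was last deep scrubbed
    ceph pg 4.1d query | grep last_deep_scrub_stamp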
Hello,
No, I don't have osd_scrub_auto_repair enabled. Interestingly, about a week after
forgetting about this, an error manifested:

    [ERR] OSD_SCRUB_ERRORS: 1 scrub errors
    [ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent
        pg 4.1d is active+clean+inconsistent, acting [4,2]
which could be
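For reference, a common way to look into such an inconsistent PG and repair it, with pg 4.1d taken from the health output above:

    # list the objects/shards that failed the deep scrub, with error details
    rados list-inconsistent-obj 4.1d --format=json-pretty
    # ask the primary OSD to repair the PG from the healthy replica
    ceph pg repair 4.1d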
Hello,
I wanted to try out (in a lab ceph setup) what exactly is going to happen
when part of the data on an OSD disk gets corrupted. I created a simple test
where I was going through the block device data until I found something
that resembled user data (using dd and hexdump) (/dev/sdd is a block
device
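Roughly, such a test can be sketched as follows; the device and offsets are placeholders, and the second command is destructive, so it is only meant for a throwaway lab OSD:

    # scan a chunk of the OSD's block device for something that looks like user data
    dd if=/dev/sdd bs=1M skip=2048 count=1 status=none | hexdump -C | less
    # overwrite a few bytes at that offset to simulate silent corruption (lab only!)
    dd if=/dev/urandom of=/dev/sdd bs=1 seek=2147483648 count=16 conv=notrunc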
Hello,
My goal is to set up multisite RGW with two separate Ceph clusters in separate
datacenters, where RGW data is replicated. I created a lab for this
purpose in both locations (with the latest Reef Ceph installed using cephadm) and
tried to follow this guide: https://docs.ceph.com/en/reef/r
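The steps from that guide boil down to roughly the following; the realm, zonegroup and zone names, endpoints and keys are made-up lab placeholders:

    # on the primary cluster: realm, master zonegroup and master zone
    radosgw-admin realm create --rgw-realm=lab --default
    radosgw-admin zonegroup create --rgw-zonegroup=dc --endpoints=http://rgw.dc1.lab:80 --master --default
    radosgw-admin zone create --rgw-zonegroup=dc --rgw-zone=dc1 --endpoints=http://rgw.dc1.lab:80 --master --default
    # system user whose keys the secondary zone uses to pull the realm
    radosgw-admin user create --uid=sync-user --display-name="Sync User" --system
    radosgw-admin zone modify --rgw-zone=dc1 --access-key=<access-key> --secret=<secret>
    radosgw-admin period update --commit

    # on the secondary cluster: pull the realm and create the second zone
    radosgw-admin realm pull --url=http://rgw.dc1.lab:80 --access-key=<access-key> --secret=<secret>
    radosgw-admin zone create --rgw-zonegroup=dc --rgw-zone=dc2 --endpoints=http://rgw.dc2.lab:80 --access-key=<access-key> --secret=<secret>
    radosgw-admin period update --commit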