[ceph-users] Large amount of empty objects in unused cephfs data pool

2024-07-18 Thread Petr Bena
I created a cephfs using the mgr dashboard, which created two pools: cephfs.fs.meta and cephfs.fs.data. We are using custom provisioning for user-defined volumes (users provide YAML manifests with a definition of what they want), which creates dedicated data pools for them, so cephfs.fs.data is never used …
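As a rough sketch of the setup described above (pool, filesystem and mount names are illustrative, not from the original post), a dedicated per-user data pool is typically attached and the default pool inspected like this:

  # create and attach a dedicated data pool for one user volume
  ceph osd pool create cephfs.user1.data
  ceph fs add_data_pool cephfs cephfs.user1.data
  # direct a volume directory at the new pool via its file layout
  setfattr -n ceph.dir.layout.pool -v cephfs.user1.data /mnt/cephfs/volumes/user1
  # inspect what actually lives in the unused default data pool
  ceph df detail | grep cephfs.fs.data
  rados -p cephfs.fs.data ls | head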

[ceph-users] Documentation for meaning of "tag cephfs" in OSD caps

2024-06-11 Thread Petr Bena
Hello, in https://docs.ceph.com/en/latest/cephfs/client-auth/ we can find that "ceph fs authorize cephfs_a client.foo / r /bar rw" results in: client.foo key: *key* caps: [mds] allow r, allow rw path=/bar caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a. What …
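For reference, a sketch of how the same caps could be assigned by hand with ceph auth caps, reusing the client and filesystem names from the example above; as I understand it, "tag cephfs data=cephfs_a" matches any pool whose "cephfs" application metadata carries data=cephfs_a:

  ceph auth caps client.foo \
      mds 'allow r, allow rw path=/bar' \
      mon 'allow r' \
      osd 'allow rw tag cephfs data=cephfs_a'
  # the application tag on a pool can be inspected (pool name illustrative)
  ceph osd pool application get cephfs_a.data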

[ceph-users] Re: Testing CEPH scrubbing / self-healing capabilities

2024-06-10 Thread Petr Bena
Most likely it wasn't; the ceph help or documentation is not very clear about this: "osd deep-scrub <who>: initiate deep scrub on osd <who>, or use <all|any> to deep scrub all". It doesn't say anything like "initiate deep …
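A short sketch of the two related commands, since the quoted help only covers the per-OSD form (OSD and PG IDs are illustrative):

  # ask osd.4 to deep scrub the PGs for which it is primary
  ceph osd deep-scrub osd.4
  # deep scrub one specific placement group instead
  ceph pg deep-scrub 4.1d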

[ceph-users] Re: Testing CEPH scrubbing / self-healing capabilities

2024-06-10 Thread Petr Bena
Hello, no, I don't have osd_scrub_auto_repair enabled. Interestingly, about a week later, after I had forgotten about this, an error manifested: [ERR] OSD_SCRUB_ERRORS: 1 scrub errors; [ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent; pg 4.1d is active+clean+inconsistent, acting [4,2], which could be …
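A sketch of the usual follow-up for an inconsistent PG like the one above: first list what the deep scrub flagged, then ask the primary to repair it:

  rados list-inconsistent-obj 4.1d --format=json-pretty
  ceph pg repair 4.1d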

[ceph-users] Testing CEPH scrubbing / self-healing capabilities

2024-06-04 Thread Petr Bena
Hello, I wanted to try out (in a lab Ceph setup) what exactly happens when part of the data on an OSD disk gets corrupted. I created a simple test where I went through the block device data until I found something that resembled user data (using dd and hexdump) (/dev/sdd is a block device …
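A rough sketch of the lab-only test described above; the block offset is hypothetical, and the second command destroys data, so it only belongs on a disposable test OSD:

  # inspect a chunk of the OSD's block device, looking for recognizable user data
  dd if=/dev/sdd bs=4096 skip=123456 count=1 status=none | hexdump -C | head
  # overwrite that chunk to simulate silent on-disk corruption (lab only!)
  dd if=/dev/urandom of=/dev/sdd bs=4096 seek=123456 count=1 conv=notrunc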

[ceph-users] Multisite RGW setup not working when following the docs step by step

2023-08-30 Thread Petr Bena
Hello, my goal is to set up multisite RGW with 2 separate Ceph clusters in separate datacenters, where RGW data is replicated. I created a lab for this purpose in both locations (with the latest Reef release installed using cephadm) and tried to follow this guide: https://docs.ceph.com/en/reef/r …
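For orientation, a sketch of the first steps of that guide on the primary cluster; the realm, zonegroup and zone names and the endpoint are placeholders, not from the original post:

  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup create --rgw-zonegroup=myzg --endpoints=http://rgw1:80 --master --default
  radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=dc1 --endpoints=http://rgw1:80 --master --default
  radosgw-admin period update --commit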