[ceph-users] Ceph monitor won't start after Ubuntu update

2021-06-15 Thread Petr
way to get cluster running or at least get data from OSDs? Will appreciate any help. Thank you -- Best regards, Petr
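
For reference, when the monitors are lost but the OSDs are intact, the Ceph troubleshooting docs describe rebuilding the monitor store from the OSDs. A minimal sketch of that path (OSD paths, scratch directory, and keyring location are assumptions to adapt):

    # Stop the OSDs on each host, then harvest cluster maps from every OSD store
    mkdir /tmp/mon-store
    for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path "$osd" --no-mon-config \
            --op update-mon-db --mon-store-path /tmp/mon-store
    done
    # Rebuild the monitor store from the harvested maps
    ceph-monstore-tool /tmp/mon-store rebuild -- \
        --keyring /etc/ceph/ceph.client.admin.keyring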

[ceph-users] Re: Ceph monitor won't start after Ubuntu update

2021-06-16 Thread Petr
Hello Konstantin, Wednesday, June 16, 2021, 1:50:55 PM, you wrote: > Hi, >> On 16 Jun 2021, at 01:33, Petr wrote: >> I've upgraded my Ubuntu server from 18.04.5 LTS to Ubuntu 20.04.2 LTS via 'do-release-upgrade', during that proc

[ceph-users] Large amount of empty objects in unused cephfs data pool

2024-07-18 Thread Petr Bena
I created a cephfs using the mgr dashboard, which created two pools: cephfs.fs.meta and cephfs.fs.data. We are using custom provisioning for user-defined volumes (users provide yaml manifests with a definition of what they want), which creates dedicated data pools for them, so cephfs.fs.data is never u
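
One likely explanation: CephFS stores a backtrace object for every file in the filesystem's first (default) data pool, even when file layouts direct all data to other pools, so that pool accumulates zero-size objects. A quick way to check (pool name taken from the post, object name illustrative):

    # List a few objects in the nominally unused default data pool
    rados -p cephfs.fs.data ls | head -5
    # Backtrace-only objects report a size of 0 bytes
    rados -p cephfs.fs.data stat 10000000001.00000000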

[ceph-users] MDS not becoming active after migrating to cephadm

2021-10-04 Thread Petr Belyaev
flags are identical. Could someone please advise me why the dockerized MDS is stuck as a standby? Maybe some config values are missing or something? Best regards, Petr
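
A few commands that can help narrow this down, assuming a cephadm deployment (the daemon name is a placeholder):

    # What the monitors see: ranks, states, standbys, compat flags
    ceph fs status
    ceph fs dump | grep -i -e standby -e compat
    # Logs of the containerized MDS daemon
    cephadm logs --name mds.<name>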

[ceph-users] Re: MDS not becoming active after migrating to cephadm

2021-10-04 Thread Petr Belyaev
, problems started during the migration to cephadm (which was done after migrating everything to Pacific). It only occurs when using a dockerized MDS; non-dockerized MDS nodes, also on Pacific, run fine. Petr > On 4 Oct 2021, at 12:43, 胡 玮文 wrote: > > Hi Petr, > > Pl
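
Since only the containerized daemons misbehave, comparing their effective configuration against a working bare-metal MDS may be revealing. A sketch (daemon ids are placeholders):

    # Configuration as seen by the mon database for each daemon
    ceph config show mds.<dockerized-id>
    ceph config show mds.<baremetal-id>
    # Differences from compiled-in defaults, via the admin socket
    ceph daemon mds.<id> config diff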

[ceph-users] Re: MDS not becoming active after migrating to cephadm

2021-10-05 Thread Petr Belyaev
hat you are facing the same issue. > > [1]: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KQ5A5OWRIUEOJBC7VILBGDIKPQGJQIWN/ > >> On 4 Oct 2021, at 19:0

[ceph-users] Testing CEPH scrubbing / self-healing capabilities

2024-06-04 Thread Petr Bena
Hello, I wanted to try out (on a lab ceph setup) what exactly happens when part of the data on an OSD disk gets corrupted. I created a simple test where I was going through the block device data until I found something that resembled user data (using dd and hexdump) (/dev/sdd is a block devic
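
A rough sketch of this kind of lab-only test (device, offsets, and PG id are illustrative; never run this on a disk you care about):

    # Hunt for recognizable user data on the raw OSD device
    dd if=/dev/sdd bs=1M skip=4096 count=1 2>/dev/null | hexdump -C | less
    # Destructive step: overwrite a few bytes in place
    printf 'GARBAGE' | dd of=/dev/sdd bs=1 seek=$((4096*1024*1024+512)) conv=notrunc
    # Force a deep scrub of the PG covering that object and check the result
    ceph pg deep-scrub <pgid>
    ceph health detail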

[ceph-users] Re: Testing CEPH scrubbing / self-healing capabilities

2024-06-10 Thread Petr Bena
Hello, no, I don't have osd_scrub_auto_repair enabled. Interestingly, about a week after forgetting about this, an error manifested: [ERR] OSD_SCRUB_ERRORS: 1 scrub errors [ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent; pg 4.1d is active+clean+inconsistent, acting [4,2], which could be
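
For an inconsistency like this, the usual next steps are to inspect which replica is bad and then ask Ceph to repair it from an authoritative copy (PG id taken from the post):

    # Show which object/shard failed its checksum and on which OSD
    rados list-inconsistent-obj 4.1d --format=json-pretty
    # Rewrite the bad replica from a good copy
    ceph pg repair 4.1d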

[ceph-users] Re: Testing CEPH scrubbing / self-healing capabilities

2024-06-10 Thread Petr Bena
Most likely it wasn't; the ceph help or documentation is not very clear about this: "osd deep-scrub <who>: initiate deep scrub on osd <who>, or use <all|any> to deep scrub all". It doesn't say anything like "initiate dee
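
The two forms are easy to conflate; as I understand the CLI (ids illustrative):

    # Deep-scrub the PGs on a specific OSD
    ceph osd deep-scrub 4
    # Deep-scrub one specific placement group only
    ceph pg deep-scrub 4.1d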

[ceph-users] Documentation for meaning of "tag cephfs" in OSD caps

2024-06-11 Thread Petr Bena
Hello, in https://docs.ceph.com/en/latest/cephfs/client-auth/ we can find that

    ceph fs authorize cephfs_a client.foo / r /bar rw

results in

    client.foo
      key: *key*
      caps: [mds] allow r, allow rw path=/bar
      caps: [mon] allow r
      caps: [osd] allow rw tag cephfs data=cephfs_a

Wha
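
On the meaning of the "tag" syntax: instead of naming pools explicitly, the OSD cap matches every pool whose application metadata carries the named application ("cephfs") with the given key=value pair ("data=cephfs_a"); pools attached to a filesystem are tagged this way automatically. The tags can be inspected directly (pool name illustrative):

    # Show application metadata on a pool; the cap above matches pools where
    # the 'cephfs' application has data=cephfs_a
    ceph osd pool application get cephfs_a_data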

[ceph-users] Multisite RGW setup not working when following the docs step by step

2023-08-30 Thread Petr Bena
Hello, my goal is to set up multisite RGW with 2 separate CEPH clusters in separate datacenters, where RGW data are replicated. I created a lab for this purpose in both locations (with the latest reef ceph installed using cephadm) and tried to follow this guide: https://docs.ceph.com/en/reef/r
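
For comparison, a condensed sketch of the bootstrap sequence from that guide (realm/zonegroup/zone names and endpoints are placeholders):

    # On the primary cluster
    radosgw-admin realm create --rgw-realm=lab --default
    radosgw-admin zonegroup create --rgw-zonegroup=dc --endpoints=http://rgw1:80 --master --default
    radosgw-admin zone create --rgw-zonegroup=dc --rgw-zone=dc1 --endpoints=http://rgw1:80 --master --default
    radosgw-admin user create --uid=sync-user --display-name="sync user" --system
    radosgw-admin period update --commit
    # On the secondary cluster, pull the realm/period using the system user's keys
    radosgw-admin realm pull --url=http://rgw1:80 --access-key=<key> --secret=<secret>
    radosgw-admin period pull --url=http://rgw1:80 --access-key=<key> --secret=<secret>
    radosgw-admin zone create --rgw-zonegroup=dc --rgw-zone=dc2 --endpoints=http://rgw2:80 --access-key=<key> --secret=<secret>
    radosgw-admin period update --commit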

[ceph-users] CephFS clients waiting for lock when one of them goes slow

2020-08-12 Thread Petr Belyaev
. Has somebody seen similar issues before? Best regards, Petr Belyaev
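
When one client stalls and others queue behind it, the MDS admin socket usually shows which operations are blocked and whose caps they are waiting on (daemon name is a placeholder):

    # Operations currently stuck in the MDS, with the locks they wait on
    ceph daemon mds.<name> dump_ops_in_flight
    # Map client session ids to hosts/mounts
    ceph daemon mds.<name> session ls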

[ceph-users] postgresql vs ceph, fsync

2025-02-07 Thread Petr Holubec
We are evaluating the pros and cons of running postgresql backed by ceph. We know that running pg on dedicated physical hw is highly recommended, but we've got our reasons. So to the question: what could happen if we switch fsync to off on postgres backed by ceph? The increase in performance is huge, w
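
Worth noting: with fsync = off, any crash below postgres (the VM, the host, or the storage path) can leave the database cluster unrecoverably corrupt, whereas synchronous_commit = off recovers much of the performance while risking only the loss of the most recent commits, not corruption. A sketch of the safer knob:

    # Reloadable at runtime; trades durability of recent commits for speed
    psql -c "ALTER SYSTEM SET synchronous_commit = off;"
    psql -c "SELECT pg_reload_conf();"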