[ceph-users] Re: classes crush rules new cluster

2024-11-28 Thread Eugen Block
You could decompile the crushmap, add a dummy OSD (with a non-existing ID) carrying your new device class, add a rule, then compile it and inject it. Here's an excerpt from a lab cluster with 4 OSDs (0..3), adding a fifth, non-existing one:

  device 4 osd.4 class test
  rule testrule {
      id 6
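For reference, a minimal sketch of the decompile/edit/inject cycle described above; the file names are placeholders and the dummy device and rule need to match your own class name:

  ceph osd getcrushmap -o crushmap.bin        # export the current CRUSH map
  crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
  # edit crushmap.txt: add the dummy device (e.g. "device 4 osd.4 class test")
  # and a rule whose take step uses that class, then recompile and inject:
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new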

[ceph-users] Re: Snaptriming speed degrade with pg increase

2024-11-28 Thread Szabo, Istvan (Agoda)
Let's say yes if that is the issue. Istvan Szabo Staff Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com ---

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-28 Thread Frédéric Nass
Hi Igor, Thank you for taking the time to explain the fragmentation issue. I had figured out most of it by reading the tracker and the PR, but it's always clearer when you explain it. My question was more about why bluefs would still fail to allocate 4k chunks after being allowed to

[ceph-users] Re: Snaptriming speed degrade with pg increase

2024-11-28 Thread Bandelow, Gunnar
Dear Istvan, The first thing that stands out: Ubuntu 20.04 (EOL in April 2025) and Ceph v15 Octopus (EOL since 2022). Is there a possibility to upgrade these things? Best regards Gunnar --- Original Message --- Subject: [ceph-users] Snaptriming speed degrade with pg increase From: "Szabo,

[ceph-users] Snaptriming speed degrade with pg increase

2024-11-28 Thread Szabo, Istvan (Agoda)
Hi, When we scale the placement groups on a pool located in a full-NVMe cluster, the snaptrimming speed degrades a lot. Currently we are running with these values to avoid degrading client ops while still making some progress on snaptrimming, but it is terrible. (Octopus 15.2.17 on Ubuntu 20.04) -osd_max_trimmi
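The message is truncated above; for anyone tuning the same area, a hedged sketch of snaptrim throttles that are commonly adjusted. The values are placeholders, not recommendations:

  ceph config set osd osd_snap_trim_sleep 0.1              # sleep between trim operations
  ceph config set osd osd_pg_max_concurrent_snap_trims 1   # concurrent object trims per PG
  ceph config set osd osd_snap_trim_priority 1             # lower trim priority relative to client I/O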

[ceph-users] new cluser ceph osd perf = 0

2024-11-28 Thread Marc
My ceph osd perf values are all 0. Do I need to enable a module for this? osd_perf_query? Where should I find this in the manuals? Or do I just need to wait?

  [@ target]# ceph osd perf
  osd  commit_latency(ms)  apply_latency(ms)
   25                   0                  0
   24                   0
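As far as I know, ceph osd perf reports commit/apply latency gathered by the OSDs themselves and stays at 0 on an idle or brand-new cluster until write I/O has happened; the osd_perf_query mgr module is a separate, optional feature. A quick sketch in case the module is wanted anyway:

  ceph mgr module enable osd_perf_query   # optional mgr module for detailed perf queries
  ceph osd perf                           # basic per-OSD commit/apply latency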

[ceph-users] Re: EC pool only for hdd

2024-11-28 Thread Eugen Block
Oh right, I always forget the reclassify command! It worked perfectly the last time I used it. Thanks! Quoting Anthony D'Atri: Apologies for the empty reply to this I seem to have sent. I blame my phone :o This process can be somewhat automated with crushtool’s reclassification directive

[ceph-users] Re: EC pool only for hdd

2024-11-28 Thread Anthony D'Atri
Apologies for the empty reply to this I seem to have sent. I blame my phone :o This process can be somewhat automated with crushtool’s reclassification directives, which can help avoid omissions or typos (/me whistles innocently): https://docs.ceph.com/en/latest/rados/operations/crush-map-edits
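A hedged sketch of the reclassify workflow from that documentation page; the map file names are placeholders and the root/class arguments must match the actual hierarchy:

  ceph osd getcrushmap -o original.map
  crushtool -i original.map --reclassify \
      --set-subtree-class default hdd \
      --reclassify-root default hdd \
      -o adjusted.map
  crushtool -i original.map --compare adjusted.map   # check that mappings stay as expected
  ceph osd setcrushmap -i adjusted.map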

[ceph-users] classes crush rules new cluster

2024-11-28 Thread Marc
It looks like it is not possible to create crush rules when you don't have hard drives active in this class. I am testing with the new Squid release and did not add SSDs yet, even though I added the class like this:

  ceph osd crush class create ssd

I can't execute this:

  ceph osd crush rule create-replicated repl
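For context, the rule creation that fails here typically looks like the sketch below; it needs the device class to actually exist in the CRUSH map (i.e. at least one OSD carries it), which is what the dummy-OSD workaround in the reply further up in this digest gets around:

  # replicated rule restricted to a device class: <name> <root> <failure-domain> <class>
  ceph osd crush rule create-replicated replicated_ssd default host ssd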

[ceph-users] 2024-11-28 Perf Meeting Cancelled

2024-11-28 Thread Matt Vandermeulen
Hi folks, the perf meeting for today will be cancelled for US Thanksgiving! As a heads up, next week will also be cancelled for Cephalocon. Thanks, Matt

[ceph-users] rgw multisite excessive data usage on secondary zone

2024-11-28 Thread Adam Prycki
Hi, I've just configured second zones for 2 of our Ceph S3 deployments and I've noticed that after the initial sync the secondary zone data pools are much bigger than the ones on the master zones. My setup consists of a main zone, an archive zone and a sync policy which configures directional sync from main zone t
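A few read-only commands that might help narrow down where the extra space on the secondary goes; the bucket name is a placeholder:

  radosgw-admin sync status                           # overall replication state in this zone
  radosgw-admin bucket sync status --bucket=<bucket>  # per-bucket view
  radosgw-admin gc list --include-all | head          # pending garbage collection, if any
  ceph df detail                                      # compare data pool usage against the master zone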

[ceph-users] Re: Squid: deep scrub issues

2024-11-28 Thread Nmz
Hello, Can you try to set 'ceph config set osd osd_mclock_profile high_recovery_ops' and see how it affects you? For me, deep scrub of some PGs ran for about 20h. After I gave it more priority, 1-2 hours were enough to finish. - Original Message - From: Laimis Juzeliūnas To
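Spelled out as commands, assuming the mClock scheduler is in use; remember to switch back to the previous profile once the scrubs have caught up:

  ceph config set osd osd_mclock_profile high_recovery_ops   # favour background/recovery ops over client I/O
  ceph config get osd osd_mclock_profile                     # confirm the active profile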

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-28 Thread Igor Fedotov
Hi Frederic, here is an overview of the case when BlueFS is unable to allocate more space at the main/shared device albeit free space is available. Below I'm talking about stuff that existed before fixing https://tracker.ceph.com/issues/53466. First of all - BlueFS's minimal allocation unit for shared
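For readers hitting the same issue, a hedged sketch of how one might inspect BlueFS usage and main-device fragmentation with ceph-bluestore-tool; the OSD must be stopped first, the path is an example, and the exact set of subcommands varies somewhat between releases:

  systemctl stop ceph-osd@0                                                        # or stop the container under cephadm
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 bluefs-bdev-sizes            # space BlueFS holds per device
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 --allocator block free-score # rough fragmentation score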

[ceph-users] Re: nfs-ganesha 5 changes

2024-11-28 Thread Marc
> > In my old environment I have simple nfs-ganesha export like this, which > is sufficient and mounts. > > EXPORT { > Export_Id = 200; > Path = /backup; > Pseudo = /backup; > FSAL { Name = CEPH; Filesystem = ""; User_Id = > "cephfs..bakup"; Secret_Access_Ke

[ceph-users] nfs-ganesha 5 changes

2024-11-28 Thread Marc
In my old environment I have a simple nfs-ganesha export like this, which is sufficient and mounts.

  EXPORT {
      Export_Id = 200;
      Path = /backup;
      Pseudo = /backup;
      FSAL {
          Name = CEPH;
          Filesystem = "";
          User_Id = "cephfs..bakup";
          Secret_Access_Key = "x==";
      }
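For comparison, a fuller export block of the same shape; the filesystem and cephx user names below are placeholders, and whether nfs-ganesha 5 needs anything beyond this is exactly the open question here:

  EXPORT {
      Export_Id = 200;
      Path = /backup;
      Pseudo = /backup;
      Access_Type = RW;
      Protocols = 4;
      Squash = No_Root_Squash;
      FSAL {
          Name = CEPH;
          Filesystem = "cephfs";      # placeholder filesystem name
          User_Id = "nfs.backup";     # placeholder cephx user
          Secret_Access_Key = "...";  # key omitted
      }
  }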

[ceph-users] Re: Migrated to cephadm, rgw logs to file even when rgw_ops_log_rados is true

2024-11-28 Thread Paul JURCO
Hi Eugen, Yes, I have played around with some of them, the most obvious ones. They are all false by default:

  :~# ceph-conf -D | grep syslog
  clog_to_syslog = false
  clog_to_syslog_facility = default=daemon audit=local0
  clog_to_syslog_level = info
  err_to_syslog = false
  log_to_syslog = false
  mon_c
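A hedged sketch of the ops-log-specific settings that might be worth checking as well; the option names are from memory and the file-path option only exists on newer releases:

  ceph config get client.rgw rgw_enable_ops_log     # is the ops log enabled at all?
  ceph config get client.rgw rgw_ops_log_rados      # ship ops log entries to the RADOS log pool
  ceph config get client.rgw rgw_ops_log_file_path  # if non-empty, ops are (also) written to this file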

[ceph-users] Re: Migrated to cephadm, rgw logs to file even when rgw_ops_log_rados is true

2024-11-28 Thread Eugen Block
I haven't played with rgw_ops yet, but have you looked at the various syslog config options?

  ceph config ls | grep syslog
  log_to_syslog
  err_to_syslog
  clog_to_syslog
  clog_to_syslog_level
  clog_to_syslog_facility
  mon_cluster_log_to_syslog
  mon_cluster_log_to_syslog_level
  mon_cluster_log_to_syslog_f