[ceph-users] Ceph orchestrator not refreshing device list

2024-09-25 Thread Bob Gibson
Hi, We recently converted a legacy cluster running Quincy v17.2.7 to cephadm. The conversion went smoothly and left all osds unmanaged by the orchestrator as expected. We’re now in the process of converting the osds to be managed by the orchestrator. We successfully converted a few of them, but
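A quick way to nudge the orchestrator when the device inventory looks stale (a rough sketch; the hostname is a placeholder, and failing the mgr is only a workaround, not a fix):

# force a fresh inventory scan instead of waiting for the cached result
ceph orch device ls --refresh

# verify cephadm can still reach the host whose devices are missing
ceph cephadm check-host <hostname>

# last resort: restart the active mgr to reset the cephadm module state
ceph mgr fail
ceph orch device ls --refresh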

[ceph-users] Re: Backup strategies for rgw s3

2024-09-25 Thread Adam Prycki
Yes, I know. It's just that I would need to define a zone-wide default lifecycle. For example, the archive zone stores 30 days of object versions unless specified otherwise. Is there a way to do it? As far as I know, the lifecycle you linked is configured per bucket. As a small cloud provider we cannot real
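For reference, a per-bucket noncurrent-version expiration of 30 days looks roughly like this with the aws CLI (endpoint and bucket name are made up), and it still has to be applied bucket by bucket, which is exactly the limitation described above. With lc.json containing:

{
  "Rules": [
    {
      "ID": "expire-noncurrent-30d",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
    }
  ]
}

apply it with:

aws --endpoint-url https://rgw.example.com s3api put-bucket-lifecycle-configuration \
    --bucket example-bucket --lifecycle-configuration file://lc.json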

[ceph-users] Re: Backup strategies for rgw s3

2024-09-25 Thread Joachim Kraftmayer
Hi Adam, we started a GitHub project for s3/SWIFT synchronization, backup, migration and more use cases. You can also use it in combination with backup solutions. https://github.com/clyso/chorus Joachim joachim.kraftma...@clyso.com www.clyso.com Hohenzollernstr. 27, 80801 Munich Utting

[ceph-users] Re: Backup strategies for rgw s3

2024-09-25 Thread Shilpa Manjrabad Jagannath
Starting from Quincy, you can define lifecycle rules that execute on the Archive zone alone by specifying a flag; see https://tracker.ceph.com/issues/53361 On Wed, Sep 25, 2024 at 7:59 AM Adam Prycki wrote: > Hi, > > I'm currently working on a project which requires us to back up 2 > separate s3
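If I read that tracker right, the flag goes into the rule's filter, so a rule that only runs on the archive zone would look something like the following (bucket name is a placeholder, and s3cmd is just one client that accepts raw lifecycle XML). With archive-lc.xml containing:

<LifecycleConfiguration>
  <Rule>
    <ID>archive-only-expire-30d</ID>
    <Status>Enabled</Status>
    <Filter>
      <ArchiveZone />
    </Filter>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>30</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>

apply it with:

s3cmd setlifecycle archive-lc.xml s3://example-bucket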

[ceph-users] Re: Backup strategies for rgw s3

2024-09-25 Thread Tim Holloway
Well, using Ceph as its own backup system has its merits, and I've little doubt something could be cooked up, but another alternative would be to use a true backup system. In my particular case, I use the Bacula backup system product. It's not the most polished thing around, but it is a full-featu

[ceph-users] Re: Backup strategies for rgw s3

2024-09-25 Thread Burkhard Linke
Hi, On 9/25/24 16:57, Adam Prycki wrote: Hi, I'm currently working on a project which requires us to back up 2 separate s3 zones/realms and retain them for a few months. Requirements were written by someone who doesn't know ceph rgw capabilities. We have to do incremental and full backups. Each ty

[ceph-users] Re: Mds daemon damaged - assert failed

2024-09-25 Thread Kyriazis, George
> On Sep 25, 2024, at 1:05 AM, Eugen Block wrote: > > Great that you got your filesystem back. > >> cephfs-journal-tool journal export >> cephfs-journal-tool event recover_dentries summary >> >> Both failed > > Your export command seems to be missing the output file, or was it not the > exa
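For reference, the documented disaster-recovery forms of those commands take an output file for the export and a rank selector, roughly (filesystem name is a placeholder):

# back up the journal to a file before modifying anything
cephfs-journal-tool --rank=<fs_name>:0 journal export /root/mds0-journal-backup.bin

# replay dentries from the journal into the metadata store
cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary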

[ceph-users] Backup strategies for rgw s3

2024-09-25 Thread Adam Prycki
Hi, I'm currently working on a project which requires us to back up 2 separate s3 zones/realms and retain them for a few months. Requirements were written by someone who doesn't know ceph rgw capabilities. We have to do incremental and full backups. Each type of backup has a separate retention period

[ceph-users] Re: Quincy: osd_pool_default_crush_rule being ignored?

2024-09-25 Thread Florian Haas
On 25/09/2024 15:21, Eugen Block wrote: Hm, do you have any local ceph.conf on your client which has an override for this option as well? No. By the way, how do you bootstrap your cluster? Is it cephadm based? This one is bootstrapped (on Quincy) with ceph-ansible. And when the "ceph confi

[ceph-users] Re: Quincy: osd_pool_default_crush_rule being ignored?

2024-09-25 Thread Eugen Block
I redeployed a different single-node cluster with Quincy 17.2.6 and it works there as well. Quoting Eugen Block: Hm, do you have any local ceph.conf on your client which has an override for this option as well? By the way, how do you bootstrap your cluster? Is it cephadm based? Quoting

[ceph-users] Re: Quincy: osd_pool_default_crush_rule being ignored?

2024-09-25 Thread Eugen Block
Hm, do you have any local ceph.conf on your client which has an override for this option as well? By the way, how do you bootstrap your cluster? Is it cephadm based? Quoting Florian Haas: Hi Eugen, I've just torn down and completely respun my cluster, on 17.2.7. Recreated my CRUSH rule
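A few ways to check where the value is (or isn't) coming from, assuming a mon named mon.a (adjust to taste):

# value stored in the cluster's central config database
ceph config get mon osd_pool_default_crush_rule

# value the running daemon actually uses, including any local overrides
ceph config show mon.a osd_pool_default_crush_rule

# anything set in a local ceph.conf on the client
grep -i crush /etc/ceph/ceph.conf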

[ceph-users] Re: Quincy: osd_pool_default_crush_rule being ignored?

2024-09-25 Thread Florian Haas
Hi Eugen, I've just torn down and completely respun my cluster, on 17.2.7. Recreated my CRUSH rule, set osd_pool_default_crush_rule to its rule_id, 1. Created a new pool. That new pool still has crush_rule 0, just as before and contrary to what you're seeing. I'm a bit puzzled, because I'm
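For comparison, a minimal way to double-check which rule a fresh pool actually picks up (pool name is a placeholder):

ceph osd crush rule dump | grep -E 'rule_id|rule_name'
ceph config get mon osd_pool_default_crush_rule
ceph osd pool create check-pool 32
ceph osd pool get check-pool crush_rule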

[ceph-users] Re: [EXTERNAL] Re: Bucket Notifications v2 & Multisite Redundancy

2024-09-25 Thread Alex Hussein-Kershaw (HE/HIM)
I failed to reproduce this issue in Reef 18.2.4. So I think this is a Squid regression of the notification v1 functionality. In Reef, all the notification and topic config remains on the site it was created on. I raised: Bug #68227: rgw/notifications: notifications and topics appear on multisi

[ceph-users] Re: [EXTERNAL] Re: Bucket Notifications v2 & Multisite Redundancy

2024-09-25 Thread Alex Hussein-Kershaw (HE/HIM)
Sadly I think I've found another issue here that prevents my use case even with notifications_v2 disabled. My repro scenario: * Deployed a fresh multisite Ceph cluster at 19.1.1 (siteA is the master, siteB is non-master) * Immediately disabled notifications_v2 and updated/committed the period.
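A sketch of how that disable step typically looks, assuming the zonegroup feature is named notification_v2 and using a placeholder zonegroup name:

radosgw-admin zonegroup modify --rgw-zonegroup=<zonegroup> --disable-feature=notification_v2
radosgw-admin period update --commit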

[ceph-users] cephfs +inotify = caps problem?

2024-09-25 Thread Burkhard Linke
Hi, we are currently trying to debug and understand a problem with cephfs and inotify watchers. A user is running Visual Studio Code with a workspace on a cephfs mount. VSC uses inotify for monitoring files and directories in the workspace: root@cli:~# ./inotify-info --
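One way to watch how many caps each client session is holding while VSC has the workspace open (the MDS daemon name is a placeholder):

# list client sessions with their cap counts on the active MDS
ceph tell mds.<daemon_name> session ls | grep -E '"id"|"num_caps"'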

[ceph-users] Re: Quincy: osd_pool_default_crush_rule being ignored?

2024-09-25 Thread Eugen Block
Still works: quincy-1:~ # ceph osd crush rule create-simple simple-rule default osd quincy-1:~ # ceph osd crush rule dump simple-rule { "rule_id": 4, ... quincy-1:~ # ceph config set mon osd_pool_default_crush_rule 4 quincy-1:~ # ceph osd pool create test-pool6 pool 'test-pool6' created quin

[ceph-users] Re: Quincy: osd_pool_default_crush_rule being ignored?

2024-09-25 Thread Florian Haas
On 25/09/2024 09:05, Eugen Block wrote: Hi, for me this worked in a 17.2.7 cluster just fine Huh, interesting! (except for erasure-coded pools). Okay, *that* bit is expected. https://docs.ceph.com/en/quincy/rados/configuration/pool-pg-config-ref/#confval-osd_pool_default_crush_rule does
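For anyone following along: erasure-coded pools bypass that default because pool creation derives a dedicated rule from the EC profile, roughly like this (profile and pool names are placeholders):

ceph osd erasure-code-profile set myprofile k=4 m=2
ceph osd pool create ecpool 32 32 erasure myprofile
ceph osd pool get ecpool crush_rule   # reports the auto-created rule, not the default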

[ceph-users] Re: Quincy: osd_pool_default_crush_rule being ignored?

2024-09-25 Thread Eugen Block
Hi, for me this worked in a 17.2.7 cluster just fine (except for erasure-coded pools). quincy-1:~ # ceph osd crush rule create-replicated new-rule default osd hdd quincy-1:~ # ceph config set mon osd_pool_default_crush_rule 1 quincy-1:~ # ceph osd pool create test-pool2 pool 'test-pool2' cr
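And to confirm what the new pool ended up with, something like:

ceph osd pool get test-pool2 crush_rule   # should report new-rule if the default took effect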