[ceph-users] Re: Converting/Migrating EC pool to a replicated pool

2024-07-18 Thread Huy Nguyen
Hi, I know a third option, which is to create a secondary zone mapped to a replicated pool. The data will be replicated from the primary zone; after that, switch the master zone and the migration is done. Zero downtime. This is possible in theory, but I cannot make it work when trying to set up 2 zones
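
For reference, a rough sketch of that third option; the zone, pool, and endpoint names are placeholders and the system key/secret are assumed, so treat this as an outline rather than a recipe:

  # create a replicated pool and a secondary zone whose placement uses it
  ceph osd pool create default.rgw.repl.buckets.data 128 128 replicated
  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=zone-replicated \
      --endpoints=http://rgw2:8080 --access-key=<system-key> --secret=<system-secret>
  radosgw-admin zone placement modify --rgw-zone=zone-replicated \
      --placement-id=default-placement --data-pool=default.rgw.repl.buckets.data
  radosgw-admin period update --commit
  # deploy RGW daemons for the new zone and let data sync catch up, then promote it
  radosgw-admin zone modify --rgw-zone=zone-replicated --master --default
  radosgw-admin period update --commit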

[ceph-users] Cephadm has a small wart

2024-07-18 Thread Tim Holloway
I've been setting up a cookbook OSD creation process, and as I walked through the various stages I noted that the /etc/redhat-release file said "CentOS Stream 8". I panicked, because IBM has pulled the Ceph archives for CentOS 8, so I nuked the machine and rebuilt it with more attention to detail.
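
Worth noting when reading this: on a cephadm deployment the /etc/redhat-release you see inside a Ceph container belongs to the container image, not the host. A quick way to compare the two, assuming cephadm is in use:

  cat /etc/redhat-release                      # the host's OS
  cephadm shell -- cat /etc/redhat-release     # the OS inside the Ceph container image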

[ceph-users] Re: How to detect condition for offline compaction of RocksDB?

2024-07-18 Thread Joshua Baergen
Hey Aleksandr, > In Pacific we have RocksDB column families. Would it be helpful in the > case of many tombstones to reshard our old OSDs? > Do you think it can help without rocksdb_cf_compact_on_deletion? > Or maybe it can help much more with rocksdb_cf_compact_on_deletion? Ah, I'm
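
For anyone following the thread: the compact-on-deletion behaviour discussed here is exposed as OSD config options in recent releases. The option names below are from memory and should be treated as assumptions; verify them with ceph config help before relying on them:

  ceph config help bluestore_rocksdb_cf_compact_on_deletion
  ceph config set osd bluestore_rocksdb_cf_compact_on_deletion true
  # related tuning knobs (window size / deletion trigger), names to be verified per release
  ceph config help bluestore_rocksdb_cf_compact_on_deletion_sliding_window
  ceph config help bluestore_rocksdb_cf_compact_on_deletion_trigger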

[ceph-users] Re: Small issue with perms

2024-07-18 Thread Albert Shih
On 18/07/2024 at 11:33:35+0200, David C. wrote: > you can test on a host and restart it, to validate that everything is fine > (with ceph orch host maintenance enter [or noout]). > > But yes, you should be able to do it without breaking anything. So, just for those who have the same question as me

[ceph-users] Re: How to detect condition for offline compaction of RocksDB?

2024-07-18 Thread Rudenko Aleksandr
Josh, thanks! I will read more about LSM in RocksDB, thanks! Can I ask one last question? We have a lot of "old" SSD OSDs in the index pool which were deployed before Pacific. In Pacific we have RocksDB column families. Would it be helpful in the case of many tombstones to do resharding of
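
For anyone who wants to try the resharding mentioned here: pre-Pacific OSDs can be resharded offline with ceph-bluestore-tool. The sharding spec below is the one the BlueStore documentation lists as the default, and the OSD id and data path are examples; check both against your release and deployment type before running anything:

  systemctl stop ceph-osd@42        # or stop the daemon via the orchestrator on cephadm hosts
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-42 \
      --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
      reshard
  systemctl start ceph-osd@42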

[ceph-users] Re: pg's stuck activating on osd create

2024-07-18 Thread Dan van der Ster
Hi Richard, See here for an example of what the OSD logs show in case of this "PG overdose protection". https://tracker.ceph.com/issues/65749 Cheers, dan -- Dan van der Ster CTO Clyso GmbH p: +49 89 215252722 | a: Vancouver, Canada w: https://clyso.com | e: dan.vanders...@clyso.com On Wed, Ju
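
For context, the limit behind "PG overdose protection" is mon_max_pg_per_osd scaled by osd_max_pg_per_osd_hard_ratio. A hedged sketch of checking the current values and temporarily raising them while new OSDs backfill (revert afterwards):

  ceph config get mon mon_max_pg_per_osd
  ceph config get osd osd_max_pg_per_osd_hard_ratio
  # temporarily relax the per-OSD PG limit so the stuck PGs can activate
  ceph config set mon mon_max_pg_per_osd 500
  ceph config set osd osd_max_pg_per_osd_hard_ratio 5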

[ceph-users] Re: How to detect condition for offline compaction of RocksDB?

2024-07-18 Thread Joshua Baergen
> And my question is: we have regular compaction that does some work. Why > doesn't it help with tombstones? > Why can only offline compaction help in our case? Regular compaction will take care of any tombstones in the files that end up being compacted, and compaction, when triggered, may even f

[ceph-users] Re: Heads up: New Ceph images require x86-64-v2 and possibly a qemu config change for virtual servers

2024-07-18 Thread Bailey Allison
+1 to this, also ran into this in our lab testing. Thanks for sharing this information! Regards, Bailey > -Original Message- > From: Eugen Block > Sent: July 18, 2024 3:55 AM > To: ceph-users@ceph.io > Subject: [ceph-users] Re: Heads up: New Ceph images require x86-64-v2 and > possibly
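
For anyone hitting this on QEMU/libvirt guests: the usual fix is to stop exposing the generic qemu64 CPU model to the VM and pass through a model that implements x86-64-v2. A minimal sketch; the VM name is an example:

  # QEMU directly: use the host CPU model instead of the default qemu64
  qemu-system-x86_64 -cpu host ...
  # libvirt: set <cpu mode='host-passthrough'/> (or a named model such as Nehalem or
  # newer) in the domain XML, e.g. via
  virsh edit myvm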

[ceph-users] Re: Converting/Migrating EC pool to a replicated pool

2024-07-18 Thread Eugen Block
Hi, instead of exporting/importing single objects via rados export/import I would use 'rados cppool', although it does a linear copy of each object, so I'm not sure that's so much better... So first create a new replicated pool, 'rados cppool old new', then rename the original pool, and
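
A hedged sketch of the pool-swap approach described here; pool names are examples, client I/O should be stopped first (cppool does not pick up writes that land during the copy), and pool deletion requires mon_allow_pool_delete to be enabled:

  ceph osd pool create mypool.new 128 128 replicated
  rados cppool mypool mypool.new
  ceph osd pool rename mypool mypool.old
  ceph osd pool rename mypool.new mypool
  # once everything is verified, remove the old EC pool
  ceph osd pool delete mypool.old mypool.old --yes-i-really-really-mean-it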

[ceph-users] Re: RBD images can't be mapped anymore

2024-07-18 Thread Eugen Block
Hi, can you please provide more information? Which other flags did you set (noout should be sufficient, or just use maintenance mode)? Please share the output from: ceph osd tree, ceph osd df, ceph osd pool ls detail. Add the corresponding crush rule which applies to the affected pool. Quoting

[ceph-users] Re: cephadm rgw ssl certificate config

2024-07-18 Thread Eugen Block
Thanks, that's what I proposed to the customer as well. They also have their own CA, so it probably shouldn't be a problem to have such a certificate either. Thanks! Quoting Kai Stian Olstad: On Thu, Jul 18, 2024 at 10:49:02AM +, Eugen Block wrote: And after restarting the daemon,

[ceph-users] Re: Unable to mount with 18.2.2

2024-07-18 Thread David C.
Thank you for your research, Frédéric. We looked, and the conf files were up to date, in the form [v1:(...),v2:(...)]. I managed to reproduce the "incident": [aevoo-test - ceph-0]# ceph mon dump -f json|jq '.mons[].public_addrs' dumped monmap epoch 2 { "addrvec": [ { "type": "v2",

[ceph-users] Re: How to detect condition for offline compaction of RocksDB?

2024-07-18 Thread Rudenko Aleksandr
Hi Josh, thanks! I have one more question. I am trying to reproduce our OSD degradation due to massive lifecycle deletion, and as a next step I will try rocksdb_cf_compact_on_deletion. But I don't understand one thing. Okay, default auto-compaction can't detect tombstones which are growing, but reg
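
On triggering compaction manually while reproducing this: an online compaction can be requested per OSD, and an offline one run with the OSD stopped. A short sketch, with the OSD id and data path as examples:

  # online, via the admin interface
  ceph tell osd.7 compact
  # offline, with the OSD stopped
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-7 compact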

[ceph-users] Re: cephadm rgw ssl certificate config

2024-07-18 Thread Kai Stian Olstad
On Thu, Jul 18, 2024 at 10:49:02AM +, Eugen Block wrote: And after restarting the daemon, it seems to work. So my question is, how do you deal with per-host certificates and rgw? Any comments are appreciated. By not dealing with it, sort of. Since we run our own CA, I create one certifi
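
A minimal sketch of the single-certificate-with-SANs approach described here, assuming your own CA signs the CSR; the hostnames are examples, and -addext needs OpenSSL 1.1.1 or newer:

  openssl req -new -newkey rsa:4096 -nodes -keyout rgw.key -out rgw.csr \
      -subj "/CN=rgw.example.com" \
      -addext "subjectAltName=DNS:rgw1.example.com,DNS:rgw2.example.com,DNS:rgw.example.com"
  # sign rgw.csr with the internal CA, then concatenate key + cert (+ chain) into the
  # single file handed to 'ceph config-key set rgw/cert/<realm>.<zone> -i ...'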

[ceph-users] Re: Large amount of empty objects in unused cephfs data pool

2024-07-18 Thread Alexander Patrakov
On Thu, Jul 18, 2024 at 6:14 PM Petr Bena wrote: > > I created a cephfs using mgr dashboard, which created two pools: > cephfs.fs.meta and cephfs.fs.data > > We are using custom provisioning for user defined volumes (users provide yaml > manifests with definition of what they want) which creates

[ceph-users] cephadm rgw ssl certificate config

2024-07-18 Thread Eugen Block
Hi, I came across [1] and wanted to try to have all certificates/keys in one file. But it appears that the validation happens only against the first cert. So what I did was to concatenate all certs/keys into one file, then added that to ceph: ceph config-key set rgw/cert/rgw.realm.zone -i

[ceph-users] Re: Unable to mount with 18.2.2

2024-07-18 Thread Frédéric Nass
Hi Albert, David, I came across this: https://github.com/ceph/ceph/pull/47421 "OSDs have a config file that includes addresses for the mon daemons. We already have in place logic to cause a reconfig of OSDs if the mon map changes, but when we do we aren't actually regenerating the config so it's
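
If stale mon addresses in the daemons' config files are indeed the problem, a hedged workaround is to regenerate the minimal config and have the orchestrator rewrite it; the daemon name below is an example:

  ceph config generate-minimal-conf        # shows what the daemon config file should contain
  ceph orch daemon reconfig osd.1          # regenerate the config for one daemon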

[ceph-users] Large amount of empty objects in unused cephfs data pool

2024-07-18 Thread Petr Bena
I created a cephfs using the mgr dashboard, which created two pools: cephfs.fs.meta and cephfs.fs.data. We are using custom provisioning for user-defined volumes (users provide yaml manifests with a definition of what they want), which creates dedicated data pools for them, so cephfs.fs.data is never u
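
For context, the usual pattern for such per-volume data pools is to attach the extra pool to the filesystem and pin a directory to it with a file layout. A sketch; the filesystem name "fs", pool name, and mount path are assumptions based on the pool names above:

  ceph osd pool create cephfs.fs.data.volume1 32 32 replicated
  ceph fs add_data_pool fs cephfs.fs.data.volume1
  # pin a directory tree to the dedicated pool; new files created under it land there
  setfattr -n ceph.dir.layout.pool -v cephfs.fs.data.volume1 /mnt/cephfs/volumes/volume1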

[ceph-users] Re: Small issue with perms

2024-07-18 Thread David C.
Thanks Christian, I see the fix is in the postinst, so presumably a reboot shouldn't put "nobody" back, right? On Thu, Jul 18, 2024 at 11:44, Christian Rohmann <christian.rohm...@inovex.de> wrote: > On 18.07.24 9:56 AM, Albert Shih wrote: > >Error scraping /var/lib/ceph/crash: [Errno 13

[ceph-users] Re: Small issue with perms

2024-07-18 Thread Christian Rohmann
On 18.07.24 9:56 AM, Albert Shih wrote: Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash' There is / was a bug with the permissions for ceph-crash, see * https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/VACLBNVXTYNSXJSNXJSRAQNZHCHABDF4/
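
The workaround discussed in that thread amounts to restoring ownership to the ceph user the containers run as (uid/gid 167 in the upstream Red Hat-based images) and restarting the crash agent. A hedged sketch with the FSID and hostname as placeholders:

  chown -R 167:167 /var/lib/ceph/<FSID>/crash
  ceph orch daemon restart crash.<hostname>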

[ceph-users] Re: Small issue with perms

2024-07-18 Thread David C.
You can test on a host and restart it, to validate that everything is fine (with ceph orch host maintenance enter [or noout]). But yes, you should be able to do it without breaking anything. On Thu, Jul 18, 2024 at 11:08, Albert Shih wrote: > On 18/07/2024 at 11:00:56+0200, Albert Shih wrote
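
For anyone following along, the maintenance-mode dance looks roughly like this; the hostname is an example, and setting noout alone also works:

  ceph orch host maintenance enter host1   # or: ceph osd set noout
  # ...restart / reboot the host...
  ceph orch host maintenance exit host1    # or: ceph osd unset noout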

[ceph-users] Re: RGW Lifecycle Problem (Reef)

2024-07-18 Thread Chris Palmer
Anyone got any ideas why this one lifecycle rule never runs automatically? On 15/07/2024 13:32, Chris Palmer wrote: Reef 18.2.2, package install on Centos 9. This is a very straightforward production cluster, 2 RGW hosts, no multisite. 4 buckets have lifecycle policies: $ radosgw-admin lc l
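
Not an answer to why the rule never fires, but lifecycle processing can be kicked off manually to watch the result; the bucket name is an example, and the --bucket filter may not exist on older releases:

  radosgw-admin lc list
  radosgw-admin lc get --bucket=mybucket
  radosgw-admin lc process --bucket=mybucket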

[ceph-users] Re: Small issue with perms

2024-07-18 Thread Albert Shih
On 18/07/2024 at 11:00:56+0200, Albert Shih wrote: > On 18/07/2024 at 10:56:33+0200, David C. wrote: > Hi, > > > Your ceph processes are in containers. > > Yes, I know, but in my install process I just install > > ceph-common > ceph-base > > then cephadm via wget. > > I didn't

[ceph-users] Re: Small issue with perms

2024-07-18 Thread Albert Shih
On 18/07/2024 at 10:56:33+0200, David C. wrote: Hi, > Your ceph processes are in containers. Yes, I know, but in my install process I just install ceph-common and ceph-base, then cephadm via wget. I didn't manually install the other packages like: ii ceph-fuse

[ceph-users] Re: Small issue with perms

2024-07-18 Thread David C.
Your ceph processes are in containers. You don't need the ceph-* packages on the host hosting the containers. Regards, *David CASIER* *Direct line: +33(0) 9 72 61 98 29*

[ceph-users] Re: Small issue with perms

2024-07-18 Thread Albert Shih
On 18/07/2024 at 10:27:09+0200, David C. wrote: Hi, > > perhaps a conflict with the udev rules of locally installed packages. > > Try uninstalling ceph-* Sorry... not sure I understand: you want me to uninstall ceph? Regards. JAS -- Albert SHIH 🦫 🐸 Observatoire de Paris France Local time

[ceph-users] Re: Small issue with perms

2024-07-18 Thread David C.
Hi Albert, perhaps a conflict with the udev rules of locally installed packages. Try uninstalling ceph-*. On Thu, Jul 18, 2024 at 09:57, Albert Shih wrote: > Hi everyone. > > After my upgrade from 17.2.7 to 18.2.2, I noticed that after each restart I > get a permissions issue on > > /var/lib

[ceph-users] [RGW] Setup 2 zones within a cluster does not sync data

2024-07-18 Thread Huy Nguyen
Hi, currently I'm testing the replication ability between 2 zones in the same cluster, but only metadata is synced, not the data. I checked the endpoints, system_key, ... all good. If anyone has any idea, please guide me to resolve this situation. Thanks. The radosgw shows this log on both sides, primary
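
A few commands that usually help narrow down where metadata-only sync stops; the zone name is an example:

  radosgw-admin sync status
  radosgw-admin data sync status --source-zone=zone-secondary
  radosgw-admin sync error list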

[ceph-users] Small issue with perms

2024-07-18 Thread Albert Shih
Hi everyone. After my upgrade from 17.2.7 to 18.2.2, I noticed that each time I restart I get a permissions issue on /var/lib/ceph/FSID/crash: after the restart the owner/group is set to nobody and I get Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash' I