[ceph-users] Re: Convert mon kv backend to rocksdb

2022-10-04 Thread Boris Behrens
Cheers Reed, just saw this and checked my own cluster. I also had one mon that was still running on leveldb. I just removed the mon, pulled the new monmap, and redeployed it. After that all was fine. Thanks for pinging the ML so I saw it :D Boris # assuming there is only one mon and you are connected to the host t
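
A rough sketch of that redeploy procedure, assuming a non-cephadm mon with ID "a" (the mon ID, paths, and keyring location are placeholders, not the exact commands from the thread):

  # remove the mon so its old leveldb store is abandoned
  ceph mon remove a
  # fetch the current monmap and recreate the mon store (rocksdb is the default backend)
  ceph mon getmap -o /tmp/monmap
  rm -rf /var/lib/ceph/mon/ceph-a
  ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /path/to/mon/keyring
  systemctl start ceph-mon@a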

[ceph-users] Re: How to remove remaining bucket index shard objects

2022-10-04 Thread Konstantin Shalygin
Hi, > On 4 Oct 2022, at 03:36, Yuji Ito (伊藤 祐司) wrote: > > After removing the index objects, I ran deep-scrub for all PGs of the index > pool. However, the problem wasn't resolved. It seems you simply have large OMAPs, not 'bogus shard' objects. Try looking at the PG stats with 'show_osd_pool_pg_
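
A hedged example of that kind of inspection (pool and object names are placeholders): ceph pg ls-by-pool reports per-PG stats including omap usage, and rados listomapkeys counts the omap entries on a single index object:

  ceph pg ls-by-pool default.rgw.buckets.index
  rados -p default.rgw.buckets.index listomapkeys .dir.<bucket-instance-id>.<shard> | wc -l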

[ceph-users] Re: 15.2.17: RGW deploy through cephadm exits immediately with exit code 5/NOTINSTALLED

2022-10-04 Thread Michel Jouvin
Hi, Conclusion of the story... It seems I did something wrong when recreating .rgw.root, ending up with a zonegroup "default" and a zone "default" being created (in addition to the zonegroup/zone I explicitly created). I guess it is because I forgot a --default option when creating the zone
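
For reference, a minimal sketch of creating a zonegroup and zone with the --default flags set (names are placeholders; flags follow the standard radosgw-admin multisite commands):

  radosgw-admin zonegroup create --rgw-zonegroup=mygroup --master --default
  radosgw-admin zone create --rgw-zonegroup=mygroup --rgw-zone=myzone --master --default
  radosgw-admin period update --commit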

[ceph-users] Trying to add NVMe CT1000P2SSD8

2022-10-04 Thread Murilo Morais
Good morning, people. I'm having trouble adding 4 NVMe drives (CT1000P2SSD8). On the first attempt I hit the error "Cannot use /dev/nvme0n1: device is rejected by filter config", caused by LVM. After commenting out the filters this error no longer appeared and Ceph manages to add the OSDs, but the
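
A sketch of the kind of lvm.conf change involved (the exact regex is an assumption and depends on what the existing filter rejects; this is not the poster's actual config):

  # /etc/lvm/lvm.conf
  devices {
      # accept NVMe namespaces explicitly, reject everything else
      filter = [ "a|^/dev/nvme[0-9]+n[0-9]+|", "r|.*|" ]
  }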

[ceph-users] Versioning of objects in the archive zone

2022-10-04 Thread Beren beren
Hi, Is it possible to manage the number of versions of objects in the archive zone? (https://docs.ceph.com/en/latest/radosgw/archive-sync-module/) If I can't manage the number of versions, then sooner or later the versions will kill the entire cluster :(

[ceph-users] Red Hat’s Ceph team is moving to IBM

2022-10-04 Thread Josh Durgin
Today IBM and Red Hat announced some big news related to Ceph: the Ceph storage team at Red Hat is moving to IBM [1]. This is a joint IBM/Red Hat decision, and represents a large investment in the continued growth and health of Ceph and its community. This is great news for upstream Ceph! Our proj

[ceph-users] Re: Versioning of objects in the archive zone

2022-10-04 Thread Matt Benjamin
Hi, Please review https://github.com/ceph/ceph/pull/46928 thanks, Matt On Tue, Oct 4, 2022 at 10:37 AM Beren beren wrote: > Hi, > Is it possible to manage the number of versions of objects in the archive > zone ? (https://docs.ceph.com/en/latest/radosgw/archive-sync-module/) > > If I can't ma
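
As background, a hedged example of the standard S3 mechanism for expiring noncurrent versions on a versioned bucket (bucket name and endpoint are placeholders; whether this applies on an archive zone is exactly the question in this thread, and the PR above is the place to check):

  aws --endpoint-url http://rgw.example.com s3api put-bucket-lifecycle-configuration \
      --bucket mybucket \
      --lifecycle-configuration '{"Rules":[{"ID":"trim-old-versions","Status":"Enabled","Filter":{},"NoncurrentVersionExpiration":{"NoncurrentDays":30}}]}'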

[ceph-users] How to report a potential security issue

2022-10-04 Thread Vladimir Brik
Hello, I think I may have run into a bug in cephfs that has security implications. I am not sure it's a good idea to send the details to the public mailing list or to create a public ticket for it. How should I proceed? Thanks, Vlad

[ceph-users] Re: How to report a potential security issue

2022-10-04 Thread Ramana Krisna Venkatesh Raja
On Tue, Oct 4, 2022 at 5:09 PM Ramana Krisna Venkatesh Raja wrote: > > On Tue, Oct 4, 2022 at 5:01 PM Vladimir Brik > wrote: > > > > Hello > > > > I think I may have run into a bug in cephfs that has > > security implications. I am not sure it's a good idea to > > send the details to the public m

[ceph-users] Add a removed OSD back into cluster

2022-10-04 Thread Samuel Taylor Liston
I did a dumb thing and removed OSDs across a failure domain, and as a result have 4 remapped+incomplete PGs. The data is still on the drives. Is there a way to add one of these OSDs back into the cluster? I’ve made an attempt to re-add the keyring using ‘ceph auth’ and
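
A hedged sketch of re-activating an OSD whose data is intact (OSD id, weight, and host are placeholders; the auth caps follow the manual-deployment docs):

  # restart the OSD daemon from its still-intact LVM volume
  ceph-volume lvm activate --all
  # re-register its key and put it back in the CRUSH map
  ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-12/keyring
  ceph osd crush add osd.12 1.0 host=myhost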

[ceph-users] Re: multisite replication issue with Quincy

2022-10-04 Thread Jane Zhu (BLOOMBERG/ 120 PARK)
We are able to consistently reproduce the replication issue now. The following are the environment and the steps to reproduce it. Please see the details in the open tracker: https://tracker.ceph.com/issues/57562?next_issue_id=57266&prev_issue_id=55179#note-7. Any ideas of what's going on and ho
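
For readers hitting similar symptoms, the usual first diagnostics are the standard radosgw-admin sync commands (bucket name is a placeholder; these are generic checks, not the reproducer from the tracker):

  radosgw-admin sync status
  radosgw-admin bucket sync status --bucket=mybucket
  radosgw-admin sync error list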

[ceph-users] Re: How to remove remaining bucket index shard objects

2022-10-04 Thread 伊藤 祐司
Hi, Thank you for your reply. Yesterday I ran a compaction following the Red Hat document below (and then ran deep scrub again). ref. https://access.redhat.com/solutions/5173092 The large omap objects warning now appears to be resolved. However, based on our observations so far, it could reocc
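
For reference, compaction can be triggered online per OSD, or offline with the OSD stopped (the OSD id and data path are placeholders):

  # online, via the admin socket
  ceph tell osd.12 compact
  # offline, against the stopped OSD's RocksDB
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact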