Cheers Reed,
I just saw this and checked on my own cluster. I also had one mon that was still running on leveldb.
I removed that mon, pulled the new monmap, and redeployed it. After that
everything was fine.
Thanks for pinging the ML, that's how I saw it :D
Boris
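For reference, a rough sketch of those steps (the mon id "a", the data paths, and the keyring path are placeholders for a non-cephadm setup; the remaining mons need to keep quorum while this one is redone):

# check which backend a mon store uses
cat /var/lib/ceph/mon/ceph-a/kv_backend

# drop the leveldb mon and wipe its data dir
ceph mon remove a
rm -rf /var/lib/ceph/mon/ceph-a

# pull the current monmap and rebuild the mon store (rocksdb is the default backend now)
ceph mon getmap -o /tmp/monmap
ceph-mon -i a --mkfs --monmap /tmp/monmap --keyring /path/to/mon.keyring
systemctl start ceph-mon@a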
# assuming there is only one mon and you are connected to the host
t
Hi,
> On 4 Oct 2022, at 03:36, Yuji Ito (伊藤 祐司) wrote:
>
> After removing the index objects, I ran deep-scrub for all PGs of the index
> pool. However, the problem wasn't resolved.
It seems you just have large OMAPs, not 'bogus shard' objects. Try looking at the
PG stats with 'show_osd_pool_pg_
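For what it's worth, a few generic commands for chasing large omap warnings on an index pool; the pool name, the use of jq, and the config option shown are assumptions about a fairly standard setup:

# which objects/PGs triggered the warning
ceph health detail

# per-bucket index shard usage on the RGW side
radosgw-admin bucket limit check

# the key-count threshold that trips the warning during deep scrub
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold

# re-run a deep scrub on every PG of the index pool so the stats get refreshed
for pg in $(ceph pg ls-by-pool default.rgw.buckets.index -f json | jq -r '.pg_stats[].pgid'); do
    ceph pg deep-scrub "$pg"
done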
Hi,
Conclusion of the story... It seems I did something wrong when
recreating .rgw.root and ended up with a zonegroup "default" and a zone
"default" being created (in addition to the zonegroup/zone I explicitly
created). I guess it is because I forgot the --default option when
creating the zone
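For completeness, a sketch of the kind of command sequence involved; the realm/zonegroup/zone names are placeholders, and the cleanup at the end assumes the spurious defaults are safe to delete in your setup:

radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup create --rgw-zonegroup=myzg --master --default
radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=myzone --master --default
radosgw-admin period update --commit

# remove the unwanted "default" zone/zonegroup if they were created as a side effect
radosgw-admin zone delete --rgw-zone=default
radosgw-admin zonegroup delete --rgw-zonegroup=default
radosgw-admin period update --commit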
Good morning, people.
I'm having trouble adding 4 NVMe drives (CT1000P2SSD8). On the first attempt
I hit the error "Cannot use /dev/nvme0n1: device is rejected by
filter config", which comes from LVM. After commenting out the filters the error no
longer appeared and Ceph managed to add the OSDs, but the
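Rather than commenting the LVM filters out completely, an explicit accept rule for the new drives may be enough. A sketch, assuming the devices really are /dev/nvme0n1 .. /dev/nvme3n1 and that the rest of the existing filter still has to accept the other OSD disks:

# /etc/lvm/lvm.conf
devices {
    # accept the new NVMe drives explicitly; merge this with the existing rules
    # rather than replacing them, and only add a trailing "r|.*|" if you really
    # want LVM to ignore every other device (including current OSD disks)
    filter = [ "a|^/dev/nvme[0-3]n1$|", "a|.*|" ]
}

Afterwards 'ceph-volume inventory' should list the drives as available again.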
Hi,
Is it possible to manage the number of versions of objects in the archive
zone? (https://docs.ceph.com/en/latest/radosgw/archive-sync-module/)
If I can't manage the number of versions, then sooner or later the versions
will kill the entire cluster :(
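Not an authoritative answer, but one thing worth testing is an S3 lifecycle rule with NoncurrentVersionExpiration applied to the bucket on the archive zone; whether the archive zone honors lifecycle at all depends on the Ceph release, so treat this as an untested sketch (bucket name and endpoint are placeholders):

# lifecycle.json
{
  "Rules": [
    {
      "ID": "trim-old-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}

# apply it against the archive zone's RGW endpoint
aws s3api put-bucket-lifecycle-configuration \
  --endpoint-url http://archive-rgw.example.com:8080 \
  --bucket mybucket \
  --lifecycle-configuration file://lifecycle.json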
Today IBM and Red Hat announced some big news related to Ceph: the
Ceph storage team at Red Hat is moving to IBM [1]. This is a joint
IBM/Red Hat decision, and represents a large investment in the
continued growth and health of Ceph and its community.
This is great news for upstream Ceph! Our proj
Hi,
Please review https://github.com/ceph/ceph/pull/46928
thanks,
Matt
On Tue, Oct 4, 2022 at 10:37 AM Beren beren wrote:
> Hi,
> Is it possible to manage the number of versions of objects in the archive
> zone ? (https://docs.ceph.com/en/latest/radosgw/archive-sync-module/)
>
> If I can't manage the number of versions, then sooner or later the versions
> will kill the entire cluster :(
Hello
I think I may have run into a bug in cephfs that has
security implications. I am not sure it's a good idea to
send the details to the public mailing list or create a
public ticket for it.
How should I proceed?
Thanks
Vlad
On Tue, Oct 4, 2022 at 5:09 PM Ramana Krisna Venkatesh Raja
wrote:
>
> On Tue, Oct 4, 2022 at 5:01 PM Vladimir Brik
> wrote:
> >
> > Hello
> >
> > I think I may have run into a bug in cephfs that has
> > security implications. I am not sure it's a good idea to
> > send the details to the public mailing list or create a
> > public ticket for it.
I did a dumb thing and removed OSDs across a failure domain, and as a
result I have 4 remapped+incomplete PGs. The data is still on the drives. Is
there a way to add one of these OSDs back into the cluster?
I’ve attempted to re-add the keyring using ‘ceph auth’
and
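In case it helps, a rough sketch of re-introducing an OSD whose data is still intact (OSD id 12, the weight and the host name are placeholders; this assumes the OSD was only removed from auth/CRUSH and not zapped, and that it is not a cephadm-managed daemon):

# restore the OSD's cephx key from its intact data directory
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-12/keyring

# put the OSD back into the CRUSH map under its old host
ceph osd crush add osd.12 1.0 host=myhost

# re-activate the OSD from the existing LVM volumes and start the daemon
ceph-volume lvm activate --all
systemctl start ceph-osd@12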
We are able to consistently reproduce the replication issue now.
The following are the environment and the steps to reproduce it.
Please see the details in the open tracker:
https://tracker.ceph.com/issues/57562?next_issue_id=57266&prev_issue_id=55179#note-7.
Any ideas of what's going on and ho
Hi,
Thank you for your reply. Yesterday I ran compaction according to the following
Red Hat document (and ran a deep scrub again).
ref. https://access.redhat.com/solutions/5173092
The large omap objects warning now looks to be resolved. However,
based on our observations so far, it could reoccur
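For reference, the usual compaction commands look roughly like this (the OSD id and PG id are placeholders; the offline variant requires the OSD to be stopped first):

# online compaction of an OSD's RocksDB
ceph tell osd.12 compact

# offline compaction, with the OSD stopped
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact

# re-run a deep scrub on an affected PG afterwards
ceph pg deep-scrub 7.1f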