Thanks Eugen, massive help. Working on identifying and cleaning up old/empty
buckets now.
On Wed, 2023-08-02 at 06:10, Eugen Block wrote:
Correct, only a deep-scrub will check that threshold. 'ceph config
set' is persistent, a daemon restart will preserve the new config value.
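A rough sketch of verifying that (the PG id is only an example, borrowed from
later in this thread):
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
ceph pg deep-scrub 5.16
'ceph config set' writes the value into the mon config database, which is why
it survives daemon restarts.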
Quote from
Do you really need device paths in your configuration? You could use
other criteria like disk sizes, vendors, rotational flag etc. If you
really want device paths you'll probably need to ensure they're
persistent across reboots via udev rules.
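A minimal sketch of such a spec (service id, size and host pattern are made-up
placeholders), applied with a dry run first:
cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: osd-hdd
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
    size: '4TB:'
EOF
ceph orch apply -i osd_spec.yml --dry-run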
Quote from Kilian Ries:
Hi,
it seems that a
Correct, only a deep-scrub will check that threshold. 'ceph config
set' is persistent, a daemon restart will preserve the new config value.
Quote from Mark Johnson:
Never mind, I think I worked it out. I consulted the Quincy
documentation which just said to do this:
ceph config set osd osd_
Hi,
from a Ceph perspective it's supported to upgrade from N to P; you can
safely skip O. We have done that on several clusters without any
issues. You just need to make sure that your upgrade to N was
complete. Just a few days ago someone tried to upgrade from O to Q
with "require-osd-rele
Never mind, I think I worked it out. I consulted the Quincy
documentation which just said to do this:
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 200
But when I did that, the health warning didn't clear. I took a guess that
maybe I needed to trigger a deep scrub on t
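A sketch of how that might look (the PG id is a placeholder; the affected PG is
named in the 'Large omap object found' cluster log entry):
ceph health detail
ceph pg deep-scrub <pgid>
The warning only clears once the affected PG has been deep-scrubbed again with
the new threshold in place.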
Hi Yuri,
On Tue, Aug 1, 2023 at 10:34 PM Venky Shankar wrote:
>
> On Tue, Aug 1, 2023 at 5:55 PM Venky Shankar wrote:
> >
> > On Tue, Aug 1, 2023 at 1:21 AM Yuri Weinstein wrote:
> > >
> > > Pls see the updated test results and Release Notes PR
> > > https://github.com/ceph/ceph/pull/52490
> >
Regarding changing this value back to the previous default of 2,000,000, how
would I go about doing that? I tried following that SUSE KB article which says
to do this:
ceph tell 'osd.*' injectargs
--osd_deep_scrub_large_omap_object_key_threshold=200
But while that didn't fail as such, it
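For reference, a sketch of the persistent alternative (injectargs only changes
the running daemons and is lost on restart):
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000
or, to drop the override entirely and fall back to the built-in default:
ceph config rm osd osd_deep_scrub_large_omap_object_key_threshold
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold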
On Mon, Jul 31, 2023 at 1:46 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> orch -
Thanks for that. That's pretty much how I was reading it, but the text you
provided is a lot more explanatory than what I'd managed to find and makes it a
bit clearer. Without going into too much detail, yes, we do have a single user
that is used to create multiple buckets, one for each of a multip
Hi Götz,
We’ve done a similar process, starting at CentOS 7 Nautilus and upgrading to
Rocky 8/Ubuntu 20.04 Octopus+.
What we do is start on CentOS 7 Nautilus and upgrade to Octopus on CentOS 7
(we’ve built python packages and have them on our repo to satisfy some c
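Not a full playbook, but a sketch of the usual guard rails when reinstalling
one host at a time (not necessarily the exact procedure described here):
ceph osd set noout    # keep OSDs from being marked out while the host is down
ceph versions         # confirm every daemon already runs the target release
# ... reinstall the OS on the host and redeploy the daemons ...
ceph osd unset noout
ceph -s               # wait for recovery/HEALTH_OK before the next host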
>
> As I’ve read and thought a lot about the migration, as this is a bigger
> project, I was wondering if anyone has done that already and might share
> some notes or playbooks, because in all readings there were some parts
> missing or hard for me to understand.
>
> I do have some different appro
Hi Goetz,
I've done the same, and went to Octopus and to Ubuntu. It worked like a
charm, and with pip you can get the pecan library working. I think I did it
with this:
yum -y install python36-six.noarch python36-PyYAML.x86_64
pip3 install pecan werkzeug cherrypy
Worked very well, until we got hit
Hi,
As I’ve read and thought a lot about the migration, as this is a bigger project,
I was wondering if anyone has done that already and might share some notes or
playbooks, because in all readings there were some parts missing or hard for me
to understand.
I do have some different approaches i
Hi,
it seems that after reboot / OS update my disk labels / device paths may have
changed. Since then I get an error like this:
CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.osd-12-22_hdd-2
###
RuntimeError: cephadm exited with an error code: 1, stderr:Non-zero exit code 1
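A couple of commands that might help narrow that down (just a sketch):
ceph orch device ls          # the device paths cephadm currently sees per host
ceph orch ls osd --export    # dump the applied OSD spec(s) to compare against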
On Tue, Aug 1, 2023 at 5:55 PM Venky Shankar wrote:
>
> On Tue, Aug 1, 2023 at 1:21 AM Yuri Weinstein wrote:
> >
> > Pls see the updated test results and Release Notes PR
> > https://github.com/ceph/ceph/pull/52490
> >
> > Still seeking approvals:
> > smoke - Laura, Radek, Venky
> > rados - Radek
Rados failures are summarized here:
https://tracker.ceph.com/projects/rados/wiki/REEF#Reef-v1820
All are known. Will let Radek give the final ack.
On Tue, Aug 1, 2023 at 9:05 AM Nizamudeen A wrote:
> dashboard approved! failure is unrelated and tracked via
> https://tracker.ceph.com/issues/58946
Hi, we have a random error with RGW during a backup from Veeam.
Daemons go into an error state.
Where can we find the appropriate logs about it?
I just found out something related to this
-1788> 2023-07-31T19:51:21.169+ 7f04567d3700 2 req 10656715914266436796
0.0s getting op 0
-1787> 2
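One hedged way to get more detail out of the RGW daemons under cephadm (the
daemon name is a placeholder, take it from the first command; I'm not certain
the config section name is identical on every version):
ceph orch ps --daemon-type rgw      # lists daemon names, e.g. rgw.<id>.<host>.<suffix>
ceph config set client.<daemon-name> debug_rgw 20
cephadm logs --name <daemon-name>   # or journalctl -u ceph-<fsid>@<daemon-name>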
dashboard approved! failure is unrelated and tracked via
https://tracker.ceph.com/issues/58946
Regards,
Nizam
On Sun, Jul 30, 2023 at 9:16 PM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
On Tue, Aug 1, 2023 at 1:21 AM Yuri Weinstein wrote:
>
> Pls see the updated test results and Release Notes PR
> https://github.com/ceph/ceph/pull/52490
>
> Still seeking approvals:
> smoke - Laura, Radek, Venky
> rados - Radek, Laura, Nizamudeen
> fs - Venky
> orch - Adam King
> powercycle - Brad
Thanks. Just for reference I'm quoting the SUSE doc [1] you mentioned
because it explains what you already summarized:
User indices are not sharded, in other words we store all the keys
of names of buckets under one object. This can cause large objects
to be found. The large object is only
Here you go. It doesn't format very well, so I'll summarize what I'm
seeing.
5.c has 78051 OMAP_BYTES and 398 OMAP_KEYS
5.16 has 80186950 OMAP_BYTES and 401505 OMAP_KEYS
The remaining 30 PGs have zero of both. However, the BYTES for each PG
are much the same, at around 890 for each.
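For reference, the per-PG numbers above can be pulled with something like this
(pool name is a placeholder; look at the OMAP_BYTES* and OMAP_KEYS* columns,
which recent releases include in the listing):
ceph pg ls-by-pool <pool-name>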
You could add (debug) logs for starters ;-)
There was a thread [1] describing something quite similar, pointing to
this bug report [2]. In recent versions it's supposed to be fixed,
although I don't see the tracker or PR number in the release notes of
either pacific or quincy. Can you verify