Hi all,
We run a simple single zone nautilus radosGW instance with a few gateway
machines for some of our users. I've got some more gateway machines earmarked
for adding some OpenStack Keystone-integrated RadosGW gateways to the cluster.
I'm not sure how best to add them alongside the existing gateways.
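For what it's worth, a minimal sketch of the Keystone side of it, using the
standard rgw_keystone_* options (the gateway name, URL and credentials here are
placeholders, and this is untested):

# point the new RGW instances at Keystone (v3 API assumed)
ceph config set client.rgw.keystone-gw1 rgw_keystone_url https://keystone.example:5000
ceph config set client.rgw.keystone-gw1 rgw_keystone_api_version 3
ceph config set client.rgw.keystone-gw1 rgw_keystone_admin_user rgw
ceph config set client.rgw.keystone-gw1 rgw_keystone_admin_password secret
ceph config set client.rgw.keystone-gw1 rgw_keystone_admin_project service
ceph config set client.rgw.keystone-gw1 rgw_keystone_admin_domain Default
ceph config set client.rgw.keystone-gw1 rgw_keystone_accepted_roles 'member,admin'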
As you have noted, 'ceph osd reweight 0' is the same as a 'ceph osd out', but
it is not the same as removing the OSD from the crush map (or setting its crush
weight to 0). This explains your observation of the double rebalance when you
mark an OSD out (or reweight an OSD to 0) and then remove it later.
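To spell that out with commands (osd.12 is a placeholder ID, and this is just a
sketch of the two approaches):

# out/reweight-to-0: data moves now, and moves *again* when the OSD
# is eventually removed from the crush map
ceph osd out osd.12

# crush reweight to 0: data moves once, and the later removal of the
# (now empty) OSD causes no further rebalancing
ceph osd crush reweight osd.12 0
ceph osd purge osd.12 --yes-i-really-mean-it   # once it has drained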
> (So I imagine that running all 14.2.9 mons would get past it, but if you're
> being cautious this should be reproducible on your test cluster.)
>
> -- Dan
>
>
> On Wed, May 13, 2020 at 12:07 PM Thomas Byrne - UKRI STFC wrote:
> >
> > Hi all,
> >
Aleksey Gutikov wrote a detailed response to a similar question last year,
maybe this will help?
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/036318.html
I haven't done much looking into this, but for your question, I believe the two
options that control the size of objects on disk
Hi all,
We're upgrading a cluster from luminous to nautilus. The monitors and managers
are running a non-release version of luminous (12.2.12-642-g5ff3e8e) and we're
upgrading them to 14.2.9.
We've upgraded one monitor and it's happily in quorum as a peon. However, when
a 'ceph status' hits the upgraded monitor
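In case it's relevant, version and quorum state during the partial upgrade can
be checked with the usual commands:

ceph versions   # summary of running daemon versions
ceph mon stat   # quorum membership and mon ranks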
Hi,
Our large Luminous cluster still has around 2k FileStore OSDs (35% of our
OSDs). We haven't had any particular need to move these over to BlueStore yet,
as the performance is fine for our use case. Obviously, it would be easiest if
we could let the FileStore OSDs stay in the cluster until the hardware is
retired.
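As an aside, a quick way to see the FileStore/BlueStore split from the OSD
metadata (assuming count-metadata is available on your version):

ceph osd count-metadata osd_objectstore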
Were the PGs actually created? You can check with:
ceph pg ls-by-pool <pool> | tr -s ' ' | cut -d " " --fields=1,17
Regards,
Eugen
Quoting Thomas Byrne - UKRI STFC:
>> It seems likely that jerasure ec pools (at least) are not compatible
>> with libradosstriper if k is not a power of 2.
> It seems likely that jerasure ec pools (at least) are not compatible with
> libradosstriper if k is not a power of 2.
Yes, that seems like the summary of the issue as it stands. I'm intrigued to
know what libradosstriper is doing, compared to librados, that makes it only
work on specific layouts.
I think I can replicate your issue on a luminous cluster. It works fine with an
8+3 pool, but 10+2 fails after creating the 3 chunks with the same error.
What does your erasure code profile look like? I don't think I actually tested
anything other than powers of two (8, 16) for k before settling on our current
profile.
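For anyone wanting to reproduce, a sketch of the two test pools (profile/pool
names and pg counts are placeholders):

# k a power of two - striper writes work
ceph osd erasure-code-profile set ec83 k=8 m=3 crush-failure-domain=host
ceph osd pool create ec83pool 64 64 erasure ec83

# k not a power of two - striper writes fail after the first chunks
ceph osd erasure-code-profile set ec102 k=10 m=2 crush-failure-domain=host
ceph osd pool create ec102pool 64 64 erasure ec102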
From memory, if you specify a crush rule (even if it is the same as the pool
name), it will look for a rule and error if it is not found, rather than
creating it.
The behaviour may have changed, but try explicitly not supplying a crush rule
name (if you haven't already).
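i.e. something like this, letting Ceph generate the rule from the profile
(pool and profile names are placeholders):

ceph osd pool create mypool 64 64 erasure myprofile
# rather than also naming a rule:
# ceph osd pool create mypool 64 64 erasure myprofile myrule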
Cheers,
Tom
> And bluestore should refuse to start if the configured limit is > 4GB. Or
> something along those lines...
Just on this point - BlueStore OSDs will fail to start with an
osd_max_object_size >= 4GB, with a helpful error message about the BlueStore
hard limit. I was mildly amused when I discovered this.
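If anyone wants to check their own setting, the running value can be read off
an OSD's admin socket (osd.0 as a placeholder):

ceph daemon osd.0 config get osd_max_object_size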