I guess this is coming from:
https://github.com/ceph/ceph/pull/30783
introduced in Nautilus 14.2.5
On Wed, Jan 15, 2020 at 8:10 AM Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> As I wrote here:
>
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-January/037909.html
>
>
Seeing a weird mount issue. Some info:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
Ubuntu 18.04.3 with kernel 4.15.0-74-generic
Ceph 14.2.5 & 14.2.6
With ceph-common, ceph-base, etc installed:
ceph/stable,now 14.2.6-1bionic
Folks,
I would like to thank you again for your help regarding the performance speedup of
our Ceph cluster.
The customer just reported that the database is around 40% faster than before,
without any hardware changes.
This really kicks ass now! :)
We measured subop_latency (avgtime) on our OSDs.
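In case anyone wants to look at the same counter on their own cluster, it can be
read through the OSD admin socket, roughly like this (osd.0 is just an example
daemon ID, and the jq filter is optional):

    ceph daemon osd.0 perf dump | jq '.osd.subop_latency'

avgtime is the running average in seconds; sum and avgcount are cumulative, so
diffing two samples gives the average over a chosen interval.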
Hi All,
Running 14.2.5, currently experiencing some network blips isolated to a single
rack, which is under investigation. However, it appears that following a network
blip, random OSDs in unaffected racks sometimes do not recover from the
incident and are left running in a zombie state.
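For anyone seeing the same symptom, something like the following should show which
daemons are affected (the OSD ID below is just an example):

    ceph osd tree down                 # OSDs the cluster currently marks down
    systemctl status ceph-osd@123      # is the process still alive on the host?
    ceph daemon osd.123 status         # does it still answer on its admin socket?

An OSD whose process is still running but which no longer responds on its admin
socket is the "zombie" case; restarting it (systemctl restart ceph-osd@123) is
usually the quickest way to bring it back.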
Thanks for that link.
Do you have a default osd max object size of 128MB? I'm thinking about doubling
that limit to 256MB on our cluster. Our largest object is only about 10% over
that limit.
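For reference, in case it saves someone a lookup: the limit is the
osd_max_object_size option and the value is in bytes, so doubling it to 256MB
would look roughly like this (treat it as a sketch and double-check before
applying it to a production cluster):

    ceph config set osd osd_max_object_size 268435456     # 256MB in bytes
    ceph config get osd.0 osd_max_object_size             # verify on one OSD (osd.0 is an example)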
> On Jan 15, 2020, at 3:51 AM, Massimo Sgaravatto wrote:
>
> I guess this is coming from:
>
> https://github.com/ceph/ceph/pull/30783
I never changed the default value for that attribute.
I don't understand why I have such big objects around.
I am also wondering what a pg repair would do in such a case.
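In case it is useful while deciding: the inconsistent PG and the offending
object(s) can be listed before attempting any repair, along the lines of the
following (2.1f below is just a placeholder PG ID):

    ceph health detail                                      # shows which PGs have scrub errors
    rados list-inconsistent-obj 2.1f --format=json-pretty   # which object(s) tripped the error
    ceph pg repair 2.1f                                      # only once you are sure a repair is the right step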
On Wed, Jan 15, 2020, 16:18 Liam Monahan wrote:
> Thanks for that link.
>
> Do you have a default osd max object size of 128MB? I'm thinking about
> doubling that limit to 256MB on our cluster.
On Wednesday, January 15, 2020 14:37 GMT, "Nick Fisk" wrote:
> Hi All,
>
> Running 14.2.5, currently experiencing some network blips isolated to a
> single rack, which is under investigation. However, it appears that following a
> network blip, random OSDs in unaffected racks sometimes do not recover from the
> incident and are left running in a zombie state.
I just changed my max object size to 256MB and scrubbed and the errors went
away. I’m not sure what can be done to reduce the size of these objects,
though, if it really is a problem. Our cluster has dynamic bucket index
resharding turned on, but that sharding process shouldn’t help it if non-
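For anyone following along: the scrub error only clears once the affected PG is
scrubbed again, so after raising the limit it may be worth forcing a scrub rather
than waiting for the next scheduled one (2.1f is a placeholder PG ID):

    ceph pg deep-scrub 2.1f      # re-scrub the PG that reported the error
    ceph health detail           # confirm the scrub error is gone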
Hey all,
One of my mons has been having a rough time for the last day or so. It started
with a crash and restart that I didn't notice about a day ago, and now it won't
start. Where it crashes has changed over time, but it is now stuck on the last
error below. I've tried to get some more information ou
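In case it helps with digging further, one way to get more detail out of a mon
that won't start is to run it in the foreground with the debug levels turned up
(the mon ID and levels below are just examples):

    ceph-mon -i mon01 -d --debug_mon 20 --debug_ms 1

That logs to stderr, so the last lines before the crash usually show which step it
is dying on.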