On 23/08/2019 22:14, Paul Emmerich wrote:
> On Fri, Aug 23, 2019 at 3:54 PM Florian Haas wrote:
>>
>> On 23/08/2019 13:34, Paul Emmerich wrote:
>>> Is this reproducible with crushtool?
>>
>> Not for me.
>>
>>> ceph osd getcrushmap -o crushmap
>>> crushtool -i crushmap --update-item XX 1.0 osd.XX -
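For reference, a minimal offline round trip with crushtool looks roughly like the following; the OSD id, weight, and rule number below are placeholders rather than values from this thread:

    ceph osd getcrushmap -o crushmap
    # change the weight of one item in the map copy (id/weight/name are examples)
    crushtool -i crushmap --update-item 12 1.0 osd.12 -o crushmap.new
    # check the resulting mappings without touching the cluster
    crushtool -i crushmap.new --test --show-mappings --rule 0 --num-rep 3
    # optionally decompile to inspect the edited map by hand
    crushtool -d crushmap.new -o crushmap.txt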
Hi everybody,
I'm new to Ceph and I have a question about active+remapped+backfilling
and misplaced objects.
Recently I copied more than 10 million objects to a new cluster with 3 nodes
and 6 OSDs. During this migration one of my OSDs got full and the health check
became ERR. I don't know why, but c
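For anyone hitting the same situation, the state is usually easiest to read from a few standard commands (generic suggestions, not taken from the original message):

    ceph health detail     # names the full/backfillfull/nearfull OSDs behind the ERR
    ceph osd df tree       # per-OSD utilisation and weights
    ceph -s                # recovery/backfill progress and the misplaced-object count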
Frank,
I wrote the WPQ and the cut off code because the only scheduler at the time
was not servicing other priorities under extreme load. The default op
scheduler prioritized replication ops in the strict queue, which meant that as
long as there were any ops from other OSDs for replication, no client or
other ops were serviced.
WPQ has been the default queue for quite some time now (Luminous?).
However, the default cut off is low. I remember changing this to high in some
early Jewel (or Kraken?) version and it helped a lot with the
only cluster we had back then.
We've been running all of our clusters with cut off high since then.
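To check what a running OSD is actually using (assuming access to its admin socket), something like:

    ceph daemon osd.0 config get osd_op_queue
    ceph daemon osd.0 config get osd_op_queue_cut_off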
It seems that with Linux kernel 4.16.10, krbd clients are seen as Jewel
rather than Luminous. Can someone tell me which kernel version will be seen
as Luminous, as I want to enable the upmap balancer.
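One way to see what release the connected kernel clients are treated as is:

    ceph features          # lists client groups and the release/feature bits they present
    uname -r               # on each client, to record the kernel version in use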
4.13 or newer is enough for upmap
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Mon, Aug 26, 2019 at 8:01 PM Frank R wrote:
>
> It seems that with Linux kernel 4.16.10
What will actually happen if an old client comes by: potential data damage,
or just broken connections from the client?
jesper
Sent from myMail for iOS
Monday, 26 August 2019, 20.16 +0200 from Paul Emmerich:
>4.13 or newer is enough for upmap
>
>--
>Paul Emmerich
>
>Looking for help with your Ceph cluster? Contact us at https://croit.io
On Mon, Aug 26, 2019 at 8:25 PM wrote:
>
> What will actually happen if an old client comes by, potential data damage -
> or just broken connections from the client?
The latter (with "libceph: ... feature set mismatch ..." errors).
Thanks,
Ilya
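On the client side the refusal shows up in the kernel log; a quick, generic way to spot it is:

    dmesg -T | grep -i 'feature set mismatch'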
will 4.13 also work for cephfs?
On Mon, Aug 26, 2019 at 2:31 PM Ilya Dryomov wrote:
> On Mon, Aug 26, 2019 at 8:25 PM wrote:
> >
> > What will actually happen if an old client comes by, potential data
> damage - or just broken connections from the client?
>
> The latter (with "libceph: ... feature set mismatch ..." errors).
On Mon, Aug 26, 2019 at 9:37 PM Frank R wrote:
>
> will 4.13 also work for cephfs?
Upmap works the same for krbd and kcephfs. All upstream kernels
starting with 4.13 (and also RHEL/CentOS kernels starting with 7.5)
support it. If you have a choice which kernel to run, the newer the
better.
Thanks,
Ilya
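For completeness, once "ceph features" shows no pre-Luminous clients, the usual sequence to enable the upmap balancer is roughly:

    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status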
High should be the default with WPQ.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Aug 26, 2019 at 10:44 AM Paul Emmerich wrote:
> WPQ has been the default queue for quite some time now (Luminous?).
>
> However, the default cut off is low.
If it is the default, then the documentation should be updated. [0]
[0]
https://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/?highlight=wpq#operations
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Aug 26, 2019 at 1:22 P
Hi Robert and Paul,
I checked today and the default scheduler is WPQ with cut off low (I was using
the defaults). I changed cut off to high in the config database, but still
need to restart all OSDs to apply the change.
I'm not sure how much it will help though. Maybe heartbeats will get through.
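For the archives, the change described above (shown here via the monitor config database and a package-based systemd deployment; adjust for your setup) is roughly:

    ceph config set osd osd_op_queue_cut_off high
    ceph config dump | grep osd_op_queue_cut_off     # confirm it is stored
    # restart OSDs host by host, then verify on a daemon:
    systemctl restart ceph-osd.target
    ceph daemon osd.0 config get osd_op_queue_cut_off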