I upgraded one cluster to 14.2.10 and this perf counter is still growing.
Does anyone have an idea of how to debug this problem?
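One way to narrow this down, assuming the counter is exposed over the
daemon's admin socket and using osd.0 purely as a placeholder name, is to
dump it directly and check its description:

  # dump all perf counters of the daemon in question
  ceph daemon osd.0 perf dump
  # show the counter schema/descriptions, to see what it actually measures
  ceph daemon osd.0 perf schema
  # watch the suspicious counter over time (grep pattern is a placeholder)
  watch -n 10 "ceph daemon osd.0 perf dump | grep -A3 '<counter_name>'"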
Jacek
On Sat, 4 Jul 2020 at 18:49, Simon Leinen wrote:
> Jacek Suchenia writes:
> > On two of our clusters (all v14.2.8) we observe a very strange
> > behaviour of one perf counter …
>     "user_email_pool": ".users.email",
>     "user_swift_pool": ".users.swift",
>     "user_uid_pool": ".users.uid",
>     "otp_pool": "default.rgw.otp",
>     "system_key": {
>         "access_key": "",
>         "secret_key": ""
>     },
>     "placement_pools": [
>         {
>             "key": "default-placement",
>             "val": {
>                 "index_pool": "rgw.buckets.index",
>                 "storage_classes": {
>                     "STANDARD": {
>                         "data_pool": "rgw.buckets"
>                     }
>                 },
>                 "data_extra_pool": "rgw.buckets.non-ec",
>                 "index_type": 0
>             }
>         },
>         {
>             "key": "pre-jewel",
>             "val": {
>                 "index_pool": "rgw.buckets",
>                 "storage_classes": {
>                     "STANDARD": {
>                         "data_pool": "rgw.buckets"
>                     }
>                 },
>                 "data_extra_pool": "",
>                 "index_type": 0
>             }
>         }
>     ],
>     "metadata_heap": ".rgw.meta",
>     "realm_id": "c"
> }
>
> Nevertheless, only the Luminous gateways can list my old buckets. As far
> as I can see, I can only change the placement_rule for new buckets. Is
> there any chance to make radosgw find the old indices and complete the
> upgrade to Nautilus? (A way to check where the old indices live is
> sketched below the quoted message.)
>
> Many thanks,
>
> Ingo
>
> --
> Ingo Reimann
> [ https://www.dunkel.de/ ]
> Dunkel GmbH
> Philipp-Reis-Straße 2
> 65795 Hattersheim
> Fon: +49 6190 889-100
> Fax: +49 6190 889-399
> eMail: supp...@dunkel.de
> http://www.Dunkel.de/
> Amtsgericht Frankfurt/Main
> HRB: 37971
> Geschäftsführer: Axel Dunkel
> Ust-ID: DE 811622001
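For the question above about locating the old indices: a minimal way to
check where radosgw thinks an old bucket's index lives (bucket name,
bucket id and pool name below are placeholders) could be:

  # bucket id and placement_rule as radosgw currently sees them
  radosgw-admin bucket stats --bucket=oldbucket
  # full bucket instance metadata, including any explicit placement pools
  radosgw-admin metadata get bucket.instance:oldbucket:<bucket_id>
  # index objects are named .dir.<bucket_id>; look for them in the old pool
  rados -p rgw.buckets ls | grep '.dir.<bucket_id>'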
--
Jacek Suchenia
jacek.suche...@gmail.com
> … select my new placement_target "pre-jewel" as the placement_rule for
> those buckets with the modified radosgw-admin. Just now I am struggling
> with the compile process.
>
> Kind regards,
> Ingo
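For reference, a placement target like the "pre-jewel" one in the quoted
zone config is normally registered with the stock tool; a sketch, with the
zonegroup/zone name "default" and the pool names taken from that config as
assumptions, would be:

  # advertise the placement target in the zonegroup
  radosgw-admin zonegroup placement add --rgw-zonegroup=default \
      --placement-id=pre-jewel
  # map it to the pre-Jewel pools in the zone
  radosgw-admin zone placement add --rgw-zone=default \
      --placement-id=pre-jewel \
      --index-pool=rgw.buckets --data-pool=rgw.buckets
  # commit, if a realm/period is in use
  radosgw-admin period update --commit

Whether a stock radosgw-admin can then attach existing buckets to that
target is exactly the open question here, hence the modified build.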
> … so I thought about modifying the reshard command and storing the
> "resharded" index in the new place. Just now I am not sure whether I need
> that, and whether I am able to make the right modifications to the code.
>
> Kind regards,
> Ingo
>
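For comparison, the stock reshard path looks like this (bucket name is a
placeholder); as far as I know the new index objects land in the index
pool of the bucket's placement target, which is presumably why a modified
version that writes them elsewhere is being considered:

  # reshard the bucket's index into 32 shards
  radosgw-admin bucket reshard --bucket=oldbucket --num-shards=32
  # check the reshard status afterwards
  radosgw-admin reshard status --bucket=oldbucket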
… stuck in *undersized* state.
What mechanism prevents the CRUSH algorithm from assigning the same set of
OSDs to all PGs in a pool? How can I control it?
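A minimal way to see what CRUSH is doing here (pool and rule names are
placeholders) would be:

  # list PGs stuck undersized, with their up/acting OSD sets
  ceph pg dump_stuck undersized
  # which CRUSH rule does the affected pool use?
  ceph osd pool get <pool> crush_rule
  # and what does that rule actually say (failure domain, choose steps)?
  ceph osd crush rule dump <rule_name>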
Jacek
--
Jacek Suchenia
jacek.suche...@gmail.com
… osd.0
Now the status is:
25 pgs degraded, 25 pgs undersized
All of them are from the same pool, and this pool uses 32 PGs - so 7 of
them are correctly mapped to [0, 10, 11] while the rest sit only on [10, 11].
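To see the concrete mappings (pool name is a placeholder), something like
the following should show which PGs only have [10, 11] in their up set and
where osd.0 sits in the CRUSH tree:

  # per-PG listing for the pool, including UP and ACTING sets
  ceph pg ls-by-pool <pool> undersized
  # mapping of a single PG
  ceph pg map <pgid>
  # tree view, to check which failure-domain bucket each OSD lives in
  ceph osd tree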
Jacek
On Wed, 19 Feb 2020 at 07:27, Wido den Hollander wrote:
>
>
> On 2/18/20
Janne,
Thanks for the good spot; however, all of them are 3.53830. That change was
left over from some tests to kick the CRUSH algorithm.
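To double-check the weights and their effect (osd.0 and the weight value
are taken from this thread), one could run:

  # CRUSH weights together with utilization, per OSD
  ceph osd df tree
  # if a weight still needs to be put back, e.g. for osd.0:
  ceph osd crush reweight osd.0 3.53830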
Jacek
On Wed, 19 Feb 2020 at 09:47, Janne Johansson wrote:
> On Wed, 19 Feb 2020 at 09:42, Jacek Suchenia <
> jacek.suche...@gmail.com> wrote: