Hello!
On a two-node Ceph cluster, used mainly as an S3 provider, I have many RadosGW
crashes, which probably explains why uploads of large (multipart) files
fail. Indeed, I have no issues with small files (<10 GB).
If someone here can help me dig into this issue, that would be great!
Thanks a lot for your help
It sounds like this is a non-primary mirrored image, which means it's
read-only and cannot be modified. A quick "rbd info" will tell you the
mirror state. Instead, you would need to force-promote it to primary
via "rbd mirror image promote --force" before attempting to modify the
image.
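For reference, a minimal sketch of the two commands involved (pool and
image names here are placeholders, not taken from this thread):

  rbd info mypool/myimage                          # the mirroring fields show whether the image is primary
  rbd mirror image promote --force mypool/myimage  # only if the peer site is really gone

Note that force-promoting while the peer is still writing can leave the
two copies diverged, which later needs "rbd mirror image resync".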
On Wed, Ma
That means you have another librbd client that "owns" the image that
you will need to shut down before you can make that change.
On Wed, Mar 24, 2021 at 9:01 AM Edgelong Voodu <107044349...@gmail.com> wrote:
>
> hi,
> Thank you for the clarification.
> It’s already promoted and not mirrored, a pri
"rbd status" will tell you which IP addresses have the image open.
"rbd lock ls" will show you who owns the lock.
On Wed, Mar 24, 2021 at 9:07 AM Edgelong Voodu <107044349...@gmail.com> wrote:
>
> Hi,
> As the info in the log file mentions,
> there is still a lock owner alive. The question is how to fin
Hi people,
I am currently trying to add ~30 OSDs to our cluster and wanted to use the
ceph-gentle-reweight script for that.
I use ceph-volume lvm prepare --data /dev/sdX to create the OSD and want to
start it without weighting it in.
systemctl start ceph-osd@OSD starts the OSD with full weight.
Is this po
You can use:
osd_crush_initial_weight = 0.0
-- Dan
On Wed, Mar 24, 2021 at 2:23 PM Boris Behrens wrote:
>
> Hi people,
>
> I am currently trying to add ~30 OSDs to our cluster and wanted to use the
> ceph-gentle-reweight script for that.
> I use ceph-volume lvm prepare --data /dev/sdX to create the OSD
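For illustration, the option can be set in ceph.conf on the OSD hosts or
(on Mimic and later) in the cluster config database; the value below is
the one Dan suggests:

  # ceph.conf on the OSD hosts
  [osd]
  osd_crush_initial_weight = 0.0

  # or cluster-wide via the config database
  ceph config set osd osd_crush_initial_weight 0

Either way, newly created OSDs come up with CRUSH weight 0 and receive
no PGs until you reweight them.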
On Wed 24 Mar 2021 at 14:27, Dan van der Ster wrote:
> You can use:
> osd_crush_initial_weight = 0.0
We have it at 0.001 or something low which is non-zero so it doesn't
start as "out" or anything, but still will not receive any PGs.
--
May the most significant bit of your life be positive.
Oh cool. Thanks :)
How do I find the correct weight after it is added?
For the current process I just check the other OSDs but this might be a
question that someone will raise.
I could imagine that I need to adjust ceph-gentle-reweight's target
weight to the correct one.
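One way to read the target weight off existing OSDs of the same size
(purely illustrative; by convention the CRUSH weight equals the device
capacity in TiB):

  ceph osd df tree   # the WEIGHT column is the CRUSH weight of each OSD

The figure you find there would then also be the -t target weight passed
to ceph-gentle-reweight.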
On Wed, 24 March 202
Hello,
Our podman-backed cluster is currently on version 15.2.8; the ceph and
cephadm packages on the hosts are 15.2.10.
When we check the package version with 'ceph orch upgrade check
--ceph-version 15.2.10' it tells us that it failed to pull the image on
a host.
Inspecting the log we see tha
On Wed 24 Mar 2021 at 14:55, Boris Behrens wrote:
>
> Oh cool. Thanks :)
>
> How do I find the correct weight after it is added?
> For the current process I just check the other OSDs but this might be a
> question that someone will raise.
>
> I could imagine that I need to adjust the ceph-gentle
Hi all.
I have some questions about how the MDS cluster works when a crash or an
operation failure occurs.
1. I read in the Ceph documentation and code that each MDS has its own
journal and that some directory operations like rename use a distributed
transaction mechanism with Ceph-defined events (e.g. EPeerUpdate). Wha
I might be stupid, but am I doing something wrong with the script?
[root@mon1 ceph-scripts]# ./tools/ceph-gentle-reweight -o
43,44,45,46,47,48,49,50,51,52,53,54,55 -s 00:00 -e 23:59 -b 82 -p rbd -t
1.74660
Draining OSDs: ['43', '44', '45', '46', '47', '48', '49', '50', '51',
'52', '53', '54', '55']
M
Not sure why, without looking at your crush map in detail.
But to be honest, I don't think you need such a tool anymore. It was
written back in the filestore days when backfilling could be much more
disruptive than today.
You have only ~10 osds to fill up: just mark them fully in, or increment by
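A rough sketch of doing it by hand (the OSD id and the 1.74660 target
weight are only examples, taken from the ceph-gentle-reweight command
line earlier in this thread):

  ceph osd crush reweight osd.43 1.74660   # jump straight to the final CRUSH weight

or run the same command repeatedly with smaller steps, waiting for
backfill to settle in between, to get the gradual variant.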
We recently added 3 new nodes with 12x12TB OSDs. It took 3 days or so
to reshuffle the data and another 3 days to split the PGs. I did
increase the number of max backfills to speed up the process. We didn't
notice the reshuffling in normal operation.
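For reference, a sketch of raising that knob at runtime (the value 4 is
only an example):

  ceph config set osd osd_max_backfills 4
  ceph tell osd.* injectargs '--osd-max-backfills 4'   # older style, applied directly to running daemons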
On Wed, 2021-03-24 at 19:32 +0100, Dan van der
There are just a couple remaining issues before the final release.
Please test it out and report any bugs.
The full release notes are in progress here [0].
Notable Changes
---------------
* New ``bluestore_rocksdb_options_annex`` config
parameter. Complements ``bluestore_rocksdb_options`` and
Hi Julian,
You are most likely running into this same issue:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QNR2XRZPEYKANMUJLI4KYQGWAQEDJNSX/
It is podman 2.2 related.
I ran into this using CentOS 8.3 and decided to move to CentOS Stream to be
able to upgrade the cluster.
David
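A quick way to confirm whether an affected host is on the problematic
podman release (standard podman and orchestrator commands, nothing
specific to this cluster):

  podman --version
  ceph orch ps   # the VERSION column shows which daemons already run the new image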