Hi Boris
Thank you, good to know I'm not alone in this.
It seems the MONs are fine after the migration:
[mm@cephadm-X ~]$ sudo ceph mon stat
e29: 5 mons at {xx1.com=[v2:10.3.144.10:3300/0,v1:10.3.144.10:6789/0],xx2=[v2:10.3.144.11:3300/0,v1:10.3.144.11:6789/0],xx3=[v2:10.3.144.12:3300/0,v1:
Hi,
some years ago we changed our setup from an IPoIB cluster network to a
single-network setup, which is a similar operation.
The OSDs use the cluster network for heartbeats and backfill
operations; both use standard TCP connections. There is no "global view"
of the networks involved; OSDs
Hi
I'm using the Pacific version with cephadm. After a failed upgrade from
16.2.7 to 17.2.2, two of the three MGR daemons stopped working (this is a
known bug of that upgrade) and the orchestrator also didn't respond, so
rolling back the services wasn't possible; I had to remove the daemons and
add the correct ones manually by running this
Hi,
I really like Ceph's ability to auto-manage block devices, but I get ceph
status warnings when I map an image to a /dev/rbd device.
Some log output:
Aug 29 11:57:34 hvs002 bash[465970]: Non-zero exit code 2 from /usr/bin/docker
run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint
Hi,
I couldn't reproduce the issue using those specific Ceph and Dokany builds.
Could you please check the ceph-dokan logs?
Thanks,
Lucian
On 11.08.2022 12:08, Spyros Trigazis wrote:
Hello ceph users,
I am trying to use ceph-dokan with a testing ceph cluster (versions below).
I can mount t
Hi Etienne,
Maybe I didn't make myself clear...
When I map an rbd image from my cluster to a /dev/rbd device, Ceph wants to
automatically add the /dev/rbd as an OSD. This is undesirable behavior. Trying
to add a /dev/rbd mapped to an image in the same cluster??? Scary...
Luckily the automatic creatio
I would think so, but it isn't happening nearly fast enough.
It's literally been over 10 days with 40 new drives across 2 new servers and
they barely have any PGs yet. A few, but not nearly enough to help with the
imbalance.
From: Jarett
Sent: Sunday, August 2
I found a misconfiguration in my ceph config dump:
mgr  advanced  mgr/cephadm/migration_current  5
and changing it to 3 solved the issue and the orchestrator is back to
working properly.
That's something to do with the previous failed upgrade to Quin
Thank You!
I will see about trying these out, probably using your suggestion of several
iterations with #1 and then #3.
From: Stefan Kooman
Sent: Monday, August 29, 2022 1:38 AM
To: Wyll Ingersoll; ceph-users@ceph.io
Subject: Re: [ceph-users] OSDs growing be
Interesting, but weird...
I use Quincy
root@hvs001:/# ceph versions
{
    "mon": {
        "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 2
    },
Am 29.08.22 um 14:14 schrieb Dominique Ramaekers:
Nevertheless, I would feel better if ceph just doesn't try to add the /dev/rbd
to the cluster.
It looks like your drivegroup specification is too generic.
Can you post the YAML for that here?
You should be as specific as possible with the sp
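A more restrictive spec could look something like the sketch below (hypothetical: the service id, host pattern, and size filter are all assumptions to be adapted to the actual cluster; the point is that explicit `data_devices` filters keep cephadm from consuming unexpected block devices such as a mapped /dev/rbd):

```yaml
# Hypothetical OSD service spec: restrict which devices cephadm may consume.
service_type: osd
service_id: osd_spinning      # assumed name
placement:
  host_pattern: 'hvs*'        # assumed host pattern
spec:
  data_devices:
    rotational: 1             # spinning disks only
    size: '4TB:'              # only devices of 4 TB or larger
```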
Hi Frank,
CRUSH can only find 5 OSDs, given your current tree, rule, and
reweights. This is why there is a NONE in the UP set for shard 6.
But in ACTING we see that it is refusing to remove shard 6 from osd.1
-- that is the only copy of that shard, so in this case it's helping
you rather than dele
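The UP-versus-ACTING behavior described above can be sketched as a toy model (this is not Ceph's actual code; the names and mapping are illustrative only): UP is what CRUSH currently maps, while ACTING holds onto the previous shard owner rather than dropping the last surviving copy.

```python
# Toy model of UP vs ACTING for an erasure-coded PG.
NONE = None  # stands in for the NONE shown in the UP set

def acting_for(up, previous_acting):
    """For each shard, fall back to the previous holder when CRUSH finds
    no OSD (NONE), so the only remaining copy of a shard is kept."""
    return [
        prev if cur is NONE and prev is not NONE else cur
        for cur, prev in zip(up, previous_acting)
    ]

# Shard 6 has no UP mapping, but osd.1 still holds its only copy:
up = [4, 7, 2, 9, 5, 3, NONE]
acting = acting_for(up, [4, 7, 2, 9, 5, 3, 1])
print(acting)  # shard 6 stays on osd.1: [4, 7, 2, 9, 5, 3, 1]
```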
Hi there,
I have some buckets that would require >100 shards, and I would like to ask
if there are any downsides to having this many shards on a bucket.
Cheers
Boris
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-u
Generally it’s a good thing. There’s less contention for bucket index updates
when, for example, lots of writes are happening together. Dynamic resharding
will take things up to 1999 shards on its own with the default config.
Given that we use hashing of object names to determine which shard they
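The shard-selection idea can be sketched roughly like this (a simplification: RGW uses its own internal hash function, not the one shown here, and the shard count is just an example):

```python
import hashlib

def shard_for(object_name: str, num_shards: int) -> int:
    """Map an object name to a bucket-index shard by hashing.
    Illustrative only: RGW's real hash differs."""
    h = int.from_bytes(hashlib.md5(object_name.encode()).digest()[:4], "little")
    return h % num_shards

# Objects written together spread across shards, reducing index contention:
names = [f"object-{i}" for i in range(10)]
print({n: shard_for(n, 101) for n in names})
```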
Do I recall that the number of shards is ideally odd, or even prime?
Performance might be increased by indexless buckets if the application can
handle
> On Aug 29, 2022, at 10:06 AM, J. Eric Ivancich wrote:
>
> Generally it’s a good thing. There’s less contention for bucket index
> updates
We choose prime number shard counts, yes.
Indexless buckets do increase insert/delete performance, but by definition
an indexless bucket cannot be listed.
Matt
On Mon, Aug 29, 2022 at 1:46 PM Anthony D'Atri
wrote:
> Do I recall that the number of shards is ideally odd, or even prime?
>
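A small sketch of choosing a prime shard count (illustrative only; this is not how radosgw-admin picks the number): primes reduce modulo bias when shards are chosen as `hash % count`, which is why prime counts are preferred.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check; fine for shard-sized numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime_at_least(n: int) -> int:
    """Smallest prime >= n, e.g. when sizing a bucket index."""
    while not is_prime(n):
        n += 1
    return n

print(next_prime_at_least(100))  # 101
```

Note that 1999, the default dynamic-resharding ceiling mentioned earlier in the thread, is itself prime.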
Can anyone explain why OSDs (Ceph Pacific, BlueStore OSDs) continue to grow
well after they have exceeded the "full" level (95%), and is there any way to
stop this?
"The full_ratio is 0.95 but we have several osds that continue to grow and are
approaching 100% utilization. They are reweighted
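The thresholds involved can be sketched as follows (the ratio values are the stock Ceph defaults, `mon_osd_nearfull_ratio` / `osd_backfillfull_ratio` / `mon_osd_full_ratio`; a given cluster may have changed them):

```python
# Stock default thresholds; your cluster may differ.
NEARFULL, BACKFILLFULL, FULL = 0.85, 0.90, 0.95

def osd_state(util: float) -> str:
    """Classify an OSD's utilization against the default ratios."""
    if util >= FULL:
        return "full"          # client writes to the OSD are blocked
    if util >= BACKFILLFULL:
        return "backfillfull"  # backfill into the OSD is refused
    if util >= NEARFULL:
        return "nearfull"      # health warning only
    return "ok"

print(osd_state(0.96))  # full
print(osd_state(0.87))  # nearfull
```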
Hi Wyll,
Any chance you're using CephFS and have some really large files in the
CephFS filesystem? Erasure coding? I recently encountered a similar
problem, and as soon as the end user deleted the really large files our
problem became much more manageable.
I had issues reweighting OSDs too an
Hi,
There's nothing special in the cluster when it stops replaying. It
seems there is a journal entry that the local replayer doesn't handle, and it
just stops. Since it's the local replayer that stops, there are no logs
in rbd-mirror. The odd part is that rbd-mirror handles this totally
fine and is the one