[ceph-users] Re: radosgw-admin hangs

2022-08-29 Thread Magdy Tawfik
Hi Boris, Thank you! Then I'm not alone in the sea, it seems. The mon is fine after migration: [mm@cephadm-X~]$ sudo ceph mon stat e29: 5 mons at {xx1.com=[v2:10.3.144.10:3300/0,v1:10.3.144.10:6789/0],xx2=[v2:10.3.144.11:3300/0,v1:10.3.144.11:6789/0],xx3=[v2:10.3.144.12:3300/0,v1:

[ceph-users] Re: Changing the cluster network range

2022-08-29 Thread Burkhard Linke
Hi, some years ago we changed our setup from an IPoIB cluster network to a single-network setup, which is a similar operation. The OSDs use the cluster network for heartbeats and backfilling operations; both use standard TCP connections. There is no "global view" on the networks involved; OSDs
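For context, a change like that usually comes down to updating the cluster_network option and restarting the OSDs. A minimal sketch, assuming a mon-managed (centralized) configuration and the illustrative subnet 10.3.144.0/24:

  # inspect the current network settings
  ceph config get osd public_network
  ceph config get osd cluster_network

  # collapse to a single network: point cluster_network at the public subnet,
  # or simply remove the option so OSDs fall back to the public network
  ceph config set global cluster_network 10.3.144.0/24
  # ceph config rm global cluster_network

  # OSDs only pick this up after a restart, e.g. per daemon under cephadm
  ceph orch daemon restart osd.0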

[ceph-users] Cephadm unable to upgrade/add RGW node

2022-08-29 Thread Reza Bakhshayeshi
Hi, I'm using the Pacific version with cephadm. After a failed upgrade from 16.2.7 to 17.2.2, 2 of 3 MGR nodes stopped working (this is a known upgrade bug) and the orchestrator also didn't respond to roll back the services, so I had to remove the daemons and add the correct ones manually by running this
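A hedged sketch of that kind of manual daemon replacement (the daemon name and hosts below are placeholders, not taken from the original message):

  # remove the broken standby mgr daemon
  ceph orch daemon rm mgr.host2.xyzabc --force

  # let the orchestrator redeploy mgrs on the intended hosts
  ceph orch apply mgr --placement="host1 host2 host3"

  # verify the mgr daemons are back
  ceph orch ps --daemon-type mgr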

[ceph-users] Automanage block devices

2022-08-29 Thread Dominique Ramaekers
Hi, I really like Ceph's behavior of auto-managing block devices, but I get ceph status warnings when I map an image to a /dev/rbd device. Some log output: Aug 29 11:57:34 hvs002 bash[465970]: Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint

[ceph-users] Re: ceph-dokan: Can not copy files from cephfs to windows

2022-08-29 Thread Lucian Petrut
Hi, I couldn't reproduce the issue using those specific Ceph and Dokany builds. Could you please check the ceph-dokan logs? Thanks, Lucian On 11.08.2022 12:08, Spyros Trigazis wrote: Hello ceph users, I am trying to use ceph-dokan with a testing ceph cluster (versions below). I can mount t

[ceph-users] Re: Automanage block devices

2022-08-29 Thread Dominique Ramaekers
Hi Etienne, Maybe I didn't make myself clear... When I map an RBD image from my cluster to a /dev/rbd device, Ceph wants to automatically add that /dev/rbd as an OSD. This is undesirable behavior. Trying to add a /dev/rbd mapped to an image in the same cluster??? Scary... Luckily the automatic creatio

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Wyll Ingersoll
I would think so, but it isn't happening nearly fast enough. It's literally been over 10 days with 40 new drives across 2 new servers and they barely have any PGs yet. A few, but not nearly enough to help with the imbalance. From: Jarett Sent: Sunday, August 2

[ceph-users] Re: Cephadm unable to upgrade/add RGW node

2022-08-29 Thread Reza Bakhshayeshi
I found a misconfiguration in my ceph config dump: mgr advanced mgr/cephadm/migration_current 5. Changing it to 3 solved the issue and the orchestrator is back to working properly. That's something to do with the previous failed upgrade to Quin
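For reference, a change like that can be made with the config CLI; a minimal sketch (the value 3 is from the message above, the mgr failover step is an assumption):

  # show the current cephadm migration level
  ceph config get mgr mgr/cephadm/migration_current

  # set it back to the pre-upgrade value
  ceph config set mgr mgr/cephadm/migration_current 3

  # fail over the active mgr so the cephadm module reloads cleanly
  ceph mgr fail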

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Wyll Ingersoll
Thank You! I will see about trying these out, probably using your suggestion of several iterations with #1 and then #3. From: Stefan Kooman Sent: Monday, August 29, 2022 1:38 AM To: Wyll Ingersoll ; ceph-users@ceph.io Subject: Re: [ceph-users] OSDs growing be

[ceph-users] Re: Automanage block devices

2022-08-29 Thread Dominique Ramaekers
Interesting, but weird... I use Quincy: root@hvs001:/# ceph versions { "mon": { "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 3 }, "mgr": { "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 2 },

[ceph-users] Re: Automanage block devices

2022-08-29 Thread Robert Sander
On 29.08.22 at 14:14, Dominique Ramaekers wrote: Nevertheless, I would feel better if ceph just doesn't try to add the /dev/rbd to the cluster. It looks like your drivegroup specification is too generic. Can you post the YAML for that here? You should be as specific as possible with the sp
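As an illustration of a more specific OSD service spec, a minimal sketch (the service_id, host pattern and filters are hypothetical); restricting data_devices keeps cephadm from considering devices such as mapped /dev/rbd images:

  service_type: osd
  service_id: hdd_osds
  placement:
    host_pattern: 'hvs*'
  spec:
    data_devices:
      rotational: 1      # only spinning disks
      size: '1TB:'       # and only devices of at least 1 TB
    filter_logic: AND

Alternatively, setting unmanaged: true in the OSD service spec pauses automatic OSD creation entirely.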

[ceph-users] Re: Bug in crush algorithm? 1 PG with the same OSD twice.

2022-08-29 Thread Dan van der Ster
Hi Frank, CRUSH can only find 5 OSDs, given your current tree, rule, and reweights. This is why there is a NONE in the UP set for shard 6. But in ACTING we see that it is refusing to remove shard 6 from osd.1 -- that is the only copy of that shard, so in this case it's helping you rather than dele
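A quick way to compare the UP and ACTING sets being discussed (the PG id below is a placeholder):

  # show the up and acting OSD sets for one PG
  ceph pg map 7.1a

  # full detail, including peering state and missing shards
  ceph pg 7.1a query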

[ceph-users] Downside of many rgw bucket shards?

2022-08-29 Thread Boris Behrens
Hi there, I have some buckets that would require >100 shards and I would like to ask if there are any downsides to having this many shards on a bucket? Cheers Boris

[ceph-users] Re: Downside of many rgw bucket shards?

2022-08-29 Thread J. Eric Ivancich
Generally it’s a good thing. There’s less contention for bucket index updates when, for example, lots of writes are happening together. Dynamic resharding will take things up to 1999 shards on its own with the default config. Given that we use hashing of object names to determine which shard they
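For reference, a hedged sketch of checking and manually resharding a bucket (the bucket name and shard count are placeholders):

  # current shard count and object count for a bucket
  radosgw-admin bucket stats --bucket=mybucket

  # reshard manually to a higher shard count
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=199

  # monitor pending/ongoing reshard operations
  radosgw-admin reshard list
  radosgw-admin reshard status --bucket=mybucket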

[ceph-users] Re: Downside of many rgw bucket shards?

2022-08-29 Thread Anthony D'Atri
Do I recall that the number of shards is ideally odd, or even prime? Performance might be increased by indexless buckets if the application can handle > On Aug 29, 2022, at 10:06 AM, J. Eric Ivancich wrote: > > Generally it’s a good thing. There’s less contention for bucket index > updates

[ceph-users] Re: Downside of many rgw bucket shards?

2022-08-29 Thread Matt Benjamin
We choose prime-number shard counts, yes. Indexless buckets do increase insert/delete performance, but by definition an indexless bucket cannot be listed. Matt On Mon, Aug 29, 2022 at 1:46 PM Anthony D'Atri wrote: > Do I recall that the number of shards is ideally odd, or even prime? >

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Wyll Ingersoll
Can anyone explain why OSDs (Ceph Pacific, BlueStore OSDs) continue to grow well after they have exceeded the "full" level (95%), and is there any way to stop this? The full_ratio is 0.95 but we have several OSDs that continue to grow and are approaching 100% utilization. They are reweighted
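For context, a minimal sketch of inspecting the ratios and utilization involved (values are illustrative; raising the full ratios only buys temporary headroom, it does not free space):

  # current full / backfillfull / nearfull ratios
  ceph osd dump | grep ratio

  # per-OSD utilization and reweight values
  ceph osd df tree

  # temporary headroom while data is moved off the full OSDs
  ceph osd set-full-ratio 0.96
  ceph osd set-backfillfull-ratio 0.92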

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Dave Schulz
Hi Wyll, Any chance you're using CephFS and have some really large files in the CephFS filesystem? Erasure coding? I recently encountered a similar problem and as soon as the end-user deleted the really large files our problem became much more manageable. I had issues reweighting OSDs too an

[ceph-users] Re: rbd-mirror stops replaying journal on primary cluster

2022-08-29 Thread Josef Johansson
Hi, There's nothing special in the cluster when it stops replaying. It seems there is a journal entry that the local replayer doesn't handle, so it just stops. Since it's the local replayer that stops, there are no logs in rbd-mirror. The odd part is that rbd-mirror handles this totally fine and is the one
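For reference, the replay state discussed here can be inspected per image; a minimal sketch (pool and image names are placeholders):

  # journal-based mirroring status for one image
  rbd mirror image status mypool/myimage

  # journal metadata and registered clients (local replayer vs rbd-mirror)
  rbd journal info --pool mypool --image myimage
  rbd journal status --pool mypool --image myimage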