Question about the osdmaptool deviation calculations. For instance:

osdmaptool omap --upmap output.txt --upmap-pool cephfs_data-rep3 --upmap-max 1000 --upmap-deviation 5
osdmaptool: osdmap file 'omap'
writing upmap command output to: output.txt
checking for upmap cleanups
upmap, max-count 1000, max deviation 5
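For context, the full flow is roughly this ('omap' and 'output.txt' are just file names, and the apply step assumes you've reviewed the output first):

#> ceph osd getmap -o omap
#> osdmaptool omap --upmap output.txt --upmap-pool cephfs_data-rep3 --upmap-max 1000 --upmap-deviation 5
#> source output.txt

output.txt ends up as a list of 'ceph osd pg-upmap-items ...' commands, which is why sourcing it applies the proposed mappings.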
I think this is capped at 1000 by the config setting. I've used the aws
and s3cmd clients to delete more than 1000 objects at a time, and it
works even with the config setting capped at 1000. But it is a bit slow.
#> ceph config help rgw_delete_multi_obj_max_num
rgw_delete_multi_obj_max_num - Max number of objects in a single multi-object delete request
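If you do want to raise the cap, something like this should work; the value is just an example, and depending on the release the RGWs may need a restart to pick it up:

#> ceph config set client.rgw rgw_delete_multi_obj_max_num 5000

On the client side, I believe the aws cli already splits recursive deletes into 1000-key DeleteObjects batches, which is why something like this works even with the default cap (bucket and endpoint are placeholders):

#> aws s3 rm s3://mybucket --recursive --endpoint-url https://rgw.example.com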
Ramin,
I think you're still going to experience what Casey described.
If your intent is to completely isolate bucket metadata/data in one
zonegroup from another, then I believe you need multiple independent
realms, each with its own endpoint.
For instance:
Ceph Cluster A
Realm1/zonegroup1/zone1
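If it helps, standing up a second, fully independent realm looks roughly like this; the names and endpoint are placeholders, and the RGW daemons serving it then get pointed at it via rgw_realm/rgw_zonegroup/rgw_zone:

#> radosgw-admin realm create --rgw-realm=realm2
#> radosgw-admin zonegroup create --rgw-zonegroup=zonegroup2 --rgw-realm=realm2 --endpoints=https://rgw2.example.com:443 --master
#> radosgw-admin zone create --rgw-realm=realm2 --rgw-zonegroup=zonegroup2 --rgw-zone=zone2 --endpoints=https://rgw2.example.com:443 --master
#> radosgw-admin period update --commit --rgw-realm=realm2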
Unless I'm misunderstanding your situation, you could also tag your
placement targets. You then tag users with the corresponding tag,
enabling them to create new buckets at that placement target. If a user
is not tagged with the corresponding tag, they cannot create new buckets
at that placement target.
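As a rough sketch, with made-up placement-id, tag, and uid (check the flags against your release):

#> radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id special-placement --tags special-tag
#> radosgw-admin period update --commit
(the period commit applies in a realm/multisite setup)

Then give the user the matching tag by editing 'placement_tags' in the user metadata:

#> radosgw-admin metadata get user:someuser > user.json
(set "placement_tags": ["special-tag"] in user.json)
#> radosgw-admin metadata put user:someuser < user.json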
I ran into this same puzzling behavior and never resolved it. I *think*
it is a benign bug that can be ignored.
Here's what I found.
The crash service first attempts to ping the cluster to exercise the
key. "pinging cluster" is accomplished with a `ceph -s`.
# v18.2.4: src/ceph-crash.in
That ping only succeeds if the key name the script tries is actually
registered in "auth ls".
I didn't confirm it, but my best guess is the chain breaks when the auth
entity is created with the cephadm host name while the name the remote script
uses is derived from the "hostname" directive. That's also implied in the link
you shared.
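A quick way to check whether that mismatch is what's biting you (assuming the usual client.crash.<host> key naming cephadm uses):

#> ceph orch host ls
#> hostname; hostname -f
#> ceph auth ls | grep client.crash

If the host names in 'ceph orch host ls' and what the node reports don't line up with the client.crash.* entities that exist, the ping fails the way described above.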
On Wed, 16 Apr 2025, Can Özyurt wrote:
Did you add the host in cephadm with the full name including domain? You
can check with ceph orch host ls.
I have had similar issues specifying what I'm actually trying to do.
One idea I had, though it may be a bit much, would be an interactive
radosgw-admin prompt, similar to the way switch/router prompts work;
e.g., you would enter a realm by selecting it (the same way you enter
an interface).
I would also be interested in hearing from anyone who is successfully
using nfs-ganesha in an HA configuration.
Our internal tests have the same results: unreliable, bogs down easily, etc.
Looks like the nfs-ganesha version included with Ceph jumped from v5.5
(included with Reef) to v5.9 (included with Squid).