I recreated the placement target and was able to remove the bucket, so
that's fixed.
Since the bucket is removed, I have deleted the placement target.
When I restart RGW, I am getting the following during startup:
debug 2024-08-27T13:34:55.850+ 7f25d9b0f280 0 WARNING: This zone does not con
Hi folks,
Currently running a test Ceph cluster on 19.1.0 and seeing some odd behaviour
using the CreateTopic API.
Initially I call the CreateTopic API, and all looks well:
$ rga topic list
{
    "topics": [
        {
            "topics": [
                {
                    "owner": "ahk2"
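For context, a topic like the one above is normally created against RGW's SNS-compatible endpoint. A hedged sketch (the endpoint URL, topic name, and push-endpoint attribute are placeholders, not taken from this thread):

```shell
# Hypothetical example: create a topic via the AWS CLI against RGW's
# SNS-compatible API (endpoint and names are placeholders).
aws --endpoint-url http://rgw.example.com:8080 sns create-topic \
    --name my-topic \
    --attributes '{"push-endpoint": "http://receiver.example.com:9000"}'

# Topics can also be listed with the admin tooling:
radosgw-admin topic list
```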
Hi,
In our environment, only the administrator can create/delete volumes,
subvolumes, and subvolume groups.
The end-users (CephFS clients) can only access (mount) their "shared
folders" (subvolumes).
I tried this configuration before:
caps: [mds] allow rw fsname=cephfs
path=/volumes/${subvolumeg
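For comparison, a commonly used way to pin a client to a single subvolume path is `ceph fs authorize`; a hedged sketch (client name, filesystem, group, and subvolume names are placeholders, and the path would normally come from `ceph fs subvolume getpath`):

```shell
# Hypothetical sketch: authorize a client for one subvolume path only
# (all names are placeholders).
# First resolve the subvolume's real path:
ceph fs subvolume getpath cephfs subvol1 --group_name group1

# Then create a client restricted to that path:
ceph fs authorize cephfs client.enduser /volumes/group1/subvol1 rw
```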
Can you share 'ceph versions' output?
Do you see the same behaviour when adding a snapshot schedule, e.g.
rbd -p mirror snapshot schedule add 30m
I can't reproduce it, unfortunately; creating those mirror snapshots
manually still works for me.
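If it helps to cross-check, the configured schedules and mirroring state can be inspected like this (pool and image names are placeholders):

```shell
# List configured snapshot schedules for a pool (placeholder pool name):
rbd mirror snapshot schedule ls --pool mypool --recursive

# Check the mirroring status of a specific image:
rbd mirror image status mypool/myimage
```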
Quoting scott.cai...@tecnica-ltd.co.uk:
We h
Hi Yufan,
Could you please provide a bit more detail? In what way do you
want to restrict your user (a Ceph client user, correct)?
What does your client look like? (You can use "ceph auth get client.myuser"
to get the details.)
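For example (the client name is a placeholder, and the caps shown are only the typical output shape, not from this thread):

```shell
ceph auth get client.myuser
# Typical output shape (values are illustrative):
# [client.myuser]
#     key = AQ...==
#     caps mds = "allow rw fsname=cephfs"
#     caps mon = "allow r fsname=cephfs"
#     caps osd = "allow rw tag cephfs data=cephfs"
```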
Thank you,
Bogdan V.
croit.io
On Tue, Aug 27, 2024 at 3:31 PM wro
Hi,
is there anything on the roadmap to be able to choose a specific
daemon type to be entirely removed from a host, instead of all cephadm-
managed daemons? I just did a quick search in the tracker and on GitHub,
but it may be "hidden" somewhere else.
I was thinking about colocated daemons on a
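In the meantime, a label-based placement workaround is sometimes used for this; a hedged sketch (host and label names are placeholders):

```shell
# Workaround sketch: bind the daemon type's placement to a label,
# then remove that label from the host you want to evacuate.
ceph orch apply mon --placement 'label:mon'
ceph orch host label rm host1 mon
# cephadm should then remove only the mon daemon from host1, leaving
# other colocated daemons in place.
```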
We have the rbd-mirror daemon running on both sites; however, replication is only
one way (i.e. the one on the remote site is the only live one; the one on the
primary site is just there in case we ever need to set up two-way, but this is
not currently set up for any replication - so it makes sense th
Hi All,
How can I restrict a user so that it cannot create CephFS volumes,
subvolume groups, or subvolumes?
This user should only be able to access (mount) a subvolume.
Thanks in advance
Yufan Chen
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send a
Hi,
I just looked into one customer cluster that we upgraded some time ago
from Octopus to Quincy (17.2.6) and I'm wondering why there are still
both pools, "device_health_metrics" and ".mgr".
According to the docs [0], it's supposed to be renamed:
Prior to Quincy, the devicehealth module
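One way to check whether the old pool still holds anything (standard commands; whether it is safe to remove the pool depends on the cluster):

```shell
# List pools and see whether both still exist:
ceph osd pool ls | grep -E 'device_health_metrics|\.mgr'

# See whether the old pool still contains any objects:
rados -p device_health_metrics ls | head
```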
Hi,
On 16/05/2024 17:03, Adam King wrote:
At least for the current up-to-date reef branch (not sure which reef
version you're on), when --image is not provided to the shell, it should
try to infer the image in this order:
1. from the CEPHADM_IMAGE env. variable
2. if you pass --name with a dae
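The first step of that order can be exercised explicitly, e.g. (the image tag is a placeholder):

```shell
# Point cephadm at a specific container image via the env variable:
CEPHADM_IMAGE=quay.io/ceph/ceph:v18.2.4 cephadm shell
```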
Hi all,
How can I restrict a user so that it cannot create CephFS volumes,
subvolume groups, or subvolumes?
This user can only access (mount) a subvolume.
Thanks in advance
Yufan Chen