Re: [ceph-users] ceph tell mds.a scrub status "problem getting command descriptions"

2019-12-13 Thread Marc Roos
client.admin did not have the correct rights. Granting full caps fixed it:

    ceph auth caps client.admin mds "allow *" mgr "allow *" mon "allow *" osd "allow *"
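For reference, a minimal sketch of the fix described above; the cap string is taken from this thread, and the final command assumes the same "ceph tell mds.a" invocation as the original post:

    # Inspect the caps currently assigned to the admin key
    ceph auth get client.admin

    # Grant full caps on mds/mgr/mon/osd (the fix from this thread)
    ceph auth caps client.admin mds "allow *" mgr "allow *" mon "allow *" osd "allow *"

    # Re-run the command that previously failed with EPERM
    ceph tell mds.a scrub status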

[ceph-users] ceph tell mds.a scrub status "problem getting command descriptions"

2019-12-13 Thread Marc Roos
ceph tell mds.a scrub status

generates:

    2019-12-14 00:46:38.782 7fef4affd700  0 client.3744774 ms_handle_reset on v2:192.168.10.111:6800/3517983549
    Error EPERM: problem getting command descriptions from mds.a
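A short diagnostic sketch for this kind of EPERM; it assumes the command runs as client.admin (the default) and that the missing piece is an mds cap, as the reply above confirms:

    # Show which caps the key actually has; if no "caps mds" line
    # appears, "ceph tell mds.*" commands will fail with EPERM
    ceph auth get client.admin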

[ceph-users] deleted snap dirs are back as _origdir_1099536400705

2019-12-13 Thread Marc Roos
I thought I had deleted these snapshot dirs, but they are still there under a different name. How do I get rid of them?

    [@ .snap]# ls -1
    _snap-1_1099536400705
    _snap-2_1099536400705
    _snap-3_1099536400705
    _snap-4_1099536400705
    _snap-5_1099536400705
    _snap-6_1099536400705
    _snap-7_1099536400705
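A hedged sketch of one way to clean these up. In CephFS, a .snap entry of the form _<name>_<inode> indicates a snapshot created on an ancestor directory with that inode number, so it has to be removed there rather than in the subdirectory; the mount point and paths below are hypothetical:

    # Locate the ancestor directory whose inode is 1099536400705
    find /mnt/cephfs -xdev -inum 1099536400705 -type d

    # In that directory's own .snap, the snapshots appear under their
    # plain names and can be removed with rmdir
    cd /mnt/cephfs/path/found/above
    rmdir .snap/snap-1 .snap/snap-2   # and so on through snap-7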

[ceph-users] ceph-volume sizing osds

2019-12-13 Thread Oscar Segarra
Hi, I have recently started working with the Ceph Nautilus release and realized that you have to use LVM to create OSDs instead of the "old fashioned" ceph-disk. In terms of performance and best practices, given that I must use LVM, I can create volume groups that join or extend
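Since the question is cut short, a small sketch of the two common ceph-volume patterns; the device names and VG/LV names are hypothetical:

    # Let ceph-volume create the VG/LV on a whole device
    ceph-volume lvm create --data /dev/sdb

    # Or pre-create the LVM layout yourself and hand the LV to ceph-volume
    vgcreate ceph-vg /dev/sdb /dev/sdc
    lvcreate -n osd-lv -l 100%FREE ceph-vg
    ceph-volume lvm create --data ceph-vg/osd-lv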

Re: [ceph-users] Ceph assimilated configuration - unable to remove item

2019-12-13 Thread David Herselman
Hi, I've logged a bug report (https://tracker.ceph.com/issues/43296?next_issue_id=43295&prev_issue_id=43297) and Alwin from Proxmox was kind enough to provide a workaround: ceph config rm global rbd_default_features; ceph config-key rm config/global/rbd_default_features; ceph config set global
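The quoted commands are cut off, but the general pattern for purging an assimilated option is to remove it from both the config database and the raw config-key store, then confirm it is gone; rbd_default_features is the option from this thread:

    ceph config rm global rbd_default_features
    ceph config-key rm config/global/rbd_default_features

    # Verify the option no longer appears in either store
    ceph config dump | grep rbd_default_features
    ceph config-key ls | grep rbd_default_features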

Re: [ceph-users] Ceph rgw pools per client

2019-12-13 Thread Ed Fisher
You're looking for placement targets: https://docs.ceph.com/docs/master/radosgw/placement/ Basically, create as many placement targets as you want in your zone and then set the default placement for each user as needed. However, I don't think there's any
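A sketch of the flow from the linked placement docs; the placement id, uid, and pool names (borrowed from the question below) are hypothetical:

    # Register a new placement target in the zonegroup
    radosgw-admin zonegroup placement add --rgw-zonegroup default \
        --placement-id client1-placement

    # Map it to dedicated pools in the zone
    radosgw-admin zone placement add --rgw-zone default \
        --placement-id client1-placement \
        --data-pool rgw.data1 --index-pool rgw.index1 \
        --data-extra-pool rgw.non-ec1

    # Make it the default placement for a user at creation time
    radosgw-admin user create --uid client1 --display-name "Client 1" \
        --placement-id client1-placement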

[ceph-users] Ceph rgw pools per client

2019-12-13 Thread M Ranga Swami Reddy
Hello - I want to have 2 different rgw pools for 2 different clients. For example:

For client#1 - rgw.data1, rgw.index1, rgw.user1, rgw.metadata1
For client#2 - rgw.data2, rgw.index2, rgw.user2, rgw.metadata2

Is the above possible with ceph radosgw? Thanks, Swami
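Following the reply above, a quick way to check that each client's data actually lands in its own pools; the bucket name is hypothetical:

    # "placement_rule" in the output shows which placement target
    # (and therefore which pools) the bucket uses
    radosgw-admin bucket stats --bucket=client1-bucket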