Re: [ceph-users] Ceph pg repair clone_missing?

2019-10-03 Thread Brad Hubbard
On Thu, Oct 3, 2019 at 6:46 PM Marc Roos wrote: >> I was following the thread where you advised on this pg repair. >> I ran these 'rados list-inconsistent-obj'/'rados list-inconsistent-snapset' commands and have output on the snapset. I tried to extrapolate your comment
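For context, the inspection commands being discussed are roughly these (a sketch only; the pool name and PG id below are placeholders, not values from this thread):

  # list PGs flagged inconsistent in a pool
  rados list-inconsistent-pg <pool-name>

  # per-object inconsistencies for one PG
  rados list-inconsistent-obj <pg-id> --format=json-pretty

  # snapshot/clone inconsistencies (e.g. a missing clone) for the same PG
  rados list-inconsistent-snapset <pg-id> --format=json-pretty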

Re: [ceph-users] NFS

2019-10-03 Thread Marc Roos
Thanks Matt! Really useful configs. I am still on Luminous, so I can forget about this for now :( I will try when I am on Nautilus; I have already updated my configuration. However, it is interesting that the tenant is specified nowhere in the configuration, so I guess that is being extracted from the

Re: [ceph-users] NFS

2019-10-03 Thread Daniel Gryniewicz
"Path" is either "/" to indicate the top of the tree, or a bucket name to indicate a limited export for a single bucket. It's not related to the user at all. On Thu, Oct 3, 2019 at 10:34 AM Marc Roos wrote: > > > How should a multi tenant RGW config look like, I am not able get this > working: >

Re: [ceph-users] NFS

2019-10-03 Thread Matt Benjamin
Hi Marc, Here's an example that should work -- userx and usery are RGW users created in different tenants, like so: radosgw-admin --tenant tnt1 --uid userx --display-name "tnt1-userx" \ --access_key "userxacc" --secret "test123" user create radosgw-admin --tenant tnt2 --uid usery --display-n
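Laid out as separate commands, and with the truncated second command filled in by assumption (the tnt2 values below simply mirror the tnt1 ones; they are not quoted from the archive):

  radosgw-admin --tenant tnt1 --uid userx --display-name "tnt1-userx" \
      --access_key "userxacc" --secret "test123" user create

  # assumed continuation of the second, truncated command
  radosgw-admin --tenant tnt2 --uid usery --display-name "tnt2-usery" \
      --access_key "useryacc" --secret "test456" user create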

Re: [ceph-users] NFS

2019-10-03 Thread Nathan Fish
We have tried running nfs-ganesha (2.7 - 2.8.1) with FSAL_CEPH backed by a Nautilus CephFS. Performance when doing metadata operations (i.e. anything with small files) is very slow. On Thu, Oct 3, 2019 at 10:34 AM Marc Roos wrote: > What should a multi-tenant RGW config look like? I am not able

Re: [ceph-users] NFS

2019-10-03 Thread Marc Roos
What should a multi-tenant RGW config look like? I am not able to get this working: EXPORT { Export_ID=301; Path = "test:test3"; #Path = "/"; Pseudo = "/rgwtester"; Protocols = 4; FSAL { Name = RGW; User_Id = "test$tester1";
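For reference, a completed export of this shape might look like the sketch below; the Access_Type, key fields, closing braces and the RGW block are assumptions added for illustration, not part of the truncated snippet above:

  EXPORT {
      Export_ID = 301;
      Path = "test:test3";            # value from the snippet; "Path" is normally "/" or a bucket name
      Pseudo = "/rgwtester";
      Protocols = 4;
      Access_Type = RW;
      FSAL {
          Name = RGW;
          User_Id = "test$tester1";   # tenant-qualified RGW user
          Access_Key_Id = "<access-key>";
          Secret_Access_Key = "<secret>";
      }
  }

  RGW {
      ceph_conf = "/etc/ceph/ceph.conf";
      name = "client.rgw.gateway";    # placeholder RGW instance name
      cluster = "ceph";
  }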

Re: [ceph-users] Unexpected increase in the memory usage of OSDs

2019-10-03 Thread Vladimir Brik
And, just as unexpectedly, things have returned to normal overnight: https://icecube.wisc.edu/~vbrik/graph-1.png The change seems to have coincided with the beginning of Rados Gateway activity (before, it was essentially zero). I can see nothing in the logs that would explain what happened, though

Re: [ceph-users] NFS

2019-10-03 Thread Matt Benjamin
RGW NFS can support any NFS style of authentication, but users will have the RGW access of their nfs-ganesha export. You can create exports with disjoint privileges, and, since recent Luminous (L) and Nautilus (N) releases, RGW tenants as well. Matt On Tue, Oct 1, 2019 at 8:31 AM Marc Roos wrote: > I think you can run into problems >

Re: [ceph-users] NFS

2019-10-03 Thread Daniel Gryniewicz
So, Ganesha is an NFS gateway, living in userspace. It provides access via NFS (for any NFS client) to a number of clustered storage systems, or to local filesystems on its host. It can run on any system that has access to the cluster (Ceph in this case). One Ganesha instance can serve quite a
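For CephFS specifically, a minimal FSAL_CEPH export might look like the sketch below (assuming nfs-ganesha built with the Ceph FSAL and a CephX user for the gateway; all names are placeholders):

  EXPORT {
      Export_ID = 1;
      Path = "/";                 # path inside CephFS to export
      Pseudo = "/cephfs";
      Protocols = 4;
      Access_Type = RW;
      Squash = No_Root_Squash;
      FSAL {
          Name = CEPH;
          User_Id = "ganesha";    # CephX user (client.ganesha), placeholder
      }
  }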

Re: [ceph-users] rgw: multisite support

2019-10-03 Thread M Ranga Swami Reddy
Thank you. Do we have a quick document to do this migration? Thanks, Swami On Thu, Oct 3, 2019 at 4:38 PM Paul Emmerich wrote: > On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy wrote: > > Below url says: "Switching from a standalone deployment to a multi-site replicated deployment i

Re: [ceph-users] rgw: multisite support

2019-10-03 Thread Paul Emmerich
On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy wrote: > Below url says: "Switching from a standalone deployment to a multi-site replicated deployment is not supported." > https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html This is wrong, m
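The usual outline for promoting an existing standalone RGW to the master zone of a multi-site setup is roughly the following (a sketch based on the upstream multisite documentation, not on this thread; realm, zonegroup and zone names, endpoints, and keys are placeholders):

  # on the existing (to-be-master) cluster
  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup rename --rgw-zonegroup=default --zonegroup-new-name=us
  radosgw-admin zone rename --rgw-zone=default --zone-new-name=us-east --rgw-zonegroup=us
  radosgw-admin zonegroup modify --rgw-realm=myrealm --rgw-zonegroup=us \
      --endpoints=http://rgw1:8080 --master --default
  radosgw-admin user create --uid=sync-user --display-name="Synchronization User" \
      --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET --system
  radosgw-admin zone modify --rgw-realm=myrealm --rgw-zonegroup=us --rgw-zone=us-east \
      --endpoints=http://rgw1:8080 --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET \
      --master --default
  radosgw-admin period update --commit
  # then restart the radosgw daemon(s)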

Re: [ceph-users] ceph pg repair fails...?

2019-10-03 Thread Jake Grimmett
g happened on both PGs

[root@ceph-n10 ~]# zgrep "2.2a7" /var/log/ceph/ceph-osd.83.log*
/var/log/ceph/ceph-osd.83.log-20191002.gz:2019-10-01 07:19:47.060 7f9adab4b700 -1 log_channel(cluster) log [ERR] : 2.2a7 repair 11 errors, 0 fixed
/var/log/ceph/ceph-osd.83.log-20191003.g
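A sketch of the usual follow-up for a PG in that state (using the PG id 2.2a7 from the log lines above; which objects are affected comes from the JSON output):

  # which objects/shards are inconsistent, and why
  rados list-inconsistent-obj 2.2a7 --format=json-pretty

  # scrub state and acting set for the PG
  ceph pg 2.2a7 query

  # re-issue a repair once the cause is understood
  ceph pg repair 2.2a7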

Re: [ceph-users] rgw: multisite support

2019-10-03 Thread M Ranga Swami Reddy
The below URL says: "Switching from a standalone deployment to a multi-site replicated deployment is not supported." https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html Please advise. On Thu, Oct 3, 2019 at 3:28 PM M Ranga Swami Reddy wrote: > Hi, >

[ceph-users] rgw: multisite support

2019-10-03 Thread M Ranga Swami Reddy
Hi, I am using 2 Ceph clusters in different DCs (about 500 km apart), both running Ceph 12.2.11. Now I want to set up RGW multisite using these 2 clusters. Is it possible? If yes, please share a good document on how to do this. Thanks, Swami
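At a high level this is possible: one cluster hosts the master zone and the second cluster joins the realm as a secondary zone. A sketch of the secondary-side commands (placeholder names and endpoints; the system user's keys must match those created on the master):

  # on the second cluster: pull the realm and period from the master endpoint
  radosgw-admin realm pull --url=http://rgw1:8080 \
      --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET
  radosgw-admin realm default --rgw-realm=myrealm
  radosgw-admin period pull --url=http://rgw1:8080 \
      --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET

  # create the secondary zone and commit the period
  radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
      --endpoints=http://rgw2:8080 \
      --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET
  radosgw-admin period update --commit
  # then restart the radosgw daemon(s) on the second cluster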

Re: [ceph-users] rgw S3 lifecycle cannot keep up

2019-10-03 Thread Christian Pedersen
Thank you Robin. Looking at the video it doesn't seem like a fix is anywhere near ready. Am I correct in concluding that Ceph is not the right tool for my use-case? Cheers, Christian On Oct 3 2019, at 6:07 am, Robin H. Johnson wrote: > On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pederse

Re: [ceph-users] Ceph pg repair clone_missing?

2019-10-03 Thread Marc Roos
>> I was following the thread where you advised on this pg repair. >> I ran these 'rados list-inconsistent-obj'/'rados list-inconsistent-snapset' commands and have output on the snapset. I tried to extrapolate your comment on the data/omap_digest_mismatch_info onto my situation.

[ceph-users] Tiering Dirty Objects

2019-10-03 Thread Lazuardi Nasution
Hi, Is there any way to query the list of dirty objects inside a tier/hot pool? I only know how to see the number of them per pool. Best regards,