[ceph-users] radosgw - how to grant read-only access to another user by default

2020-06-12 Thread Paul Choi
Hi, I'm new to radosgw (learned more about the MDS than I care to...), and it seems like the buckets and objects created by one user cannot be accessed by another user. Is there a way to make any content created by User A accessible (read-only) by User B? From the documentation it looks like thi
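Not part of the original post, but one common way to do this (a minimal sketch, assuming S3-style access; the bucket name "mybucket" and the uid "userB" are placeholders) is for User A to attach a bucket policy that grants the other user read-only permissions:

$ cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/userB"]},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }]
}
EOF
$ s3cmd setpolicy policy.json s3://mybucket   # run with User A's credentials

Note that a bucket policy applies per bucket, so this is not a cluster-wide default for all content User A creates.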

[ceph-users] Upgrading from Mimic to Nautilus

2020-04-02 Thread Paul Choi
retty straightforward. I'm currently on Mimic 13.2.8, and have been putting off upgrading to Nautilus, but I think it's time. Thanks in advance, -Paul Choi
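For context (not from this thread), a Mimic-to-Nautilus rolling upgrade follows the mon, mgr, OSD, then MDS/RGW order; a rough sketch, with the package-upgrade steps depending on your distro:

$ ceph osd set noout                      # avoid rebalancing during restarts
# upgrade packages and restart ceph-mon on each monitor host, one at a time
$ ceph mon versions                       # confirm all mons report 14.2.x
# then restart the ceph-mgr daemons, then the ceph-osd daemons host by host
$ ceph osd require-osd-release nautilus   # after every OSD runs Nautilus
$ ceph osd unset noout
$ ceph mon enable-msgr2                   # optionally enable the v2 protocol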

[ceph-users] Re: No reply or very slow reply from Prometheus plugin - ceph-mgr 13.2.8 mimic

2020-03-26 Thread Paul Choi
9.2PiB of available space at the moment. > > > On 26/03/2020 17:32, Paul Choi wrote: > > I can't quite explain what happened, but the Prometheus endpoint became > stable after the free disk space for the largest pool went substantially > lower than 1PB. > I wonder if the

[ceph-users] Re: No reply or very slow reply from Prometheus plugin - ceph-mgr 13.2.8 mimic

2020-03-26 Thread Paul Choi
> is somewhat large-ish with 1248 OSDs, so I expect stat collection to > > take "some" time, but it definitely shouldn't crush the MGRs all the > time. > > > > On 21/03/2020 02:33, Paul Choi wrote: > >> Hi Janek, > >> > >> What versi

[ceph-users] Re: No reply or very slow reply from Prometheus plugin - ceph-mgr 13.2.8 mimic

2020-03-20 Thread Paul Choi
ery few hours and need to be restarted. The Prometheus > plugin works, but it's pretty slow and so is the dashboard. > Unfortunately, nobody seems to have a solution for this and I wonder why > not more people are complaining about this problem. > > > On 20/03/2020 19:30, Paul Choi

[ceph-users] Re: No reply or very slow reply from Prometheus plugin - ceph-mgr 13.2.8 mimic

2020-03-20 Thread Paul Choi
py.HTTPError(503, 'No MON connection') HTTPError: (503, 'No MON connection') Powered by CherryPy 3.5.0 (http://www.cherrypy.org) On Fri, Mar 20, 2020 at 6:33 AM Paul Choi wrote: > Hello, > > We are running Mimic 13.

[ceph-users] No reply or very slow reply from Prometheus plugin - ceph-mgr 13.2.8 mimic

2020-03-20 Thread Paul Choi
t throwing it out there) services: mon: 5 daemons, quorum woodenbox0,woodenbox2,woodenbox4,woodenbox3,woodenbox1 mgr: woodenbox2(active), standbys: woodenbox0, woodenbox1 mds: cephfs-1/1/1 up {0=woodenbox6=up:active}, 1 up:standby-replay osd: 3964 osds: 3928 up, 3928 in; 831 rema
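A quick way to confirm whether the active mgr's prometheus module is the bottleneck (a generic sketch, not from the thread; 9283 is the module's default port) is to time a scrape and, if it hangs, fail over to a standby:

$ time curl -s -o /dev/null http://woodenbox2:9283/metrics
$ ceph mgr fail woodenbox2     # hand off to a standby (woodenbox0 or woodenbox1)
$ ceph mgr module disable prometheus && ceph mgr module enable prometheus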

[ceph-users] Re: Copying out bluestore's rocksdb, compact, then put back in - Mimic 13.2.6/13.2.8

2019-12-20 Thread Paul Choi
--path /var/lib/ceph/osd-0/ --out-dir /tmp/bluefs-export-ceph-osd-0 To compact: $ ceph-kvstore-tool rocksdb /tmp/bluefs-export-ceph-osd-0 compact Put it back into Bluestore somehow. Profit?? On Fri, Dec 20, 2019 at 11:18 AM Paul Choi wrote: > Hi, > > I have a weird situation where an O
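The snippet above truncates the first command; assuming it is ceph-bluestore-tool bluefs-export (which matches the --path/--out-dir flags) and using osd.0 as the example, the export-and-compact steps would look roughly like:

# sketch only; stop the OSD before exporting its BlueFS contents
$ systemctl stop ceph-osd@0
$ ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd-0/ \
      --out-dir /tmp/bluefs-export-ceph-osd-0
$ ceph-kvstore-tool rocksdb /tmp/bluefs-export-ceph-osd-0 compact
# re-importing the compacted DB into BlueFS is the open question in the thread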

[ceph-users] Copying out bluestore's rocksdb, compact, then put back in - Mimic 13.2.6/13.2.8

2019-12-20 Thread Paul Choi
Hi, I have a weird situation where an OSD's rocksdb fails to compact, because the OSD became full and the osd-full-ratio was 1.0 (not a good idea, I know). Hitting "bluefs enospc" while compacting: -376> 2019-12-18 15:48:16.492 7f2e0a5ac700 1 bluefs _allocate failed to allocate 0x40da486 on b

[ceph-users] Re: Prometheus endpoint hanging with 13.2.7 release?

2019-12-20 Thread Paul Choi
. On Mon, Dec 9, 2019 at 5:01 PM Paul Choi wrote: > Hello, > > Anybody seeing the Prometheus endpoint hanging with the new 13.2.7 release? > With 13.2.6 the endpoint would respond with a payload of 15MB in less than > 10 seconds. > > Now, if you restart ceph-mgr, the Prometh

[ceph-users] Prometheus endpoint hanging with 13.2.7 release?

2019-12-09 Thread Paul Choi
Hello, Anybody seeing the Prometheus endpoint hanging with the new 13.2.7 release? With 13.2.6 the endpoint would respond with a payload of 15MB in less than 10 seconds. Now, if you restart ceph-mgr, the Prometheus endpoint responds quickly for the first run, then successive runs get slower and s
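A simple way to observe the slowdown described here (a generic sketch; "mgr-host" is a placeholder and 9283 is the module's default port, neither from the post) is to time successive scrapes:

# each scrape should take noticeably longer if the module is degrading
$ for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{time_total}s\n" http://mgr-host:9283/metrics; done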