Hi,
I'm new to radosgw (learned more about the MDS than I care to...), and it
seems like the buckets and objects created by one user cannot be accessed
by another user.
Is there a way to make any content created by User A accessible (read-only)
by User B?
From the documentation it looks like this should be pretty straightforward.
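In case it helps, this is roughly what I was planning to try based on the
bucket policy docs (just a sketch; "bucket-a" and the uid "userb" are
placeholders, and I haven't verified the principal format against our setup):

$ cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/userb"]},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::bucket-a", "arn:aws:s3:::bucket-a/*"]
  }]
}
EOF
# run as User A, who owns the bucket
$ s3cmd setpolicy policy.json s3://bucket-a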
I'm currently on Mimic 13.2.8, and have been putting off upgrading to
Nautilus, but I think it's time.
Thanks in advance,
-Paul Choi
9.2PiB of available space at the moment.
>
>
> On 26/03/2020 17:32, Paul Choi wrote:
>
> I can't quite explain what happened, but the Prometheus endpoint became
> stable after the free disk space for the largest pool dropped well
> below 1PB.
> I wonder if the
> is somewhat large-ish with 1248 OSDs, so I expect stat collection to
> take "some" time, but it definitely shouldn't crash the MGRs all the
> time.
> >
> > On 21/03/2020 02:33, Paul Choi wrote:
> >> Hi Janek,
> >>
> >> What versi
> every few hours and need to be restarted. The Prometheus
> plugin works, but it's pretty slow and so is the dashboard.
> Unfortunately, nobody seems to have a solution for this and I wonder why
> not more people are complaining about this problem.
>
>
> On 20/03/2020 19:30, Paul Choi wrote:
cherrypy.HTTPError(503, 'No MON connection')
HTTPError: (503, 'No MON connection')
Powered by CherryPy 3.5.0 (http://www.cherrypy.org)
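When it gets into this state, the only thing that helps here is bouncing the
active mgr so a standby takes over (a sketch; woodenbox2 happens to be our
active mgr, substitute yours):

# force a failover to one of the standby mgrs
$ ceph mgr fail woodenbox2
# or restart the daemon on its host
$ systemctl restart ceph-mgr@woodenbox2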
On Fri, Mar 20, 2020 at 6:33 AM Paul Choi wrote:
> Hello,
>
> We are running Mimic 13.
t throwing it out there)
  services:
    mon: 5 daemons, quorum woodenbox0,woodenbox2,woodenbox4,woodenbox3,woodenbox1
    mgr: woodenbox2(active), standbys: woodenbox0, woodenbox1
    mds: cephfs-1/1/1 up {0=woodenbox6=up:active}, 1 up:standby-replay
    osd: 3964 osds: 3928 up, 3928 in; 831 rema
$ ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd-0/ --out-dir
/tmp/bluefs-export-ceph-osd-0
To compact:
$ ceph-kvstore-tool rocksdb /tmp/bluefs-export-ceph-osd-0 compact
Put it back into Bluestore somehow. Profit??
On Fri, Dec 20, 2019 at 11:18 AM Paul Choi wrote:
> Hi,
>
> I have a weird situation where an O
Hi,
I have a weird situation where an OSD's rocksdb fails to compact, because
the OSD became full and the osd-full-ratio was 1.0 (not a good idea, I
know).
Hitting "bluefs enospc" while compacting:
-376> 2019-12-18 15:48:16.492 7f2e0a5ac700 1 bluefs _allocate failed to
allocate 0x40da486 on b
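For reference, this is how the ratios can be checked and put back to sane
defaults so an OSD gets marked full well before BlueStore actually runs out
of space (a sketch; 0.95/0.85 are the usual defaults):

$ ceph osd dump | grep ratio      # shows full_ratio, backfillfull_ratio, nearfull_ratio
$ ceph osd set-full-ratio 0.95
$ ceph osd set-nearfull-ratio 0.85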
On Mon, Dec 9, 2019 at 5:01 PM Paul Choi wrote:
> Hello,
>
> Anybody seeing the Prometheus endpoint hanging with the new 13.2.7 release?
> With 13.2.6 the endpoint would respond with a payload of 15MB in less than
> 10 seconds.
>
> Now, if you restart ceph-mgr, the Prometh
Hello,
Anybody seeing the Prometheus endpoint hanging with the new 13.2.7 release?
With 13.2.6 the endpoint would respond with a payload of 15MB in less than
10 seconds.
Now, if you restart ceph-mgr, the Prometheus endpoint responds quickly for
the first run, then successive runs get slower and slower.
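For anyone who wants to compare numbers, this is how I've been timing the
endpoint between mgr restarts (a sketch; 9283 is the default mgr prometheus
port, and woodenbox2 is just our active mgr, substitute yours):

$ curl -sS -o /dev/null -w 'time_total=%{time_total}s size=%{size_download}\n' \
    http://woodenbox2:9283/metrics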