Hi
I'm running the latest Ceph Pacific, 16.2.13, with CephFS. I need to collect
performance stats per client, but I'm getting an empty list without any numbers.
I even ran dd on a client against the mounted CephFS, but the output only looks
like this:
#> ceph fs perf stats 0 4638 192.168.121.1
{"version": 2, "globa
Figured out how to cleanly relocate daemons via the interface. All is good.
-jeremy
> On Friday, Jun 09, 2023 at 2:04 PM, Me (mailto:jer...@skidrow.la) wrote:
> I’m doing a drain on a host using cephadm, Pacific, 16.2.11.
>
> ceph orch host drain
>
> removed all the OSDs, but these daemons rema
I’m doing a drain on a host using cephadm, Pacific, 16.2.11.
ceph orch host drain
removed all the OSDs, but these daemons remain:
grafana.cn06 cn06.ceph.la1 *:3000 stopped 5m ago 18M - -
mds.btc.cn06.euxhdu cn06.ceph.la1 running (2d) 5m ago 17M 29.4M - 16.2.11 de4b0b384ad4 017f7ef441ff
mgr.
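For anyone else left with stragglers after a drain: the remaining non-OSD daemons
usually clear out once their service placements no longer include the host, so one
way to finish the job is to re-apply the affected services and then remove the
host. A rough sketch using the names above (the count in the placement is just an
assumption):

#> ceph orch ps cn06.ceph.la1                        # daemons still assigned to the drained host
#> ceph orch apply mds btc --placement="count:2"     # example placement that no longer pins the MDS to cn06
#> ceph orch daemon rm grafana.cn06 --force          # or remove a leftover daemon directly
#> ceph orch host rm cn06.ceph.la1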
Hi Yuval,
Thanks for having a look at bucket notifications and collecting
feedback. I also see potential for improvement in the area of bucket
notifications.
We have observed issues in a setup with RabbitMQ as the broker, where the
RADOS queue seems to fill up and clients receive "slow down" re
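For anyone debugging the same thing, the broker-facing configuration of the
persistent topics can be inspected directly; a small sketch, with "mytopic" as a
placeholder name:

#> radosgw-admin topic list                  # all bucket-notification topics known to RGW
#> radosgw-admin topic get --topic=mytopic   # shows push-endpoint, persistency and ack-level settings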
TL;DR: We could not fix this problem in the end and ended up with a CephFS
in read-only mode (so we could only back up, delete and restore) and one
broken OSD (we deleted that one and restored it to a "new disk").
I can now wrap up my whole experience with this problem.
After the OSD usage grew to
Hi Eugen,
thanks for the response! :-)
We have (kind of) solved the immediate problem at hand. The whole process
was stuck because the MDSes were actually getting 'killed': the amount of RAM
we had allocated to them was simply insufficient to accommodate a complete
replay of the logs. Therefore,
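For completeness, a sketch of the kind of tuning that goes with this, assuming the
fix is simply to give the MDSes more memory and keep the cache target comfortably
below what is actually available to each daemon (the value below is only an
example):

#> ceph config set mds mds_cache_memory_limit 17179869184   # example: 16 GiB cache target for all MDS daemons
#> ceph fs status                                           # watch the ranks move through replay/resolve/rejoin to active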
A bucket with a policy that enforces "bucket-owner-full-control" results in
Access Denied
if multipart is used to upload the object.
It is also discussed in an awscli issue:
https://github.com/aws/aws-cli/issues/1674
aws client exits with "An error occurred (AccessDenied) when calling the
CreateMu
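A workaround that usually gets past this, assuming the bucket policy is the common
one that demands the bucket-owner-full-control canned ACL on writes: pass the ACL
explicitly so that the initial CreateMultipartUpload request (presumably the call
the truncated error above refers to) carries it. Bucket and key names below are
placeholders:

#> aws s3 cp ./bigfile s3://mybucket/bigfile --acl bucket-owner-full-control
#> aws s3api create-multipart-upload --bucket mybucket --key bigfile --acl bucket-owner-full-control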
Hi,
we are running a cluster that has been alive for a long time and we tread
carefully regarding updates. We are still a bit lagging and our cluster (that
started around Firefly) is currently at Nautilus. We’re updating and we know
we’re still behind, but we do keep running into challenges alo
Hi Patrick,
I'm afraid your ceph-post-file logs were lost to the nether. AFAICT,
our ceph-post-file storage has been non-functional since the beginning
of the lab outage last year. We're looking into it.
I have it here still. Any other way I can send it to you?
Extremely unlikely.
Okay, ta