I have raised a PR to fix this; please see
https://github.com/ceph/ceph/pull/51931.
Thanks
- Xiubo
On 5/24/23 23:52, Sandip Divekar wrote:
Hi Team,
I'm writing to bring to your attention an issue we have encountered with the
"mtime" (modification time) behavior for directories in the Ceph filesystem …
Hi,
I'm not able to find information about the used size of a storage class in any of:
- bucket stats
- usage show
- user stats ...
Does Radosgw support it? Thanks
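[For reference, a sketch of running the three commands mentioned above from Python; the bucket name and user id are placeholders.]

import subprocess

# Placeholders: substitute a real bucket name and user id.
for cmd in (
    ['radosgw-admin', 'bucket', 'stats', '--bucket=mybucket'],
    ['radosgw-admin', 'usage', 'show', '--uid=myuser'],
    ['radosgw-admin', 'user', 'stats', '--uid=myuser'],
):
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    print(' '.join(cmd))
    print(out.stdout)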
Hi,
In Ceph Radosgw 15.2.17, I'm hitting an issue when trying to create a push endpoint
to Kafka.
Here is the push endpoint configuration:
endpoint_args = \
    'push-endpoint=kafka://abcef:123456@kafka.endpoint:9093&use-ssl=true&ca-location=/etc/ssl/certs/ca.crt'
attributes = {nvp[0]: nvp[1] for nvp in urllib.parse.parse_qsl(endpoint_args)}
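[For comparison, a minimal sketch of how such a topic is usually created against RGW's SNS-compatible API with boto3; the RGW endpoint URL, credentials and topic name are placeholders, and the ca-location path has to be readable by the RGW daemons, since they are the Kafka clients.]

import urllib.parse
import boto3

# Placeholders: RGW endpoint and credentials of the topic owner.
sns = boto3.client('sns',
                   endpoint_url='http://rgw.example.com:8000',
                   aws_access_key_id='ACCESS_KEY',
                   aws_secret_access_key='SECRET_KEY',
                   region_name='default')

endpoint_args = ('push-endpoint=kafka://abcef:123456@kafka.endpoint:9093'
                 '&use-ssl=true&ca-location=/etc/ssl/certs/ca.crt')
attributes = {nvp[0]: nvp[1] for nvp in urllib.parse.parse_qsl(endpoint_args)}

# Create (or update) the topic that bucket notifications will publish to.
sns.create_topic(Name='kafka-topic', Attributes=attributes)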
Hi everyone!
I have a containerised (cephadm built) 17.2.6 cluster where I have
installed a custom commercial SSL certificate under dashboard.
Before I upgraded from 17.2 to 17.2.6, I successfully installed the
custom SSL cert everywhere, including Grafana, but since the upgrade I
am finding …
If you can stop the RGWs, you can create a new pool with 32 PGs, rados cppool
this one over to the new pool, then rename the pools so the new one ends up
with the right name (and application), and start the RGWs again.
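[A sketch of that sequence, assuming the index pool named later in the thread (.rgw.buckets.index) and placeholder ".new"/".old" pool names; the RGW daemons stay stopped for the whole procedure.]

import subprocess

def run(*cmd):
    # Echo and execute, aborting on the first failure.
    print('+', ' '.join(cmd))
    subprocess.run(cmd, check=True)

# With all RGW daemons stopped:
run('ceph', 'osd', 'pool', 'create', '.rgw.buckets.index.new', '32')
run('rados', 'cppool', '.rgw.buckets.index', '.rgw.buckets.index.new')
run('ceph', 'osd', 'pool', 'rename', '.rgw.buckets.index', '.rgw.buckets.index.old')
run('ceph', 'osd', 'pool', 'rename', '.rgw.buckets.index.new', '.rgw.buckets.index')
run('ceph', 'osd', 'pool', 'application', 'enable', '.rgw.buckets.index', 'rgw')
# Start the RGWs again; remove .rgw.buckets.index.old once everything checks out.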
On Mon, 5 June 2023 at 16:43, Louis Koo wrote:
>
> ceph version is 16.2.13;
>
> The pg_num is 1024, and the target_pg_num is 32 …
Dear Wes,
thank you for your suggestion! I restarted OSDs 57 and 79 and the
recovery operations resumed as well. In the logs I found that a kernel
issue had been reported for both of them, although they were not in an
error state; they probably got stuck because of that.
Thanks again for your help,
Nicola
When PGs are degraded they won't scrub. Furthermore, if an OSD is involved in
the recovery of another PG it won't accept scrubs either, so that is the likely
explanation of your not-scrubbed-in-time issue. It's of low concern.
Are you sure that recovery is not progressing? I see: "7349/147534197
objects degraded …
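[In case it helps, a small sketch to check whether the degraded-object count quoted above is actually shrinking; it assumes 'ceph status --format json', and the exact JSON field names can differ between releases.]

import json
import subprocess
import time

def degraded_objects():
    # The degraded counter lives in the 'pgmap' section of 'ceph status'.
    out = subprocess.run(['ceph', 'status', '--format', 'json'],
                         capture_output=True, check=True, text=True).stdout
    return json.loads(out)['pgmap'].get('degraded_objects', 0)

before = degraded_objects()
time.sleep(60)                      # sample again a minute later
after = degraded_objects()
print(f'degraded objects: {before} -> {after}')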
That said, our MON store size has also been growing slowly, from 900 MB to
5.4 GB. But we also have a few remapped PGs right now; I'm not sure whether
that has an influence.
On 05/06/2023 17:48, Janek Bevendorff wrote:
Hi Patrick, hi Dan!
I got the MDS back and I think the issue is connected to the …
Dear Ceph users,
after an outage and recovery of one machine I have several PGs stuck in
active+recovering+undersized+degraded+remapped. Furthermore, many PGs
have not been (deep-)scrubbed in time. See below for status and health
details.
It's been like this for two days, with no recovery I/O …
Hi Patrick, hi Dan!
I got the MDS back and I think the issue is connected to the "newly
corrupt dentry" bug [1]. Even though I couldn't see any particular
reason for the SIGABRT at first, I then noticed one of these awfully
familiar stack traces.
I rescheduled the two broken MDS ranks on two …
The Ceph version is 16.2.13.
The pg_num is 1024 and the target_pg_num is 32; there is no data in the
".rgw.buckets.index" pool, but reducing the pg_num is taking a long time.
Hi folks,
My Ceph cluster with Quincy on Rocky 9 is up and running,
but I'm having issues with Swift authenticating with Keystone.
I was wondering if I've missed anything in the configuration.
From the debug logs below, it appears that radosgw is still trying to
authenticate with Swift instead of Keystone …
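[Not a diagnosis, but for comparison, a sketch of the Keystone-related options RGW typically needs, set here via 'ceph config set'; all values are placeholders for an assumed Keystone v3 endpoint.]

import subprocess

# Placeholder values; adjust to your Keystone deployment.
options = {
    'rgw_keystone_url': 'https://keystone.example.com:5000',
    'rgw_keystone_api_version': '3',
    'rgw_keystone_admin_user': 'rgw',
    'rgw_keystone_admin_password': 'secret',
    'rgw_keystone_admin_domain': 'Default',
    'rgw_keystone_admin_project': 'service',
    'rgw_keystone_accepted_roles': 'member,admin',
}
for opt, val in options.items():
    subprocess.run(['ceph', 'config', 'set', 'client.rgw', opt, val], check=True)
# Restart the RGW daemons afterwards so the options take effect.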
Hi,
Is it possible to disable ACLs in favor of bucket policies (on a bucket or
globally)?
The goal is to forbid users from using any bucket/object ACLs and to only
allow bucket policies.
There seems to be no documentation on this that applies to Ceph RGW.
Apologies if I am sending this to the wrong mailing list.
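[One per-bucket approach is to deny the ACL-modifying S3 calls via bucket policy, so that only policy grants remain effective; a sketch with boto3, where the RGW endpoint, credentials and bucket name are placeholders. Whether this satisfies the "globally" part of the question is a separate matter.]

import json
import boto3

# Placeholders: RGW endpoint, bucket-owner credentials, bucket name.
s3 = boto3.client('s3',
                  endpoint_url='http://rgw.example.com:8000',
                  aws_access_key_id='ACCESS_KEY',
                  aws_secret_access_key='SECRET_KEY')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAclChanges",
        "Effect": "Deny",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:PutBucketAcl", "s3:PutObjectAcl"],
        "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"],
    }],
}
s3.put_bucket_policy(Bucket='mybucket', Policy=json.dumps(policy))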
I just had the problem again: the MDS were constantly reporting slow
metadata I/O and the pool was slowly growing, so I restarted the MDS,
and now ranks 4 and 5 don't come up again.
Every time they get to the resolve stage, they crash with a SIGABRT
without an error message (not even at debug …
Hi Andreas,
> On 5 Jun 2023, at 14:57, Andreas Haupt wrote:
>
> after the update to CEPH 16.2.13 the Prometheus exporter is wrongly
> exporting multiple metric help & type lines for ceph_pg_objects_repaired:
>
> [mon1] /root # curl -sS http://localhost:9283/metrics
> # HELP ceph_pg_objects_repaired …
Dear all,
after the update to CEPH 16.2.13 the Prometheus exporter is wrongly
exporting multiple metric help & type lines for ceph_pg_objects_repaired:
[mon1] /root # curl -sS http://localhost:9283/metrics
# HELP ceph_pg_objects_repaired Number of objects repaired in a pool Count
# TYPE ceph_pg_objects_repaired …
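[A quick sketch to confirm the duplication from the exporter output, using the same URL as above; each metric should have exactly one HELP line.]

from collections import Counter
from urllib.request import urlopen

text = urlopen('http://localhost:9283/metrics').read().decode()
# '# HELP <metric> <description>' -- count how often each metric is described.
helps = Counter(line.split()[2] for line in text.splitlines()
                if line.startswith('# HELP '))
for metric, count in helps.items():
    if count > 1:
        print(metric, 'has', count, 'HELP lines')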
Hi Cephers,
In a multisite config with one zonegroup and two zones, when I look at
`radosgw-admin zonegroup get`,
I see these two parameters set by default:
"log_meta": "false",
"log_data": "true",
Where can I find documentation on these? I can't find any.
I set log_meta to true …
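[For the mechanics only (whether log_meta still has any effect is exactly the open question here), a sketch of changing it, assuming 'radosgw-admin zonegroup set' accepts the JSON on stdin as in the multisite docs.]

import json
import subprocess

def radosgw_admin(*args, stdin=None):
    # Thin wrapper returning stdout of a radosgw-admin call.
    return subprocess.run(('radosgw-admin',) + args, input=stdin,
                          capture_output=True, check=True, text=True).stdout

zg = json.loads(radosgw_admin('zonegroup', 'get'))
zg['log_meta'] = 'true'                        # values are strings, as in the output above
radosgw_admin('zonegroup', 'set', stdin=json.dumps(zg))
radosgw_admin('period', 'update', '--commit')  # commit the period so the change propagates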
Any other thoughts on this, please? Should I file a bug report?
/Z
On Fri, 2 Jun 2023 at 06:11, Zakhar Kirpichenko wrote:
> Thanks, Josh. The cluster is managed by cephadm.
>
> On Thu, 1 Jun 2023 at 23:07, Josh Baergen wrote:
>
>> Hi Zakhar,
>>
>> I'm going to guess that it's a permissions issue