[ceph-users] RGW: LC not deleting expired files

2021-07-26 Thread Paul JURCO
Hi! I need some help understanding LC processing. On the latest versions of Octopus (tested with 15.2.13 and 15.2.8) we have at least one bucket whose objects are not removed when they expire. The size of the bucket reported with radosgw-admin compared with the one obtained with s3cmd is

[ceph-users] Re: RGW: LC not deleting expired files

2021-07-26 Thread Paul JURCO
create a delete-marker for every object and move the object version from current to non-current, thereby reflecting the same number of objects in bucket stats output ]. > Vidushi > On Mon, Jul 26, 2021 at 4:55 PM Paul JURCO wrote: >> Hi! >> I need some help
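The behaviour described above can be checked from the client side. A hedged sketch using the AWS CLI — the endpoint and bucket name are placeholders, not values from the thread:

```shell
# On a versioned bucket, an Expiration rule only adds a delete marker and makes
# the old version noncurrent, so `radosgw-admin bucket stats` still counts it.
aws --endpoint-url https://rgw.example.com s3api get-bucket-versioning \
    --bucket mybucket
# List the delete markers and noncurrent versions that LC left behind:
aws --endpoint-url https://rgw.example.com s3api list-object-versions \
    --bucket mybucket --max-items 10
```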

[ceph-users] Re: RGW: LC not deleting expired files

2021-07-29 Thread Paul JURCO
We have set rgw_lc_debug_interval to something low and executed lc process, but it ignored this bucket completely, as I can see in the logs. Any suggestion is welcome, as I bet we have other buckets in the same situation. Thank you! Paul On Mon, Jul 26, 2021 at 2:59 PM Paul JURCO wrote: > Hi Vidushi, > aws s

[ceph-users] Re: [Suspicious newsletter] Re: RGW: LC not deleting expired files

2021-07-29 Thread Paul JURCO
[truncated quote of a lifecycle configuration (xmlns "http://s3.amazonaws.com/doc/2006-03-01/") with a rule named "Incomplete Multipart Uploads", Status "Enabled", and the value 1]
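The lifecycle rule quoted above is badly truncated by the archive. A hedged reconstruction — the tag layout and the s3cmd upload step are assumptions; only the xmlns, the rule name "Incomplete Multipart Uploads", "Enabled", and the value 1 are visible in the post:

```shell
# Plausible shape of the quoted rule (assumed layout, values from the fragment).
cat > lifecycle.xml <<'EOF'
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>Incomplete Multipart Uploads</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortIncompleteMultipartUpload>
      <DaysAfterInitiation>1</DaysAfterInitiation>
    </AbortIncompleteMultipartUpload>
  </Rule>
</LifecycleConfiguration>
EOF
s3cmd setlifecycle lifecycle.xml s3://mybucket
```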

[ceph-users] Octopus: Cannot delete bucket

2021-09-07 Thread Paul JURCO
Hi! I have upgraded to 15.2.14 in order to be able to delete an old bucket stuck at:
2021-09-08T08:47:15.216+0300 7f96ddfe7080 0 abort_bucket_multiparts WARNING : aborted 34333 incomplete multipart uploads
2021-09-08T08:47:17.012+0300 7f96ddfe7080 0 abort_bucket_multiparts WARNING : aborted 343
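A common way to approach a bucket stuck on incomplete multipart uploads, sketched with radosgw-admin — the bucket name is an example and this is not the thread's confirmed resolution:

```shell
# Let RGW re-check and repair the bucket index first, then retry removal.
radosgw-admin bucket check --bucket=mybucket --check-objects --fix
# Remove the bucket together with any remaining objects:
radosgw-admin bucket rm --bucket=mybucket --purge-objects
```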

[ceph-users] Re: Octopus: Cannot delete bucket

2021-09-13 Thread Paul JURCO
How should I properly ask for an investigation into this bug? It looks like it is not fixed. -- Paul On Wed, Sep 8, 2021 at 9:07 AM Paul JURCO wrote: > Hi! > I have upgraded to 15.2.14 in order to be able to delete an old bucket > stuck at: > > > *2021-09-08T08:47:15.216+03

[ceph-users] 16.2.9 High rate of Segmentation fault on ceph-osd processes

2022-08-10 Thread Paul JURCO
2.8 and two days later to 16.2.9 on the cluster with the crashes. 6 segfaults are on 2 TB disks, 8 are on 1 TB disks. The 2 TB disks are newer (below 2 years old). Could it be related to hardware? Thank you! -- Paul Jurco ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: 16.2.9 High rate of Segmentation fault on ceph-osd processes

2022-08-10 Thread Paul JURCO
Hi! All restarted as required by the upgrade plan, in the proper order, and all software was upgraded on all nodes. We are on Ubuntu 18 (all nodes). "ceph versions" output shows everything is on "16.2.9". Thank you! -- Paul Jurco On Wed, Aug 10, 2022 at 5:43 PM Eneko Lacunza wrote: >

[ceph-users] Re: Workload that delete 100 M object daily via lifecycle

2023-07-20 Thread Paul JURCO
Enabling debug LC will make the LC run more often, but please mind that it might not respect the expiration time you set: by design, it treats the time set in the interval as one day. So if it runs more often, you will end up removing objects sooner than 365 days (as an example) if set to do so. Please test u
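A minimal sketch of the semantics described above, for a test cluster only — the interval value is an example:

```shell
# With rgw_lc_debug_interval set, LC treats one "day" as that many seconds,
# so a 365-day Expiration rule can fire after roughly 365 * 10 seconds here.
ceph config set client.rgw rgw_lc_debug_interval 10
radosgw-admin lc process   # run lifecycle processing immediately
radosgw-admin lc list      # show per-bucket LC status
```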

[ceph-users] cephadm, cannot use ECDSA key with quincy

2023-10-07 Thread Paul JURCO
Resent due to moderation when using the web interface. Hi ceph users, We have a few clusters with quincy 17.2.6 and we are preparing to migrate from ceph-deploy to cephadm for better management. We are using Ubuntu 20 with the latest updates (latest OpenSSH). While testing the migration to cephadm on a tes

[ceph-users] Re: cephadm, cannot use ECDSA key with quincy

2023-10-10 Thread Paul JURCO
On Sat, Oct 7, 2023 at 12:03 PM Paul JURCO wrote: > Resent due to moderation when using web interface. > > Hi ceph users, > We have a few clusters with quincy 17.2.6 and we are preparing to migrate > from ceph-deploy to cephadm for better management. > We are using Ubuntu20 w

[ceph-users] cephadm, cannot use ECDSA key with quincy

2023-10-10 Thread paul . jurco
Hi ceph users, We have a few clusters with quincy 17.2.6 and we are preparing to migrate from ceph-deploy to cephadm for better management. We are using Ubuntu 20 with the latest updates (latest OpenSSH). While testing the migration to cephadm on a test cluster with octopus (v16 latest) we had no issu
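One hedged workaround when cephadm rejects the existing SSH key is to hand it a dedicated keypair explicitly — the key type below is an example alternative to ECDSA, not the thread's confirmed fix:

```shell
# Generate a fresh keypair and register it with the cephadm mgr module.
ssh-keygen -t ed25519 -f cephadm_key -N ''
ceph cephadm set-priv-key -i cephadm_key
ceph cephadm set-pub-key -i cephadm_key.pub
ceph mgr fail   # restart the active mgr so it picks up the new key
```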

[ceph-users] Re: Migrated to cephadm, rgw logs to file even when rgw_ops_log_rados is true

2024-11-28 Thread Paul JURCO
lity > mon_cluster_log_to_syslog > mon_cluster_log_to_syslog_level > mon_cluster_log_to_syslog_facility > > Maybe one of them is what you're looking for. > > Zitat von Paul JURCO : > > > Hi! > > Currently I have limited the output of the rgw log to syslog from rsyslog &

[ceph-users] Re: Migrated to cephadm, rgw logs to file even when rgw_ops_log_rados is true

2024-11-24 Thread Paul JURCO
advanced rgw_ops_log_rados true Thank you! -- Paul Jurco On Fri, Nov 22, 2024 at 6:11 PM Paul JURCO wrote: > Hi, > we recently migrated to cephadm from ceph-deploy a 18.2.2 ceph cluster > (Ubuntu with docker). > RGWs are separate vms. > We

[ceph-users] Re: Separate gateway for bucket lifecycle

2024-11-24 Thread Paul JURCO
Hi, just remove them from the load balancer. Also, there are two configs you want to be true on the rgws that do LC and GC processing and false on the rgws that are exposed to clients: rgw_enable_lc_threads = true rgw_enable_gc_threads = true -- Paul On Mon, Nov 25, 2024 at 8:40 AM Szabo, Istvan (Agoda)
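A hedged sketch of the split described above — the two option names are from the post, but the client.rgw.* section names depend on how the rgw daemons are named in your cluster:

```shell
# Dedicated LC/GC gateway (taken out of the load balancer):
ceph config set client.rgw.lcgw rgw_enable_lc_threads true
ceph config set client.rgw.lcgw rgw_enable_gc_threads true
# Client-facing gateways:
ceph config set client.rgw.frontend rgw_enable_lc_threads false
ceph config set client.rgw.frontend rgw_enable_gc_threads false
```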

[ceph-users] Re: Radosgw log Custom Headers

2025-02-12 Thread Paul JURCO
Same here, it worked only after the rgw service was restarted with this config: rgw_log_http_headers http_x_forwarded_for -- Paul Jurco On Wed, Feb 12, 2025 at 2:29 PM Ansgar Jazdzewski < a.jazdzew...@googlemail.com> wrote: > Hi folks, > > I'd like to make sure that the RadosGW

[ceph-users] Migrated to cephadm, rgw logs to file even when rgw_ops_log_rados is true

2024-11-22 Thread Paul JURCO
Hi, we recently migrated a 18.2.2 ceph cluster from ceph-deploy to cephadm (Ubuntu with docker). RGWs are separate VMs. We noticed syslog grew a lot because rgw's access logs are sent to it. And because we also log ops, a huge ops log file on /var/log/ceph/cluster-id/ops-log-ceph-client.rgw.hostn
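A hedged sketch of the intended setup — the option names appear in this thread, the client.rgw section name is an example:

```shell
# Ask RGW to write the ops log to RADOS instead of a local file:
ceph config set client.rgw rgw_enable_ops_log true
ceph config set client.rgw rgw_ops_log_rados true
```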

[ceph-users] Re: Radosgw log Custom Headers

2025-02-13 Thread Paul JURCO
Config reference specifies it should be a list of comma-delimited headers, so remove spaces: Comma-delimited list of HTTP headers to include with ops log entries. > Header names are case insensitive, and use the full header name with words > separated by underscores. -- Paul Jurco
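Applied to the header from this thread, the comma-delimited form would look like the following — the second header name is an illustrative addition:

```shell
# No spaces around the commas; restart the rgw service afterwards.
ceph config set client.rgw rgw_log_http_headers \
    "http_x_forwarded_for,http_user_agent"
```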