Hi!
I need some help understanding LC processing.
On the latest versions of Octopus (tested with 15.2.13 and 15.2.8) we
have at least one bucket whose files are not being removed when they
expire.
The size of the bucket reported by radosgw-admin differs from the one
obtained with s3cmd.
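For reference, the two sizes can be compared like this (a sketch; the
bucket name is hypothetical):

  # total size and object count as RGW sees them
  radosgw-admin bucket stats --bucket=logs-bucket
  # size as an S3 client sees it
  s3cmd du s3://logs-bucket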
> ...create a delete-marker for every object and move the
> object version from current to non-current, thereby reflecting the same
> number of objects in bucket stats output ].
>
> Vidushi
>
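To see the effect Vidushi describes, one can list object versions and
delete markers (a sketch; the bucket name and endpoint are hypothetical):

  aws s3api list-object-versions --endpoint-url https://rgw.example.com \
      --bucket logs-bucket --max-items 5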
> On Mon, Jul 26, 2021 at 4:55 PM Paul JURCO wrote:
>
>> Hi!
>> I need some help
We have
set rgw_lc_debug_interval to something low and executed an LC process, but
it ignored this bucket completely, as I can see in the logs.
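For reference, the commands involved look like this (a sketch; the
interval value is only an example):

  ceph config set client.rgw rgw_lc_debug_interval 10
  radosgw-admin lc list
  radosgw-admin lc process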
Any suggestion is welcome, as I bet we have other buckets in the same
situation.
Thank you!
Paul
On Mon, Jul 26, 2021 at 2:59 PM Paul JURCO wrote:
> Hi Vidushi,
> <LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Rule>
>     <ID>Incomplete Multipart Uploads</ID>
>     <Status>Enabled</Status>
>     <AbortIncompleteMultipartUpload>
>       <DaysAfterInitiation>1</DaysAfterInitiation>
>     </AbortIncompleteMultipartUpload>
>   </Rule>
> </LifecycleConfiguration>
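Applying and verifying the policy with s3cmd would look like this (a
sketch; the file and bucket names are hypothetical):

  s3cmd setlifecycle lifecycle.xml s3://logs-bucket
  s3cmd getlifecycle s3://logs-bucket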
Hi!
I have upgraded to 15.2.14 in order to be able to delete an old bucket
stuck at:
2021-09-08T08:47:15.216+0300 7f96ddfe7080  0 abort_bucket_multiparts WARNING : aborted 34333 incomplete multipart uploads
2021-09-08T08:47:17.012+0300 7f96ddfe7080  0 abort_bucket_multiparts WARNING : aborted 343
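For context, the kind of command that hits this code path (a sketch; the
bucket name is hypothetical):

  radosgw-admin bucket rm --bucket=old-bucket --purge-objects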
How do I properly ask for an investigation of this bug? It looks like it
is not fixed.
--
Paul
On Wed, Sep 8, 2021 at 9:07 AM Paul JURCO wrote:
> Hi!
> I have upgraded to 15.2.14 in order to be able to delete an old bucket
> stuck at:
>
>
> 2021-09-08T08:47:15.216+03
...16.2.8 and two days after to
16.2.9 on the cluster with the crashes.
Six segfaults are on 2 TB disks, eight are on 1 TB disks. The 2 TB disks
are newer (under two years old).
Could this be related to hardware?
Thank you!
--
Paul Jurco
Hi!
Everything was restarted in the proper order as required by the upgrade
plan, and all software was upgraded on all nodes. We are on Ubuntu 18
(all nodes).
The "ceph versions" output shows everything is on "16.2.9".
Thank you!
--
Paul Jurco
On Wed, Aug 10, 2022 at 5:43 PM Eneko Lacunza wrote:
Enabling debug LC will make the LC run more often, but please mind that it
might not respect the expiration time you set. By design, it treats the
time set in the interval as one day.
So, if it runs more often, you will end up removing objects sooner than
after 365 days (as an example) if a rule is set to do so.
Please test u
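To make the day-compression concrete (a sketch; the numbers are only
illustrative):

  # with rgw_lc_debug_interval = 10, one LC "day" lasts 10 seconds,
  # so a 365-day expiration can fire after ~3650 seconds (about an hour)
  ceph config set client.rgw rgw_lc_debug_interval 10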
Resent due to moderation when using web interface.
Hi ceph users,
We have a few clusters with quincy 17.2.6 and we are preparing to migrate
from ceph-deploy to cephadm for better management.
We are using Ubuntu 20 with the latest updates (latest OpenSSH).
While testing the migration to cephadm on a tes
On Sat, Oct 7, 2023 at 12:03 PM Paul JURCO wrote:
> Resent due to moderation when using web interface.
>
> Hi ceph users,
> We have a few clusters with quincy 17.2.6 and we are preparing to migrate
> from ceph-deploy to cephadm for better management.
> We are using Ubuntu20 w
Hi ceph users,
We have a few clusters with quincy 17.2.6 and we are preparing to migrate from
ceph-deploy to cephadm for better management.
We are using Ubuntu 20 with the latest updates (latest OpenSSH).
While testing the migration to cephadm on a test cluster with octopus (v16
latest) we had no issu
lity
> mon_cluster_log_to_syslog
> mon_cluster_log_to_syslog_level
> mon_cluster_log_to_syslog_facility
>
> Maybe one of them is what you're looking for.
>
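For example, one of these options can be inspected and flipped like this
(a sketch):

  ceph config get mon mon_cluster_log_to_syslog
  ceph config set mon mon_cluster_log_to_syslog false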
> Quote from Paul JURCO:
>
> > Hi!
> > Currently I have limited the output of the RGW log to syslog via rsyslog
rgw_ops_log_rados = true (advanced)
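Setting it via the CLI (a sketch):

  # write the RGW ops log to RADOS instead of a local file
  ceph config set client.rgw rgw_ops_log_rados true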
Thank you!
--
Paul Jurco
On Fri, Nov 22, 2024 at 6:11 PM Paul JURCO wrote:
> Hi,
> we recently migrated an 18.2.2 Ceph cluster from ceph-deploy to cephadm
> (Ubuntu with Docker).
> RGWs are separate vms.
> We
Hi, just remove them from the balancer.
Also, there are two configs you want set to true on the RGWs doing LC and
GC processing, and to false on the RGWs that are exposed to clients (see
the sketch below):
rgw_enable_lc_threads = true
rgw_enable_gc_threads = true
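A sketch of how the split might be applied (the daemon names are
hypothetical):

  ceph config set client.rgw.lcgc-1 rgw_enable_lc_threads true
  ceph config set client.rgw.lcgc-1 rgw_enable_gc_threads true
  ceph config set client.rgw.public-1 rgw_enable_lc_threads false
  ceph config set client.rgw.public-1 rgw_enable_gc_threads false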
--
Paul
On Mon, Nov 25, 2024 at 8:40 AM Szabo, Istvan (Agoda)
Same here; it worked only after the RGW service was restarted with this
config:
rgw_log_http_headers http_x_forwarded_for
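The sequence, roughly (a sketch; the service name is hypothetical):

  ceph config set client.rgw rgw_log_http_headers http_x_forwarded_for
  ceph orch restart rgw.default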
--
Paul Jurco
On Wed, Feb 12, 2025 at 2:29 PM Ansgar Jazdzewski <
a.jazdzew...@googlemail.com> wrote:
> Hi folks,
>
> I'd like to make sure that the RadosGW
Hi,
we recently migrated an 18.2.2 Ceph cluster from ceph-deploy to cephadm
(Ubuntu with Docker).
RGWs are separate VMs.
We noticed syslog grew a lot due to RGW's access logs being sent to it.
And because we log ops, there is a huge ops log file at
/var/log/ceph/cluster-id/ops-log-ceph-client.rgw.hostn
The config reference specifies it should be a comma-delimited list of
headers, so remove the spaces:
Comma-delimited list of HTTP headers to include with ops log entries.
> Header names are case insensitive, and use the full header name with words
> separated by underscores.
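For example (a sketch; the second header is only an illustration):

  # wrong: space after the comma
  rgw_log_http_headers = http_x_forwarded_for, http_user_agent
  # right:
  rgw_log_http_headers = http_x_forwarded_for,http_user_agent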
--
Paul Jurco