Hi,
on a Ceph cluster deployed with cephadm, the orchestrator automatically
installs a config file /etc/logrotate.d/cephadm to rotate its log file.
This creates a conflict when the ceph-common package is also installed:
Aug 25 00:00:03 cephtest21 logrotate[203869]: error: cephadm:2 duplicate log
FWIW, cephadm only writes that file out if it doesn't already exist. You
might be able to just remove anything actually functional from it and just
leave a sort of dummy file with only a comment there as a workaround. Also,
was this an upgraded cluster? I tried quickly bootstrapping a
cephadm clus
On 25.08.22 at 13:41, Adam King wrote:
FWIW, cephadm only writes that file out if it doesn't already exist.
You might be able to just remove anything actually functional from it
and just leave a sort of dummy file with only a comment there as a
workaround.
I am trying that.
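For the record, the stub I'm putting in place looks roughly like this (a
comment-only placeholder; the exact wording is of course arbitrary):

    # /etc/logrotate.d/cephadm
    # Intentionally left non-functional: rotation of the cephadm log is
    # handled by the ceph-common logrotate config. The file is kept in
    # place only so cephadm does not recreate its own version.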
>
Also, was th
You were correct about the difference between the distros. I was able to
reproduce it fine on Ubuntu 20.04 (I was using CentOS 8 Stream before). I
opened a tracker as well: https://tracker.ceph.com/issues/57293
On Thu, Aug 25, 2022 at 7:44 AM Robert Sander
wrote:
> On 25.08.22 at 13:41, Adam King wrote
Hi,
What exactly do you mean by RGW compression? Another storage class?
k
Sent from my iPhone
> On 21 Aug 2022, at 22:14, Danny Webb wrote:
>
> Hi,
>
> What is the difference between using rgw compression vs enabling compression
> on a pool? Is there any reason why you'd use one over the othe
Hi Konstantin,
https://docs.ceph.com/en/latest/radosgw/compression/
vs say:
https://www.redhat.com/en/blog/red-hat-ceph-storage-33-bluestore-compression-performance
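Roughly, the two approaches in those links boil down to something like the
following (a sketch only; the zone, placement and pool names here are just
the defaults and may differ on a real cluster):

    # RGW-level compression, per placement target (first link):
    radosgw-admin zone placement modify --rgw-zone default \
        --placement-id default-placement --storage-class STANDARD \
        --compression zlib

    # BlueStore/pool-level compression on the RGW data pool (second link):
    ceph osd pool set default.rgw.buckets.data compression_mode aggressive
    ceph osd pool set default.rgw.buckets.data compression_algorithm snappy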
Cheers,
Danny
From: Konstantin Shalygin
Sent: 25 August 2022 13:23
To: Danny Webb
Cc: ceph-user
My cluster (ceph pacific) is complaining about one of the OSD being
backfillfull:
[WRN] OSD_BACKFILLFULL: 1 backfillfull osd(s)
osd.31 is backfill full
backfillfull ratios:
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
ceph osd df shows:
31   hdd   5.55899   1.0   5.
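For reference, the backfillfull threshold can be raised temporarily while
recovery catches up, e.g. something like:

    ceph osd set-backfillfull-ratio 0.92

but the OSD is only at 68% capacity, so the warning itself is the real
question.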
On Fri, Aug 19, 2022 at 7:17 AM Patrick Donnelly wrote:
>
> On Fri, Aug 19, 2022 at 5:02 AM Jesper Lykkegaard Karlsen
> wrote:
> >
> > Hi,
> >
> > I have recently been scanning the files in a PG with "cephfs-data-scan
> > pg_files ...".
>
> Why?
>
> > Although, after a long time the scan was sti
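(For context, the documented form of that invocation is
"cephfs-data-scan pg_files <path> <pg id> [<pg id>...]", e.g. with
placeholder values:

    cephfs-data-scan pg_files /mnt/cephfs/some/path 2.7

which reports the files that have data objects in the given PG.)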
With RGW, you can set the pool compression mode to passive; RGW will then
set the compression hint when your application makes a PUT.
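A minimal sketch of that setup, assuming the default zone's data pool name:

    ceph osd pool set default.rgw.buckets.data compression_mode passive
    ceph osd pool set default.rgw.buckets.data compression_algorithm snappy

Compression savings then show up in the USED COMPR / UNDER COMPR columns of
"ceph df detail".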
k
Sent from my iPhone
> On 25 Aug 2022, at 17:18, Danny Webb wrote:
> Hi Konstantin,
>
> https://docs.ceph.com/en/latest/radosgw/compression/
>
> vs say:
>
> http
Hi,
I’ve seen this many times in older clusters, mostly Nautilus (can’t
say much about Octopus or later). Apparently the root cause hasn’t
been fixed yet, but it should resolve after the recovery has finished.
Quoting Wyll Ingersoll:
My cluster (ceph pacific) is complaining about one of
That problem seems to have cleared up. We are in the middle of a massive
rebalancing effort for a 700 OSD, 10 PB cluster that is wildly out of whack
(because it got too full), and we occasionally see strange numbers reported.
From: Eugen Block
Sent: Thurs
This was seen today in Pacific 16.2.9.
From: Stefan Kooman
Sent: Thursday, August 25, 2022 3:17 PM
To: Eugen Block ; Wyll Ingersoll
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: backfillfull osd - but it is only at 68% capacity
On 8/25/22 20:56, Eugen
Just last week on 14.2.22; the customer is currently in the process of
rebuilding the OSD nodes to migrate to LVM.
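The rebuild itself is the usual ceph-volume route per drained OSD, sketched
here with a placeholder device:

    ceph-volume lvm create --bluestore --data /dev/sdX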
Quoting Stefan Kooman:
On 8/25/22 20:56, Eugen Block wrote:
Hi,
I’ve seen this many times in older clusters, mostly Nautilus (can’t
say much about Octopus or later). Apparentl
Could anyone answer this question? There are many questions, but it would of
course be really helpful to know the answer to even just one of them.
The summary of my questions:
- a. About the QA process
- a.1 Does the number of test cases differ between the QA for merging a PR
and the QA for a release?
- a.2 If a.1