Re-adding the dev list and adding the user list because others might
benefit from this information.
Thanks,
Neha
On Tue, Jan 10, 2023 at 10:21 AM Wyll Ingersoll <
wyllys.ingers...@keepertech.com> wrote:
> Also, it was only my ceph-users account that was lost, dev account was
> still active.
> --
Eugen,
I never insinuated that my circumstances are the result of buggy software,
and I have acknowledged our operational missteps. Let's please leave it at
that. Ceph remains a technology I like and will continue to use. Our
operational understanding has evolved greatly as a result of the current
circumstances.
What else is going on? (ceph -s). If there is a lot of data being shuffled
around, it may just be because it's waiting for some other actions to complete
first.
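A minimal sketch of the kind of checks meant here, assuming a standard ceph
CLI (exact output fields vary by release):

  ceph -s                                   # overall health, plus recovery/backfill activity
  ceph progress                             # progress of long-running operations (progress module)
  ceph pg dump pgs_brief | grep -v clean    # PGs that are not yet active+clean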
Thanks,
Kevin
From: Torkil Svensgaard
Sent: Tuesday, January 10, 2023 2:36 AM
To: ceph-users@
When adding a new OSD to a ceph orchestrated system (16.2.9) on a storage node
that has a specification profile that dictates which devices to use as the
db_devices (SSDs), the newly added OSDs seem to be ignoring the db_devices
(there are several available) and putting the data and db/wal on
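A minimal sketch of an OSD service spec with separate db_devices, assuming a
cephadm-managed cluster; the service_id and the rotational filters are
placeholders:

  # osd-spec.yaml
  service_type: osd
  service_id: osd_with_ssd_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1    # HDDs take the data
    db_devices:
      rotational: 0    # SSDs take block.db/WAL

  ceph orch apply -i osd-spec.yaml --dry-run    # preview which devices the spec would pick up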
Running ceph-pacific 16.2.9 using ceph orchestrator.
We made a mistake adding a disk to the cluster and immediately issued a command
to remove it using "ceph orch osd rm ### --replace --force".
This OSD had no data on it at the time and was removed after just a few
minutes. "ceph orch osd rm s
Could this be a temporal coincidence? E.g. each host got a different model
drive in slot 19 via an incremental expansion.
> On Jan 10, 2023, at 05:27, Frank Schilder wrote:
>
> Following up on my previous post, we have identical OSD hosts. The very
> strange observation now is, that all outl
Everyone,
I have been able to move the text using the "scroll-top-margin" parameter
in custom.css. This means that the top bar no longer gets in the way (which
is likely why John was unable to replicate the issue).
Here is the pull request that addresses this issue:
https://github.com/ceph/ceph/p
Slot 19 is inside the chassis? Do you check the chassis temperature? I
sometimes see a higher failure rate for HDDs inside the chassis than for
those at the front of the chassis. In our case it was related to the
temperature difference.
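A minimal sketch of comparing drive temperatures across slots, assuming
smartmontools is installed; the device names are placeholders:

  for dev in /dev/sd{a..z}; do
      [ -b "$dev" ] || continue
      temp=$(smartctl -A "$dev" | awk '/Temperature_Celsius/ {print $10}')
      echo "$dev: ${temp:-n/a} C"
  done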
On Tue, Jan 10, 2023 at 1:28 PM Frank Schilder wrote:
>
> Following up on my previous post, w
Hi,
Actually, the test case was even simpler than that. A misaligned
discard (discard_granularity_bytes=4096, offset=0, length=4096+512)
made the journal stop replaying entries. This is now well covered in
tests and example e2e-tests.
The workaround is quite easy: set `rbd_discard_granularity
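The option name is cut off above; as a hedged sketch, assuming the intended
knob is rbd_discard_granularity_bytes, it can be set cluster-wide for clients
or per image (the value is deliberately left as a placeholder):

  ceph config set client rbd_discard_granularity_bytes <bytes>                 # assumption: option meant above
  rbd config image set <pool>/<image> rbd_discard_granularity_bytes <bytes>    # per-image override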
Hi
Ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy
(stable)
Looking at this:
"
Low space hindering backfill (add storage if this doesn't resolve
itself): 2 pgs backfill_toofull
"
"
[WRN] PG_BACKFILL_FULL: Low space hindering backfill (add storage if
this doesn't reso
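A minimal sketch of the usual triage for PG_BACKFILL_FULL; the ratio shown is
an example, not a recommendation:

  ceph osd df tree                        # find OSDs closest to their backfillfull ratio
  ceph pg ls backfill_toofull             # list the affected PGs
  ceph osd set-backfillfull-ratio 0.91    # temporarily raise the ratio, if there is headroom
  ceph osd reweight-by-utilization        # or move data off the fullest OSDs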
Following up on my previous post, we have identical OSD hosts. The very strange
observation now is that all outlier OSDs are in exactly the same disk slot on
these hosts. We have 5 problematic OSDs and they are all in slot 19 on 5
different hosts. This is an extremely strange and unlikely co-in
Hi Dongdong and Igor,
thanks for pointing to this issue. I guess if it's a memory leak issue (well,
cache pool trim issue), checking for some indicator and an OSD restart should
be a work-around? Dongdong promised a work-around but talks only about a patch
(fix).
Looking at the tracker items, m
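A minimal sketch of what such an indicator check plus restart could look like
under cephadm; osd.42 and the idea of a memory threshold are placeholders:

  ceph tell osd.42 dump_mempools          # inspect per-pool memory usage (e.g. buffer_anon growth)
  ceph orch ps | grep '^osd'              # MEM USE column per OSD daemon
  ceph orch daemon restart osd.42         # restart a single OSD as a stop-gap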
Hi,
I am currently trying to figure out how to resolve the
"large objects found in pool 'rgw.usage'"
error.
In the past I trimmed the usage log, but now I am at the point that I need
to trim it down to two weeks.
I checked the number of omap keys and the distribution is quite off:
# for OBJECT in
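A minimal sketch of the kind of per-object key count and the trim itself; the
pool name is taken from the warning above and the date is only an example:

  for OBJECT in $(rados -p rgw.usage ls); do
      echo "$OBJECT: $(rados -p rgw.usage listomapkeys "$OBJECT" | wc -l)"
  done
  radosgw-admin usage trim --end-date=2022-12-27    # drop usage entries older than roughly two weeks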
Hi,
Backups will be challenging. I honestly didn't anticipate that this kind of
failure with Ceph was possible; we've been using it for several years now
and were encouraged by the orchestrator and performance improvements in the
17 code branch.
that's exactly what a backup is for, to be prepared f