Hi,
there have been multiple reports on this list that the mgr daemon
sometimes seems to stop working without any indication of a root cause. I
have also experienced this quite a few times in my test clusters; failing
the mgr seemed to help most of the time:
ceph mgr fail
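For reference, the minimal sequence I use (a sketch; daemon names and
output will differ per cluster):

  # check which mgr is active and which standbys exist
  ceph mgr stat
  # fail the active mgr so a standby takes over
  ceph mgr fail
  # confirm a standby has become active
  ceph -s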
As for the OSDs do yo
Hello all.
I am new to Ceph and want to contribute to the community. I have some
knowledge of Ansible, Python, Linux (CentOS), and shell scripting, and I
am learning Docker and Kubernetes.
There are so many repositories and issues but I don't know where to start.
Are there any issues for new contributors a
Bonjour,
During benchmarking[1], I noticed that intensive RBD client I/O can hurt
performance and latency. That is, if clients read and write as much as they
can, the total throughput may be negatively impacted and the latency of
individual reads and writes will increase. If the clients throt
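If throttling on the client side is the goal, librbd's QoS options are one
way to impose it; a hedged sketch (pool/image names and limit values below
are placeholders, option names per the rbd config documentation):

  # cap a single image at 1000 IOPS and ~100 MB/s (example values)
  rbd config image set mypool/myimage rbd_qos_iops_limit 1000
  rbd config image set mypool/myimage rbd_qos_bps_limit 104857600
  # or set a pool-wide default for all images in the pool
  rbd config pool set mypool rbd_qos_iops_limit 1000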
Hi all, I needed to reduce the number of active MDS daemons from 4 to 1.
Unfortunately, the last MDS to stop is stuck in the stopping state. The Ceph
version is mimic 13.2.10. Each MDS has 3 blocked ops that seem to be related
to deleted snapshots; more info below. I failed the MDS in stopping state al
I see Xiubo started discussing this on
https://tracker.ceph.com/issues/53542 as well.
So the large writes are going to the journal file, and sometimes it's
a single write of a full segment size, which is what I was curious
about.
At this point the next step is seeing what is actually taking up th
I am interested; I have been postponing upgrading from CentOS 7 just because
of this.
>
> As we're getting closer to CentOS 8 EOL, I'm sure plenty of Ceph users are
> looking to migrate from CentOS 8 to CentOS Stream 8 or one of the new RHEL
> derivatives, e.g. Rocky and Alma.
>
> The questio
This looks awkward — just from the ops, it seems mds.1 is trying to
move some stray items (presumably snapshots of since-deleted files,
from what you said?) into mds.0's stray directory, and then mds.0 tries
to get auth pins from mds.1, but that fails for some reason which isn't
apparent from the dum
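To see both sides of that exchange, a hedged sketch of the usual op dumps
(daemon names are placeholders; run each against the respective daemon's
admin socket):

  # ops currently in flight on each rank
  ceph daemon mds.<rank0-name> dump_ops_in_flight
  # ops blocked past the complaint threshold
  ceph daemon mds.<rank1-name> dump_blocked_ops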
Hmm. Glad it's working now, at least.
On Mon, Dec 13, 2021 at 9:10 AM Frank Schilder wrote:
>
> Dear Gregory,
>
> thanks for your fast response. The situation started worsening shortly after
> I sent my e-mail and I had to take action. More operations got stuck in the
> active MDS, leading to a
Hi - I have a 3-host cluster with 3 HDDs and 1 SSD per host.
The hosts are on RHEL 8.5, using Podman containers deployed via cephadm, with
one OSD per HDD and SSD.
In my current crush map, I have a rule for the SSD and the HDD, and put the
CephFS metadata pool and rbd on the SSD pool.
From thin
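For reference, device-class based rules along those lines might look like
this (a sketch; rule and pool names are placeholders, not taken from the
poster's actual crush map):

  # one replicated rule per device class, host as the failure domain
  ceph osd crush rule create-replicated ssd-rule default host ssd
  ceph osd crush rule create-replicated hdd-rule default host hdd
  # pin the metadata and rbd pools to the SSD-backed rule
  ceph osd pool set cephfs_metadata crush_rule ssd-rule
  ceph osd pool set rbd crush_rule ssd-rule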
Hello ceph users,
I am using Ubuntu 20.04 and I am trying to install the Ceph Pacific release
with cephadm.
Are there any instructions available about using "cephadm bootstrap" and other
related commands in an air-gapped environment (that is: on the local network,
without internet access)?
In pa
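In case it helps, a hedged sketch of one common approach (registry host,
image tag, and monitor IP are placeholders; assumes a registry mirror
reachable from the cluster network):

  # on a machine with internet access, mirror the image to a local registry
  podman pull quay.io/ceph/ceph:v16.2.7
  podman tag quay.io/ceph/ceph:v16.2.7 registry.local:5000/ceph/ceph:v16.2.7
  podman push registry.local:5000/ceph/ceph:v16.2.7
  # bootstrap the cluster against the mirrored image
  cephadm --image registry.local:5000/ceph/ceph:v16.2.7 bootstrap --mon-ip 10.0.0.1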
Has the hub.docker.com repository been discontinued for containerized
ceph updates moving forward?
I see that 15.2.15 has been released to quay.io but dockerhub's latest
15.2 release is 15.2.13.
According to https://docs.ceph.com/en/latest/install/containers/ both
should contain the latest rel
> -----Original Message-----
> From: Gary Molenkamp
> Sent: 13 December 2021 19:54
> To: Ceph Users
> Subject: [ceph-users] Ceph container image repos
>
> Has the hub.docker.com repository been discontinued for containerized
I think so. I saw some discussion about using their own Red Hat soluti
"As of August 2021, new container images are pushed to quay.io
registry only. Docker hub won't receive new content for that specific
image but current images remain available.As of August 2021, new
container images are pushed to quay.io registry only. Docker hub won't
receive new content for that s
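For anyone on cephadm who needs to move to the quay.io images, a hedged
one-liner sketch (the version tag is an example):

  # pull subsequent upgrades from quay.io instead of docker hub
  ceph orch upgrade start --image quay.io/ceph/ceph:v15.2.15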
Dear Ceph users,
I'm writing to inform the community about a new performance channel
that will be added to the telemetry module in the upcoming Quincy
release. Like all other channels, this channel is also on an opt-in
basis, but we’d like to know if there are any concerns regarding this
new colle
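For the curious, the opt-in flow is expected to look roughly like this
(command names per the Quincy-era telemetry docs; treat as a sketch until
the release lands):

  # telemetry as a whole is opt-in
  ceph telemetry on
  # the new perf channel must be enabled explicitly on top of that
  ceph telemetry enable channel perf
  # review exactly what would be reported before anything is sent
  ceph telemetry show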
On Mon, Dec 13, 2021 at 7:02 AM Benoit Knecht wrote:
>
> Hi,
>
> As we're getting closer to CentOS 8 EOL, I'm sure plenty of Ceph users are
> looking to migrate from CentOS 8 to CentOS Stream 8 or one of the new RHEL
> derivatives, e.g. Rocky and Alma.
>
> The question of upstream support has alre