Quick update: we decided to switch to wpq to see if that would confirm
our suspicion, and it did. After a few hours all PGs in the snaptrim
queue had been processed. We haven't looked into the average object
sizes yet, maybe we'll try that approach next week or so. If you have
any other ide
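For reference, switching the OSD op queue scheduler is a single config option plus an OSD restart. A minimal sketch, assuming the standard osd_op_queue setting and the cephadm orchestrator; adjust to your own restart procedure:
ceph config set osd osd_op_queue wpq        # switch from mclock_scheduler to wpq
ceph config get osd osd_op_queue            # verify the value
# The change only takes effect after each OSD is restarted, e.g. per daemon:
ceph orch daemon restart osd.0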
Hi,
What would be nice is if the drain command took care of only the OSDs
by default and drained all services only when called with a
--remove-all-services flag or something similar.
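For context, a sketch of how the current command behaves versus the proposal; note that --remove-all-services is only the suggestion above, not an existing flag, and host1 is a placeholder name:
ceph orch host drain host1      # today: removes all daemons scheduled on host1
# proposed (hypothetical, not implemented): OSD-only by default, everything with
#   ceph orch host drain host1 --remove-all-services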
but that would mean that you wouldn't be able to drain only specific
services, and OSDs would be drained either
Hi Eugen,
I know, but removing other services is generally done by removing labels on
hosts, isn't it?
Either way, another concern would be how to deal with _no_schedule and
_no_conf_keyring labels when not draining all services on the host. Would it
require per service type _no_schedule
The label removal approach is great, but it still doesn't allow you to
drain only OSDs and no other daemons. I didn't think about the other
labels, that's a good point. Let's see what the devs have to say. :-)
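For anyone following along, the label approach only helps when daemon placement is bound to host labels. A rough sketch, assuming a hypothetical mds service whose spec uses "placement: label: mds" (host and label names are illustrative):
ceph orch host label rm host1 mds      # mds daemons get rescheduled away from host1
ceph orch host ls                      # check what labels remain, incl. _no_schedule
# _no_schedule and _no_conf_keyring apply to the whole host, which is exactly the
# concern raised above for partial drains.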
Quoting Frédéric Nass:
Hi Eugen,
I know, but removing other services is generall
I hope someone can help us with an MDS caching problem.
Ceph version 18.2.4 with cephadm container deployment.
Question 1:
For me it's not clear how much cache/memory you should allocate for the MDS. Is
this based on the number of open files, caps or something else?
Question 2/Problem:
At the mo
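Not an answer to the sizing question itself, but for reference the main knob is mds_cache_memory_limit: the MDS trims inodes/caps to try to stay under it, and the daemon's actual RSS is typically somewhat higher than the limit. A minimal sketch, with 8 GiB as an example value only:
ceph config get mds mds_cache_memory_limit              # current limit in bytes
ceph config set mds mds_cache_memory_limit 8589934592   # example: 8 GiB
ceph tell mds.<name> session ls                         # per-client caps, to judge pressure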
Steps 2 to 4 are exactly my idea.
In step 1, all I will check is that all PGs are active (not inactive or unknown). Clean
is not necessary; I can boot VMs with degraded PGs as long as they are active.
On Thu, Aug 29, 2024 at 8:03, Bogdan Adrian Velica ()
wrote:
> Hi,
>
> It's a hacky idea but create a script
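In that spirit, a rough sketch of the step-1 check (assumes jq is installed and that the pgmap field names match your release; it only tests that every PG is active, not clean):
# Wait until every PG state reported by "ceph status" contains "active",
# i.e. nothing is left inactive, unknown, peering, etc.
until ceph status --format json \
      | jq -e '[.pgmap.pgs_by_state[].state_name | contains("active")] | all' >/dev/null
do
    sleep 30
done
# ...then boot the VMs.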
Details of this release are summarized here:
https://tracker.ceph.com/issues/67779#note-1
Release Notes - TBD
Gibba upgrade - TBD
LRC upgrade - TBD
It was decided and agreed upon that there would be limited testing for
this release, given it is based on 19.1.1 rather than a full rebase.
Seeking
Dear team,
I am configuring a Ceph cluster using ceph-ansible on Ubuntu 20.04. My
previous production cluster was configured using the same
configuration and it is working perfectly. Now I am trying to build the
new cluster using the same configuration, but I am facing the
following error
I believe that the original Ansible installation process is deprecated.
It was pretty messy, anyway, since it had to do a lot of grunt work.
Likewise the ceph-install program, which is in the Octopus docs, but
wasn't actually available in the release of Octopus I installed on my
servers.
The Ansib
"Ceph is ready" covers a lot of territory. It's more like, "How can I
delay until Ceph is available for the particular service I need?"
I've been taking a systemd-based approach. Since I don't actually care
about Ceph in the abstract, but I'm actually looking for the Ceph or
Ceph NFS shares, I create
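One way such a dependency can be expressed (a sketch with placeholder unit and mount names, not the poster's actual setup) is a systemd drop-in that keeps a service from starting before the Ceph-backed mount point is available:
# run as root; my-app.service and /mnt/cephfs are hypothetical names
mkdir -p /etc/systemd/system/my-app.service.d
cat > /etc/systemd/system/my-app.service.d/wait-for-cephfs.conf <<'EOF'
[Unit]
RequiresMountsFor=/mnt/cephfs
EOF
systemctl daemon-reload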
Hi Michel,
This is likely related to your ansible installation or system locale
configuration. Try to gather more info. E.g.:
which ansible
ansible --version
locale
locale -a
Also check that you run matching versions of ansible, ceph-ansible and
ceph, as listed in the Releases section at
https:
On 8/30/24 12:38, Tim Holloway wrote:
I believe that the original Ansible installation process is deprecated.
This would be bad news, as I repeatedly hear from admins running large
storage deployments that they prefer to stay away from containers.
Milan
--
Milan Kupcevic
Research Computing
On Fri, Aug 30, 2024 at 9:22 PM Sake Ceph wrote:
>
> I hope someone can help us with an MDS caching problem.
>
> Ceph version 18.2.4 with cephadm container deployment.
>
> Question 1:
> For me it's not clear how much cache/memory you should allocate for the MDS.
> Is this based on the number of op
@Anthony: it's a small virtualized cluster and indeed swap shouldn't be used,
but this doesn't change the problem.
@Alexander: the problem is on the active nodes; the standby-replay daemons don't have
issues anymore.
Last night's backup run increased the memory usage to 86% when rsync was
running fo
Hello Sake,
The combination of two active MDSs and RHEL8 does ring a bell, and I
have seen this with Quincy, too. However, what's relevant is the
kernel version on the clients. If they run the default 4.18.x kernel
from RHEL8, please either upgrade to the mainline kernel or decrease
max_mds to 1.
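For completeness, dropping to a single active MDS is one command (the filesystem name is a placeholder); once max_mds is lowered, the extra active rank is stopped automatically on recent releases:
ceph fs set <fs_name> max_mds 1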
I was talking about the hosts where the MDS containers are running. The
clients are all RHEL 9.
Kind regards,
Sake
> On 31-08-2024 08:34 CEST, Alexander Patrakov wrote:
>
>
> Hello Sake,
>
> The combination of two active MDSs and RHEL8 does ring a bell, and I
> have seen this with Qui