Hi,
I had a similar problem on my large cluster.
What I found that helped me solve it:
Due to bad drives, and replacing drives too often because of scrub errors,
there were always some recovery operations going on.
I did set this:
osd_scrub_during_recovery true
and it basically solved my issue.
I
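For reference, a minimal sketch of setting that flag at runtime (applying it as a cluster-wide OSD default is an assumption; adjust the scope to taste):

ceph config set osd osd_scrub_during_recovery true   # allow scrubs to run while recovery is in progress
ceph config get osd osd_scrub_during_recovery        # confirm the value took effect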
Strange, I just copied that page on my phone, I’ll try again:
https://docs.ceph.com/en/pacific/cephadm/adoption/
I understand your hesitation, stability is key for a storage service.
But if you familiarize yourself with cephadm, it's a pretty neat tool. I'm
planning to upgrade and adopt our own produc
Thanks for the reply!
Also sorry for the double message, I forgot to hit reply to list and
instead replied directly.
Maybe I did something wrong, but that page 404'ed for me.
I'm migrating to dockerized Ceph for all daemons, and the only ones left
are the OSDs, which is blocking my upgrade to
Hi,
are you also planning to switch to cephadm? In that case you could
just adopt all the daemons [1]; I believe docker would also work (I
use it with podman).
[1] https://docs.ceph.com/en/pacific/cephadm/adoption.html
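As a rough sketch of what the adoption step looks like (daemon names below are placeholders; the linked page has the full procedure):

cephadm ls                                      # list the legacy daemons cephadm can see on this host
cephadm adopt --style legacy --name mon.host1   # convert a legacy daemon to a cephadm-managed container
cephadm adopt --style legacy --name osd.0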
Quoting Zachary Winnerman:
I have an existing install of Ceph and I'm trying to migrate to a
dockerized install. I set up the OSDs with LVM activation originally, but
I can't figure out how to get LVM based OSDs working inside docker. I
see in the setup scripts that there is some limited support for this,
but I can't quit
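Not sure if this is the missing piece, but as a sketch: ceph-volume can list and activate LVM-based OSDs, and inside a container it generally needs privileged access to the host's /dev and /var/lib/ceph. The OSD id and fsid below are placeholders:

ceph-volume lvm list                    # show LVM-based OSDs and their fsids
ceph-volume lvm activate 0 <osd-fsid>   # activate a single OSD by id and fsid
ceph-volume lvm activate --all          # or activate everything ceph-volume can find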
That's what I thought. We looked at the cluster storage nodes and found them
all to be less than 0.2 normalized maximum load.
Our 'normal' BW for client IO according to ceph -s is around 60MB/s-100MB/s. I
don't usually look at the IOPS, so I don't have that number right now. We have
seen GB/s nu
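For what it's worth, assuming "normalized maximum load" means load average divided by core count, a quick way to compute it on a node is:

awk -v cores=$(nproc) '{printf "%.2f\n", $1/cores}' /proc/loadavg   # 1-minute load average per core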
On Fri, Mar 11, 2022 at 12:02 PM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> OSDs are not full, and I don't really see any pool that is full either.
> This doesn't say which pool it is talking about.
Hi Istvan,
Yes, that's unfortunate. But you should be able to tell which pool
reached quota from
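(The message is cut off here; as a hedged sketch rather than the intended answer, per-pool quota usage can be checked with something like the following, the pool name being a placeholder:)

ceph df detail                   # per-pool usage, including quota columns
ceph osd pool get-quota mypool   # quota configured on a specific pool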
Thanks a lot!!
Your answer will help a lot of admins (myself included).
I will study your answer and implement your suggestions and let you know
All the best
Arnaud
On Fri, Mar 11, 2022 at 1:25 PM, Milind Changire wrote:
> Here are some answers to your questions:
>
> On Sun, Mar 6, 2022 at 3:
On Fri, Mar 11, 2022 at 8:04 AM Kai Stian Olstad wrote:
>
> Hi
>
> I'm trying to create a namespace in an RBD pool, but get 'operation not
> supported'.
> This is a 16.2.6 cephadm install on Ubuntu 20.04.3.
>
> The pool is erasure coded, and the commands I ran were the following.
>
> cephadm shel
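(Truncated above; a hedged aside: RBD metadata and namespaces normally live in a replicated pool, with an erasure-coded pool attached only as the data pool, so a sketch of that pattern, with placeholder pool and namespace names, looks like this:)

rbd namespace create --pool rbd_replicated --namespace ns1          # namespace in the replicated pool
rbd create --size 10G --data-pool ec_pool rbd_replicated/ns1/img1   # image data goes to the EC pool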
Here are some answers to your questions:
On Sun, Mar 6, 2022 at 3:57 AM Arnaud M wrote:
> Hello to everyone :)
>
> Just some questions about filesystem scrubbing
>
> In this documentation it is said that scrub will help admins check the
> consistency of the filesystem:
>
> https://docs.ceph.com/en/latest/ce
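(The link is cut off; as a sketch of the scrub commands that area of the docs covers, with the filesystem name as a placeholder:)

ceph tell mds.myfs:0 scrub start / recursive   # recursively scrub the whole tree from rank 0
ceph tell mds.myfs:0 scrub status              # check scrub progress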
Hello.
So there is no workaround...? I guess that's on me for upgrading to the
latest version instead of staying on a stable one. :)
Just as a warning for the future, if anyone is planning on upgrading a
cluster from Nautilus to Pacific (16.2.7), beware that your scrubs may stop
working.
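(A quick, hedged way to spot the symptom is the health output, which flags PGs that have not been scrubbed in time:)

ceph health detail | grep -i scrub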
Best re
On 10.03.2022 14:48, Jimmy Spets wrote:
I have a Ceph Pacific cluster managed by cephadm.
The nodes have six HDDs and one NVMe that is shared between the six
HDDs.
The OSD spec file looks like this:
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
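(The spec is cut off above; purely as an assumption of what such a layout usually looks like, with the HDDs as data devices and the shared NVMe for DB, a sketch could be applied like this:)

cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
EOF
ceph orch apply -i osd_spec.yml   # hand the spec to the orchestrator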
Hi Erik,
On 3/10/2022 6:19 PM, Anderson, Erik wrote:
Hi Everyone,
I am running a containerized pacific cluster 15.2.15 with 80 spinning disks and
20 SSDs. Currently the SSDs are being used as a cache tier and hold the metadata
pool for cephfs. I think we could make better use of the SSDs by mo
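(Truncated above; a hedged sketch of the usual way to pin a pool to SSDs via a device-class CRUSH rule, with rule and pool names as placeholders:)

ceph osd crush rule create-replicated ssd-only default host ssd   # replicated rule restricted to the ssd class
ceph osd pool set cephfs_metadata crush_rule ssd-only             # move the metadata pool onto that rule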