AIT Risø Campus
Bygning 109, rum S14
From: Szabo, Istvan (Agoda)
Sent: Monday, October 28, 2024 4:41 AM
To: Frank Schilder
Subject: Re: [ceph-users] Re: Procedure for temporary evacuation and replacement
Hi Frank,
Finally, what was the best way to do this evacuation?
Hi Frank,
> Does this setting affect PG removal only or is it affecting other operations
> as well? Essentially: can I leave it at its current value or should I reset
> it to default?
Only PG removal, which is why we set it high enough that it
effectively disables that process.
Josh
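The thread does not quote the exact option name Josh is referring to, but the check-and-reset pattern he describes can be sketched with the generic `ceph config` commands (OPTION_NAME below is a placeholder, not the real setting):

```shell
# OPTION_NAME is a stand-in; the thread does not name the setting.
ceph config get osd OPTION_NAME      # value currently set cluster-wide
ceph config show osd.0 OPTION_NAME   # effective value on a running OSD
ceph config rm osd OPTION_NAME       # drop the override, back to default
```

Since the setting only affects PG removal, leaving the high value in place is harmless per Josh's answer; `ceph config rm` is the way back if you want defaults.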
________________________________
Sent: …56 PM
To: Wesley Dillingham
Cc: Frank Schilder; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Procedure for temporary evacuation and replacement
Is this a high-object-count application (S3 or small files in cephfs)?
My guess is that they're going down at the end of PG deletions, where
a ro…
> > I will report back how the
> > replacement+rebalancing is going.
> >
> > Best regards,
> > =========
> > Frank Schilder
> > AIT Risø Campus
> > Bygning 109, rum S14
Best regards,
=========
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Wesley Dillingham
Sent: Thursday, October 17, 2024 3:28 PM
To: Frank Schilder
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Procedure for temporary evacuation and replacement
Interesting, and yeah, it does sound like a bug of sorts. I would consider
increasing your osd_heartbeat_grace (at the global level).
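A sketch of what raising the grace period globally might look like (the value 45 is illustrative, not from the thread):

```shell
# Default heartbeat grace is 20 seconds
ceph config get osd osd_heartbeat_grace
# Raise it cluster-wide so OSDs that are briefly unresponsive
# (e.g. busy with PG deletion) are not flagged down prematurely
# (45 s is an illustrative value)
ceph config set global osd_heartbeat_grace 45
```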
Best regards,
=========
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Frank Schilder
Sent: Friday, October 11, 2024 12:18 PM
To: Robert Sander; ceph-users@ceph.io
Subject: [ceph-users] Re: Procedure for temporary evacuation and replacement
Hi Robert,
thanks, that solves it then.
Best regards,
=========
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Robert Sander
Sent: Friday, October 11, 2024 10:20 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Procedure for temporary evacuation and replacement
On 10/11/24 10:07, Frank Schilder wrote:
Only problem is that setting an OSD OUT might not be sticky. If the OSD reboots
for some reason it might mark itself IN again.
The Ceph cluster distinguishes between manually marked out ("ceph osd
out N") and automatically marked out, when an OSD is down for too long.
Only an automatically marked-out OSD is marked IN again when it comes back up.
From: Wesley Dillingham
Sent: Thursday, October 10, 2024 8:04 PM
To: Frank Schilder
Cc: Anthony D'Atri; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Procedure for temporary evacuation and replacement
If you are replacing the OSDs with the same size/weight device, I agree
with your reweight approach. I've been doing some similar work myself that
does require crush reweighting to 0 and have been in that headspace.
I did a bit of testing around this:
- Even with the lowest possible reweight an O…
Thanks Anthony and Wesley for your input.
Let me explain in more detail why I'm interested in the somewhat
obscure-looking procedure in step 1.
What's the difference between "ceph osd reweight" and "ceph osd crush reweight"?
The difference is that the first command only remaps shards within the same failure domain…
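A minimal illustration of the two commands (osd.12 is a hypothetical id):

```shell
# Override weight (range 0..1): drains PGs off osd.12, but the
# CRUSH weight of the OSD and its host bucket are unchanged, so
# data largely remaps within the same failure domain
ceph osd reweight 12 0
# CRUSH weight: removes osd.12's capacity from the CRUSH tree,
# shrinking the host bucket too, so data can move across hosts
ceph osd crush reweight osd.12 0
```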
I don't think your plan will work as expected.
In step 3 you will introduce additional data movement with the manner in
which you have tried to accomplish this.
I suggest you set the CRUSH weight to 0 for the OSDs you intend
to replace; do this for all OSDs you wish to replace whilst th…
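Wesley's suggestion, sketched as a loop (the osd ids are hypothetical):

```shell
# Zero the CRUSH weight of every OSD slated for replacement,
# then let recovery drain them before pulling the disks
for id in 12 37 58; do
    ceph osd crush reweight "osd.${id}" 0
done
# Watch recovery progress; replace disks only when it completes
ceph -s
```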
>
> We need to replace about 40 disks distributed over all 12 hosts backing a
> large pool with EC 8+3. We can't do it host by host as it would take way too
> long (replace disks per host and let recovery rebuild the data)
This is one of the false economies of HDDs ;)
> Therefore, we would l…
For 1s I thought you were in Florida!
________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io