Subject: Re: [ceph-users] Proper procedure to replace DB/WAL SSD

On Wed, May 2, 2018 at 12:18 PM, Nicolas Huillard wrote:

On Sunday, 08 April 2018 at 20:40, Jens-U. Mozdzen wrote:
> sorry for bringing up that old topic again, but we just faced a
> corresponding situation and have successfully tested two migration
> scenarios.

Thank you very much for this update, as I needed to do exactly that,
due to an ...
Thanks for making this clear.
Dietmar
On 02/27/2018 05:29 PM, Alfredo Deza wrote:
On Tue, Feb 27, 2018 at 11:13 AM, Dietmar Rieder wrote:
> ... however, it would be nice if ceph-volume would also create the
> partitions for the WAL and/or DB if needed. Is there a special reason
> why this is not implemented?

Yes, the reason is that this was one of the most painful points in
ceph-disk ...
... however, it would be nice if ceph-volume would also create the
partitions for the WAL and/or DB if needed. Is there a special reason
why this is not implemented?
Dietmar
On 02/27/2018 04:25 PM, David Turner wrote:
Gotcha. As a side note, that setting is only used by ceph-disk, as
ceph-volume does not create partitions for the WAL or DB. You need to
create those partitions manually if using anything other than a whole block
device when creating OSDs with ceph-volume.
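For what it's worth, a minimal sketch of that manual preparation, assuming GPT
partitioning with sgdisk; the device names (/dev/sdf for the shared SSD,
/dev/sdb and /dev/sdc for the data disks) and the 30G size are placeholders,
not recommendations:

  # carve one DB partition per OSD out of the shared SSD
  sgdisk --new=1:0:+30G --change-name=1:'ceph block.db sdb' /dev/sdf
  sgdisk --new=2:0:+30G --change-name=2:'ceph block.db sdc' /dev/sdf
  # then point ceph-volume at the pre-made partitions
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdf1
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdf2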
On Tue, Feb 27, 2018 at 8:20 AM, Caspar Smit wrote:
David,
Yes, I know; I use 20GB partitions for 2TB disks as journal. It was just to
inform other people that Ceph's default of 1GB is pretty low.
Now that I read my own sentence, it indeed looks as if I was using 1GB
partitions; sorry for the confusion.
Caspar
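For reference, the partition sizes ceph-disk creates come from ceph.conf; a
sketch only, with the byte values purely examples, not recommendations:

  [global]
  bluestore_block_db_size  = 32212254720   # 30 GiB DB partition per OSD
  bluestore_block_wal_size = 2147483648    # 2 GiB WAL partition per OSD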
2018-02-27 14:11 GMT+01:00 David Turner :
If you're only using a 1GB DB partition, there is a very real possibility
it's already 100% full. The safe estimate for DB size seems to be 10GB/1TB,
so for a 4TB OSD a 40GB DB should work for most use cases (except loads and
loads of small files). There are a few threads that mention how to check
how full the DB currently is ...
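One way to check that, assuming you can reach the OSD's admin socket on its
host (osd.12 and the jq filter are just an example; the bluefs counters are
reported in bytes):

  ceph daemon osd.12 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'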
2018-02-26 18:02 GMT+01:00 David Turner :
I'm glad that I was able to help out. I wanted to point out that the
reason those steps worked for you as quickly as they did is likely that you
configured your block.db to use the /dev/disk/by-partuuid/{guid} instead
of /dev/sdx#. Had you configured your OSDs with /dev/sdx#, then you would
have ...
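To check what a given OSD is using, something along these lines should work
(the OSD id, device name and layout assume a typical ceph-disk deployment,
and the OSD should be stopped before re-pointing anything):

  ls -l /var/lib/ceph/osd/ceph-*/block.db
  # if a link points at /dev/sdf1 instead of a stable name, look up its PARTUUID ...
  blkid /dev/sdf1
  # ... and re-point the symlink to the stable path
  ln -sf /dev/disk/by-partuuid/<partuuid-from-blkid> /var/lib/ceph/osd/ceph-3/block.db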
2018-02-24 7:10 GMT+01:00 David Turner :
Caspar, it looks like your idea should work. Worst case scenario seems like
the OSD wouldn't start, you'd put the old SSD back in and go back to the
idea to weight them to 0, backfilling, then recreate the OSDs. Definitely
worth a try in my opinion, and I'd love to hear your experience after.
Nico,
A very interesting question, and I would add the follow-up question:
Is there an easy way to add an external DB/WAL device to an existing
OSD?
I suspect that it might be something along the lines of (see the sketch
below):
- stop the osd
- create a link in ...ceph/osd/ceph-XX/block.db to the target device
- (maybe run some ...)
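The outline above was speculation at the time of writing; for what it's
worth, newer Ceph releases ship a ceph-bluestore-tool subcommand for exactly
this. A rough sketch, with the OSD id and target partition as placeholders,
and only valid where your release actually has bluefs-bdev-new-db:

  systemctl stop ceph-osd@12
  # attach a new external block.db device to the existing OSD
  ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-12 --dev-target /dev/sdf1
  # make sure the ceph user can open the new device (udev rules / ownership), then
  systemctl start ceph-osd@12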
On 23/02/2018 14:27, Caspar Smit wrote:

Hi All,
What would be the proper way to preventively replace a DB/WAL SSD (when it
is nearing its DWPD/TBW limit and has not failed yet)?
It hosts DB partitions for 5 OSDs.
Maybe something like the following (sketched as commands below):
1) ceph osd reweight 0 the 5 OSDs
2) let backfilling complete
3) destroy/remove the 5 OSDs
4) replace ...
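A rough sketch of those steps as commands, assuming Luminous-era tooling;
the OSD ids 10-14 and the device names in the last step are placeholders:

  # 1) drain the five OSDs that share the worn SSD
  for id in 10 11 12 13 14; do ceph osd reweight $id 0; done
  # 2) wait until backfill finishes and all PGs are active+clean
  ceph -s
  # 3) remove the OSDs
  for id in 10 11 12 13 14; do
      systemctl stop ceph-osd@$id
      ceph osd purge $id --yes-i-really-mean-it
  done
  # 4) replace the SSD, then recreate the OSDs, e.g.
  #    ceph-volume lvm create --data /dev/sdb --block.db /dev/sdf1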