On Fri, 23 Nov 2018 at 15:19, Marco Gaiarin wrote:
>
>
> Previous (partial) node failures and my current experiments with adding a
> node have led me to conclude that, when rebalancing is needed, Ceph also
> rebalances intra-node: e.g., if an OSD in a node dies, its data is
> rebalanced across all OSDs, even if I have pool replication (size) 3 and 3 nodes.
Greetings,
You need to set the following configuration option under [osd] in your
ceph.conf file for your new OSDs.
[osd]
osd_crush_initial_weight = 0
This will ensure your new OSDs come up with a CRUSH weight of 0, thus preventing
the automatic rebalance you are seeing.
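(A quick way to verify once the option is in place and the OSDs are created;
nothing below is specific to your cluster:)

    ceph osd tree    # new OSDs should appear under their host with CRUSH weight 0

You can then raise the weight gradually with 'ceph osd crush reweight'.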
Good luck,
Hi all,
We have 8 OSD hosts, 4 in room 1 and 4 in room 2.
A pool with size = 3 is created using the following CRUSH rule, to cater for
room failure.
rule multiroom {
    id 0
    type replicated
    min_size 2
    max_size 4
    step take default
    step choose firstn 2 type
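The rule is cut off above; for reference, a two-room rule of this shape is
commonly written as follows (a sketch only, the room/host split is an
assumption reconstructed from the description):

rule multiroom {
    id 0
    type replicated
    min_size 2
    max_size 4
    step take default
    step choose firstn 2 type room
    step chooseleaf firstn 2 type host
    step emit
}

With size = 3 this places two copies in one room and one in the other, so the
pool stays serviceable if a whole room fails.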
Background: I'm running single-node Ceph with CephFS as an experimental
replacement for "traditional" filesystems. In this case I have 11 OSDs,
1 mon, and 1 MDS.
I just had an unclean shutdown (kernel panic) while a large (>1TB) file
was being copied to CephFS (via rsync). Upon bringing the system
On 07/11/2018 23:28, Neha Ojha wrote:
> For those who haven't upgraded to 12.2.9 -
>
> Please avoid this release and wait for 12.2.10.
Any idea when 12.2.10 is going to be here, please?
Regards,
Matthew
Previous (partial) node failures and my current experiments with adding a
node have led me to conclude that, when rebalancing is needed, Ceph also
rebalances intra-node: e.g., if an OSD in a node dies, its data is
rebalanced across all OSDs, even if I have pool replication (size) 3 and 3 nodes.
This, indeed, makes per
Hi Daniel, thanks a lot for your help.
Do you know how I can recover the data in this scenario, since I lost
1 node with 6 OSDs?
My configuration had 12 OSDs (6 per host).
Regards
On Wed, Nov 21, 2018 at 3:16 PM Daniel Baumann wrote:
> Hi,
>
> On 11/21/2018 07:04 PM, Rodrigo Embeita wrote:
>
Dear Igor
Thank you for your help!
I am working with Florian.
We have built the ceph-bluestore-tool with your patch on SLES 12SP3.
We will post back the results ASAP.
Best Regards
Francois Scheurer
Forwarded message
Subject: Re: [ceph-users] RocksDB and WAL migration
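(For anyone following along: on releases where ceph-bluestore-tool provides the
bluefs-bdev-* commands, moving the RocksDB DB/WAL to a new device is roughly
the following; the OSD id, paths and target device are placeholders, and the
OSD must be stopped first:)

    systemctl stop ceph-osd@0
    ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 \
        --dev-target /dev/sdb1
    ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
        --devs-source /var/lib/ceph/osd/ceph-0/block \
        --dev-target /var/lib/ceph/osd/ceph-0/block.db
    systemctl start ceph-osd@0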
On Fri, 23 Nov 2018 at 11:08, Marco Gaiarin wrote:
> Reading the Ceph docs led me to believe that 'ceph osd reweight' and
> 'ceph osd crush reweight' were roughly the same: the first is effectively
> 'temporary' and expressed as a fraction (0-1), while the second is
> 'permanent' and expressed, normally, as
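(In practice the difference looks like this; osd.12 and the values are only
examples:)

    ceph osd reweight osd.12 0.8            # temporary override, 0-1, kept outside the CRUSH map
    ceph osd crush reweight osd.12 1.81999  # permanent CRUSH weight, normally the disk size in TiB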
Hello, Paweł Sadowski!
In that message you wrote...
> This is most probably due to the big difference in weights between your hosts
> (the new one has a 20x lower weight than the old ones), which in combination
> with the straw algorithm is a 'known' issue.
Ok. I've reweighted that disk back to '1' and st
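(For anyone comparing their own setup: the per-host weights are easy to eyeball
with the first command below, and newer releases also have a helper to convert
straw buckets to straw2, which copes better with very uneven weights; check
that your version supports it, and note the conversion can move data, so run it
in a quiet window:)

    ceph osd df tree
    ceph osd crush set-all-straw-buckets-to-straw2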
Hello, Paweł Sadowski!
In that message you wrote...
> Exactly, your 'new' OSDs have weights 1.81999 (osd.12, osd.13) and 0.90999
> (osd.14, osd.15). As Jarek pointed out, you should add them using
> 'osd crush initial weight = 0'
> and then use
> 'ceph osd crush reweight osd.x 0.05'
> to slowly increase the weight.
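(A minimal sketch of that gradual bring-in, assuming osd.12 is the new
1.81999-weight disk and that waiting for HEALTH_OK between steps is acceptable;
adjust the step sizes to taste:)

    for w in 0.05 0.1 0.2 0.4 0.8 1.2 1.81999; do
        ceph osd crush reweight osd.12 $w
        # wait for backfill/recovery to finish before the next step
        while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
    done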