-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Piotr Dzionek
Sent: 30 November 2016 11:04
To: Brad Hubbard
Cc: Ceph Users
Subject: Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

Hi,

Ok, but I still don't get what advantage I would get from blocked IOs.
If I set size=2 and min_size=2 and another disk dies on the other node
during a rebuild, I will lose data. I know that I should set size=3; it
is much safer. But I don't see what the advantage of blocked IO is.
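A minimal sketch of the min_size behaviour under discussion (my own illustration, not Ceph source code): a PG serves IO only while its acting set still holds at least min_size replicas, so size=2/min_size=2 trades availability for safety by refusing writes that would exist on a single disk.

```python
# Toy model (assumption for illustration, not Ceph's implementation):
# a PG accepts IO only when the number of up replicas in its acting
# set is at least the pool's min_size.

def pg_io_allowed(acting_replicas: int, min_size: int) -> bool:
    """Return True if the PG keeps serving IO with this many live replicas."""
    return acting_replicas >= min_size

# size=2, min_size=1: one OSD down -> IO continues on a single copy (risky).
assert pg_io_allowed(acting_replicas=1, min_size=1) is True

# size=2, min_size=2: one OSD down -> IO blocks until recovery completes,
# so no new write can exist only on the one remaining disk.
assert pg_io_allowed(acting_replicas=1, min_size=2) is False
```

The point of the blocked IO is exactly the second case: during the window where only one copy is left, no *new* data is accepted that a second failure could destroy.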
On Wed, Nov 30, 2016 at 1:54 PM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 30 Nov 2016 13:39:50 +1000 Brad Hubbard wrote:
>
>> On Tue, Nov 29, 2016 at 11:37 PM, Piotr Dzionek wrote:
>> > Hi,
>> >
>> > As far as I understand if I set pool size 2, there is a chance to lose
>> > data
Hi,

As far as I understand, if I set pool size 2, there is a chance to lose
data when another osd dies while a rebuild is ongoing. However, it has
to occur on a different host, because my crushmap forbids storing
replicas on the same physical node. I am not sure what would change if I
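The host-separation point can be sketched in a few lines (a toy illustration under my own assumptions, not Ceph's actual CRUSH code): with a chooseleaf-by-host rule each replica lands on a different node, so a second disk failure on the *same* node as the first can never take out both copies of a PG.

```python
# Hypothetical 3-node layout matching the cluster in this thread
# (3 hosts x 4 osds); names are illustrative only.
hosts = {
    "node1": ["osd.0", "osd.1", "osd.2", "osd.3"],
    "node2": ["osd.4", "osd.5", "osd.6", "osd.7"],
    "node3": ["osd.8", "osd.9", "osd.10", "osd.11"],
}

def host_of(osd: str) -> str:
    """Find which host an OSD belongs to."""
    return next(h for h, osds in hosts.items() if osd in osds)

def data_lost(replica_osds: list, failed_osds: set) -> bool:
    """A PG's data is gone only if every one of its replicas failed."""
    return all(osd in failed_osds for osd in replica_osds)

# size=2 with replicas forced onto different hosts:
replicas = ["osd.0", "osd.4"]  # node1 + node2

# Two failures on the SAME host hit at most one replica -> data survives.
assert not data_lost(replicas, {"osd.0", "osd.1"})

# Failures on two DIFFERENT hosts can hit both replicas -> data lost.
assert data_lost(replicas, {"osd.0", "osd.4"})
```

This is why the dangerous second failure has to occur on a different host, as described above.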
*From:* ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of
Piotr Dzionek [piotr.dzio...@seqr.com]
*Sent:* Monday, November 28, 2016 4:54 AM
*To:* ceph-users@lists.ceph.com
*Subject:* [ceph-users] - cluster stuck and undersized if at least one osd is down
Hi,

I recently installed a 3 node ceph cluster, v.10.2.3. It has 3 mons and
12 osds. I removed the default pool and created the following one:

pool 7 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 126 flags hashpspool
stripe_width 0
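A rough sketch of why this pool reports undersized PGs the moment one OSD goes down (my own simulation, not Ceph code; the PG-to-OSD mapping here is random rather than CRUSH-computed): every PG whose acting set included the dead OSD drops below size=2, but with min_size=1 each of those PGs still has one live replica and therefore stays active while it is degraded.

```python
# Toy simulation of the pool above: 1024 PGs, size=2, min_size=1,
# spread over 12 OSDs. Placement is random here for illustration;
# real Ceph uses CRUSH to compute the acting sets.
import random

random.seed(1)
SIZE, MIN_SIZE, PGS, OSDS = 2, 1, 1024, 12

# Map each PG to SIZE distinct OSDs.
pg_map = {pg: random.sample(range(OSDS), SIZE) for pg in range(PGS)}

down_osd = 7  # arbitrary choice of a failed OSD

# Any PG that had a replica on the dead OSD is now below size -> undersized.
undersized = [pg for pg, acting in pg_map.items() if down_osd in acting]

# Those PGs keep 1 replica; 1 >= min_size=1, so they all remain active.
still_active = [pg for pg in undersized if (SIZE - 1) >= MIN_SIZE]

print(f"{len(undersized)} PGs undersized, {len(still_active)} still active")
```

With min_size=1 the cluster stays writable but every one of those PGs has a single copy until recovery finishes, which is the risk window discussed later in the thread.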