Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-12-01 Thread Piotr Dzionek
-Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Piotr Dzionek Sent: 30 November 2016 11:04 To: Brad Hubbard Cc: Ceph Users Subject: Re: [ceph-users] - cluster stuck and undersized if at least one osd is down Hi, Ok, but I still don't get what adva

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-30 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Piotr Dzionek > Sent: 30 November 2016 11:04 > To: Brad Hubbard > Cc: Ceph Users > Subject: Re: [ceph-users] - cluster stuck and undersized if at least one osd >

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-30 Thread Piotr Dzionek
Hi, Ok, but I still don't get what advantage I would get from blocked IOs. If I set size=2 and min_size=2 and during a rebuild another disk dies on the other node, I will lose data. I know that I should set size=3; it is much safer. But I don't see what the advantage of blocked IO is? May
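[For reference, the settings being debated here can be changed at runtime with the standard pool commands. A minimal sketch, assuming the pool is still the one named 'data' from the original post at the bottom of this thread:

    # require 3 copies, keep serving IO as long as at least 2 are available
    ceph osd pool set data size 3
    ceph osd pool set data min_size 2

With min_size=2 a PG stops serving IO as soon as fewer than two copies are available, so a second disk failure during recovery blocks IO to the affected PGs rather than continuing to write to a single remaining copy; that lone copy is exactly the state in which one more failure means data loss, which is the advantage being asked about.]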

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-29 Thread Brad Hubbard
On Wed, Nov 30, 2016 at 1:54 PM, Christian Balzer wrote: > > Hello, > > On Wed, 30 Nov 2016 13:39:50 +1000 Brad Hubbard wrote: > >> >> >> On Tue, Nov 29, 2016 at 11:37 PM, Piotr Dzionek >> wrote: >> > Hi, >> > >> > As far as I understand if I set pool size 2, there is a chance to lose >> > d

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-29 Thread Christian Balzer
Hello, On Wed, 30 Nov 2016 13:39:50 +1000 Brad Hubbard wrote: > > > On Tue, Nov 29, 2016 at 11:37 PM, Piotr Dzionek > wrote: > > Hi, > > > > As far as I understand if I set pool size 2, there is a chance to lose data > > when another OSD dies while a rebuild is ongoing. However, it has
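[A quick way to confirm what a pool is actually configured for, a sketch using the pool name 'data' from the original post:

    ceph osd pool get data size
    ceph osd pool get data min_size

On the pool dumped in the original post these would return size: 2 and min_size: 1, which is the combination being warned about in this part of the thread.]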

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-29 Thread Brad Hubbard
On Tue, Nov 29, 2016 at 11:37 PM, Piotr Dzionek wrote: > Hi, > > As far as I understand if I set pool size 2, there is a chance to lose data > when another OSD dies while a rebuild is ongoing. However, it has to > occur on a different host, because my crushmap forbids storing replicas >

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-29 Thread Piotr Dzionek
Hi, As far as I understand, if I set pool size 2, there is a chance to lose data when another OSD dies while a rebuild is ongoing. However, it has to occur on a different host, because my crushmap forbids storing replicas on the same physical node. I am not sure what would change if I
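[The host-level separation described here normally comes from the CRUSH rule rather than from the pool itself. A minimal sketch of such a rule, illustrative only since the actual rule in this cluster's crushmap may differ; 'ruleset 0' matches the crush_ruleset 0 shown in the pool dump:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

The 'chooseleaf firstn 0 type host' step is what prevents two replicas of the same PG from landing on OSDs of the same physical node.]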

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-29 Thread Piotr Dzionek
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Piotr Dzionek [piotr.dzio...@seqr.com] Sent: Monday, November 28, 2016 4:54 AM To: ceph-users@lists.ceph.com Subject: [ceph-users] - cluster stuck

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-28 Thread Brad Hubbard
On Mon, Nov 28, 2016 at 9:54 PM, Piotr Dzionek wrote: > Hi, > I recently installed a 3-node Ceph cluster, v10.2.3. It has 3 mons and 12 > osds. I removed the default pool and created the following one: > > pool 7 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash > rjenkins pg_num 1024
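[When one OSD in such a pool goes down, the resulting state can be inspected with the usual commands; a sketch, using commands available in the Jewel 10.2.x release the cluster is running:

    ceph health detail              # lists the PGs reported undersized/degraded
    ceph pg dump_stuck undersized   # shows which PGs are stuck undersized
    ceph osd tree                   # shows which OSD is down and on which host

With size 2, min_size 1 and host-level replication, PGs whose second copy lived on the down OSD stay active+undersized+degraded until that OSD comes back, or is marked out and recovery completes.]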

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-28 Thread David Turner
h-users-boun...@lists.ceph.com] on behalf of Piotr Dzionek [piotr.dzio...@seqr.com] Sent: Monday, November 28, 2016 4:54 AM To: ceph-users@lists.ceph.com Subject: [ceph-users] - cluster stuck and undersized if at least one osd is down Hi, I recently installed a 3-node Ceph cluster, v10.2.3. It has 3 mons, and

[ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-28 Thread Piotr Dzionek
Hi, I recently installed a 3-node Ceph cluster, v10.2.3. It has 3 mons and 12 osds. I removed the default pool and created the following one: pool 7 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 126 flags hashpspool stripe_width 0
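[For readers following along, a pool with those settings would be created roughly like this; a sketch, with the name and pg/pgp counts taken from the dump above:

    ceph osd pool create data 1024 1024 replicated
    ceph osd pool set data size 2
    ceph osd pool set data min_size 1

The rest of the thread is essentially about whether size 2 / min_size 1 is a sensible combination on a 3-node, 12-OSD cluster, and what behaviour to expect when a single OSD goes down.]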