Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Tu Holmes
I am kind of a newbie, but I thought you needed a minimum of 2 mons working. You should split those away onto some really budget hardware. //Tu

Hello Tu, yes that's correct. The mons run on the OSD nodes as well, so I have 3 nodes in total: OSD, MDS and mon on each node. Regards - Willi
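For context: with 3 monitors, a majority of 2 must stay up to keep quorum, so powering off 2 of the 3 nodes in this setup takes the mons below quorum and the whole cluster stops responding. A minimal sketch of how to check this, assuming the commands are run from a node that can still reach a monitor majority:

    # list the monitors currently in quorum and the election state
    ceph quorum_status --format json-pretty

    # quick one-line summary of monitor membership and quorum
    ceph mon stat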

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler
Hello Sean, great. Thank you for your feedback. Have a nice Sunday. Regards - Willi

On 03.07.16 at 10:00, Sean Redmond wrote: Hi, you will need 2 mons to be online. Thanks

On 3 Jul 2016 8:58 a.m., "Willi Fehler" wrote: Hello Tu, yes that's correct.

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Sean Redmond
Hi, you will need 2 mons to be online. Thanks

On 3 Jul 2016 8:58 a.m., "Willi Fehler" wrote:
> Hello Tu,
> yes that's correct. The mons run on the OSD nodes as well, so I have 3 nodes in total: OSD, MDS and mon on each node.
> Regards - Willi
> On 03.07.16 at 09:56, Tu Holmes wrote:

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler
Hello Tu, yes that's correct. The mons run on the OSD nodes as well, so I have 3 nodes in total: OSD, MDS and mon on each node. Regards - Willi

On 03.07.16 at 09:56, Tu Holmes wrote: Where are your mon nodes? Were you mixing mon and OSD together? Are 2 of the mon nodes down as well?

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Tu Holmes
Where are your mon nodes? Were you mixing mon and OSD together? Are 2 of the mon nodes down as well?

On Jul 3, 2016 12:53 AM, "Willi Fehler" wrote:
> Hello Sean,
> I've powered down 2 nodes, so 6 of 9 OSDs are down. But my client can't read or write anymore from my Ceph mount. Also 'ceph -s' hangs.
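A quick way to answer those questions, as a sketch (assuming an admin keyring is available and a monitor majority is still reachable, since every ceph command needs quorum):

    # show the monitor map: which hosts run a mon
    ceph mon dump

    # show the CRUSH tree: which hosts the OSDs live on, and which are down
    ceph osd tree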

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler
Hello Sean, I've powered down 2 nodes, so 6 of 9 OSDs are down. But my client can't read or write anymore from my Ceph mount. Also 'ceph -s' hangs.

pool 1 'cephfs_data' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 300 pgp_num 300 last_change 447 flags hashpspool
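The dump line above already shows cephfs_data at size 3 / min_size 1, but the CephFS metadata pool has its own min_size and is worth checking too. A minimal sketch, assuming the usual companion pool name cephfs_metadata:

    # replication settings for all pools at a glance
    ceph osd dump | grep 'replicated size'

    # per-pool values (cephfs_metadata is an assumed name here)
    ceph osd pool get cephfs_data min_size
    ceph osd pool get cephfs_metadata min_size

Even with min_size 1 on both pools, 'ceph -s' will still hang once 2 of the 3 mons are down, because the monitors themselves have lost quorum.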

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Sean Redmond
It would need to be set to 1.

On 3 Jul 2016 8:17 a.m., "Willi Fehler" wrote:
> Hello David,
> so in a 3-node cluster, how should I set min_size if I want 2 nodes to be able to fail?
> Regards - Willi
> On 28.06.16 at 13:07, David wrote:
> Hi, this is probably the min_size on your cephfs data and/or metadata pool.
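In concrete terms that would look roughly like the following (pool names assumed to be cephfs_data and cephfs_metadata). Note the trade-off: min_size 1 lets I/O continue with a single surviving copy, so anything written during the outage exists on only one OSD until recovery completes.

    # allow client I/O as long as at least one replica is available
    ceph osd pool set cephfs_data min_size 1
    ceph osd pool set cephfs_metadata min_size 1

And as noted earlier in the thread, this only covers the OSD side; with 2 of 3 nodes powered off the monitors are out of quorum anyway, so the cluster still stops.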

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler
Hello David, so in a 3-node cluster, how should I set min_size if I want 2 nodes to be able to fail? Regards - Willi

On 28.06.16 at 13:07, David wrote: Hi, this is probably the min_size on your cephfs data and/or metadata pool. I believe the default is 2; if you have fewer than 2 replicas available, I/O will stop.

Re: [ceph-users] How many nodes/OSD can fail

2016-06-28 Thread David
Hi, this is probably the min_size on your cephfs data and/or metadata pool. I believe the default is 2; if you have fewer than 2 replicas available, I/O will stop. See: http://docs.ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas

On Tue, Jun 28, 2016 at 10:23 AM, willi.feh...@t-online.de wrote:
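The linked page covers the size/min_size settings; a short sketch of how to inspect and change them for the data pool (the metadata pool works the same way):

    # current replica count for the cephfs data pool
    ceph osd pool get cephfs_data size

    # set the number of object replicas, as described on the page above
    ceph osd pool set cephfs_data size 3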

[ceph-users] How many nodes/OSD can fail

2016-06-28 Thread willi.feh...@t-online.de
Hello, I'm still very new to Ceph. I've created a small test cluster:

ceph-node1: osd0 osd1 osd2
ceph-node2: osd3 osd4 osd5
ceph-node3: osd6 osd7 osd8

My pool for CephFS has a replication count of 3. I've powered off 2 nodes (6 OSDs went down), my cluster status became critical, and my ceph cli
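For reference, the usual commands for seeing why the status went critical, a sketch assuming they are run while the monitors still have quorum (with 2 of 3 nodes powered off they will simply hang, as reported later in the thread):

    # overall cluster state, including degraded/undersized PG counts
    ceph -s

    # detailed breakdown of the health warnings and errors
    ceph health detail

    # how many OSDs are up and in
    ceph osd stat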