Thursday, January 1, 2015 12:50 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] redundancy with 2 nodes
On 01/01/15 23:16, Christian Balzer wrote:
Hello,
On Thu, 01 Jan 2015 18:25:47 +1300 Mark Kirkwood wrote:
but I agree that you should probably not get a HEALTH OK status when you
have just set up 2 (or in fact any even number of) monitors... HEALTH WARN
would make more sense, with a wee message.
Hi,
I noticed this message after shutting down the other node. You might be
right that I need 3 monitors.
2015-01-01 15:47:35.990260 7f22858dd700 0 monclient: hunting for new mon
But what is quite unexpected is that you cannot even run "ceph status"
on the running node to find out the state of the cluster.
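The "hunting for new mon" loop and the hanging "ceph status" have the same cause: every ceph CLI command is answered by the monitors, and they only answer while a majority of them form a quorum, which one surviving monitor out of two cannot do. One thing that should still respond without quorum is the surviving monitor's local admin socket. A rough sketch, assuming the monitor on ceph1 is still running and uses the default socket path:
cephadmin@ceph1:~$ sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
# asks this one daemon directly for its own view: its state
# (probing/electing/leader/peon), the monmap it knows, and the mons it can reach
While the peer is down it will typically report "probing" or "electing", which at least confirms why the cluster-wide commands get no answer.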
Hello,
On Thu, 01 Jan 2015 18:25:47 +1300 Mark Kirkwood wrote:
> The number of monitors recommended and the fact that a voting quorum is
> the way it works is covered here:
>
> http://ceph.com/docs/master/rados/deployment/ceph-deploy-mon/
>
> but I agree that you should probably not get a HEALTH OK status when you
> have just set up 2 (or in fact any even number of) monitors... HEALTH WARN
> would make more sense, with a wee message.
The number of monitors recommended and the fact that a voting quorum is
the way it works is covered here:
http://ceph.com/docs/master/rados/deployment/ceph-deploy-mon/
but I agree that you should probably not get a HEALTH OK status when you
have just set up 2 (or in fact any even number of) monitors... HEALTH WARN
would make more sense, with a wee message.
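Following the ceph-deploy doc linked above, adding a third monitor is a short operation once a host for it is available. A rough sketch, assuming ceph-deploy is already in use on the admin node and a hypothetical third host named ceph3 that already carries the cluster's ceph.conf:
cephadmin@ceph1:~$ ceph-deploy mon add ceph3
# sets up /var/lib/ceph/mon/ceph-ceph3 on the new host, starts the mon daemon,
# and the new monitor then joins the existing monmap
# (you may also want ceph3 listed in mon_host in ceph.conf so clients try it)
cephadmin@ceph1:~$ ceph -s
# once all three mons show up in the quorum line, losing any single node
# no longer takes the monitors down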
Hi,
I think you are right. I was too focused on the following line in the docs:
"A cluster will run fine with a single monitor; however, a single
monitor is a single-point-of-failure." I will try to add another
monitor. Hopefully, this will fix my issue.
Anyway, I think that "ceph status" or "c
On Thu, 01 Jan 2015 14:59:05 +1100 Jiri Kanicky wrote:
>
> monmap e1: 2 mons at
> {ceph1=192.168.30.21:6789/0,ceph2=192.168.30.22:6789/0}, election epoch
> 12, quorum 0,1 ceph1,ceph2
>
That's your problem; re-read the Ceph documentation about Paxos.
You need a third monitor to retain a viable quorum.
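The arithmetic behind that: the monitors run Paxos, and the cluster only serves requests while a strict majority of the monitors in the monmap agree. Roughly:
majority(n) = floor(n/2) + 1
majority(2) = 2   -> with 2 mons, losing either one means no quorum at all
majority(3) = 2   -> with 3 mons, any single monitor can be down
majority(4) = 3   -> a 4th monitor still only tolerates one failure
which is why an even number of monitors adds no failure tolerance over the next odd number down.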
On Thu, 1 Jan 2015 03:46:33 PM Jiri Kanicky wrote:
> Hi,
>
> I have:
> - 2 monitors, one on each node
> - 4 OSDs, two on each node
> - 2 MDS, one on each node
POOMA U here, but I don't think you can reach quorum with one out of two
monitors; you need an odd number:
http://ceph.com/docs/master/ra
Hi,
I have:
- 2 monitors, one on each node
- 4 OSDs, two on each node
- 2 MDS, one on each node
Yes, all pools are set with size=2 and min_size=1
cephadmin@ceph1:~$ ceph osd dump
epoch 88
fsid bce2ff4d-e03b-4b75-9b17-8a48ee4d7788
created 2014-12-27 23:38:00.455097
modified 2014-12-30 20:45:51.3
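For reference, the size=2 / min_size=1 settings mentioned above are per-pool and can be checked or changed from the CLI. A rough sketch, using "rbd" purely as an example pool name:
cephadmin@ceph1:~$ ceph osd pool get rbd size
cephadmin@ceph1:~$ ceph osd pool get rbd min_size
cephadmin@ceph1:~$ ceph osd pool set rbd min_size 1
# min_size 1 lets placement groups keep serving I/O with a single replica left,
# but that only covers the OSD side; monitor quorum is a separate requirement
Note that these commands themselves go through the monitors, so they are of no help while only one of two mons is up.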
On Thu, 1 Jan 2015 02:59:05 PM Jiri Kanicky wrote:
> I would expect that if I shut down one node, the system will keep
> running. But when I tested it, I could not even execute the "ceph status"
> command on the running node.
2 OSD nodes, 3 mon nodes here; works perfectly for me.
How many monitors do you have?
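On a cluster that still has quorum (like the 3-mon setup above), a quick way to answer that is the monitor map summary; a sketch:
ceph mon stat
# prints the monmap epoch, each monitor's name and address,
# and which of them are currently in quorum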