Re: [ceph-users] [CLUSTER STUCK] Luminous cluster stuck when adding monitor

2017-10-08 Thread Nico Schottelius
Not sure if I mentioned it before: adding a new monitor also puts the whole cluster into a stuck state. Some minutes ago I did: root@server1:~# ceph mon add server2 2a0a:e5c0::92e2:baff:fe4e:6614 port defaulted to 6789; adding mon.server2 at [2a0a:e5c0::92e2:baff:fe4e:6614]:6789/0 And then started
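[Editor's note: the `ceph mon add` step quoted above is only the first half of manually adding a monitor; the new daemon also has to be initialised on its own host. A rough sketch of the usual manual procedure, using the hostname and address from the thread (paths under /tmp are illustrative):

```shell
# On an existing cluster node: register the new monitor in the monmap
ceph mon add server2 2a0a:e5c0::92e2:baff:fe4e:6614

# On server2: fetch the mon keyring and current monmap,
# initialise the new monitor's store, then start the daemon
ceph auth get mon. -o /tmp/mon.keyring
ceph mon getmap -o /tmp/monmap
ceph-mon -i server2 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
systemctl start ceph-mon@server2
```

Note that on a two-monitor interim quorum, the cluster pauses if the mons cannot agree, which may explain the "stuck" symptom described in this thread.]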

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-08 Thread David Turner
That's correct. It doesn't matter how many copies of the data you have in each datacenter. The mons control the maps and you should be good as long as you have 1 mon per DC. You should test this to see how the recovery goes, but there shouldn't be a problem. On Sat, Oct 7, 2017, 6:10 PM Дробышевск

[ceph-users] bluestore - how to remove an object that is crashing an osd

2017-10-08 Thread Marek Grzybowski
Hi, I have a single-node cephfs on top of an EC pool on top of bluestore (my dream setup ;) ). I hit a bug that is crashing my osd (first only one, then a second before the data backfilled): http://tracker.ceph.com/issues/20997 After some investigation I found that there is an object ("3#22:addebba8:::10
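[Editor's note: the usual way to remove a single damaged object from a stopped bluestore OSD is `ceph-objectstore-tool`. A sketch, assuming the OSD id and PG id (22.15 here) are placeholders; the object JSON must be copied verbatim from the `--op list` output, since the object name in the preview above is truncated:

```shell
# Stop the affected OSD so its store can be opened offline
systemctl stop ceph-osd@22

# List objects in the suspect PG to find the exact object spec
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-22 \
    --pgid 22.15 --op list

# Remove the bad object using the JSON spec printed by the list step
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-22 \
    '<object-json-from-list-output>' remove

systemctl start ceph-osd@22
```

On an EC pool this should be done shard by shard on each OSD holding the object, and only as a last resort, since it deletes data.]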

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-08 Thread Peter Linder
Oh, you mean monitor quorum is enforced? I never really considered that. However, I think I found another solution: I created a second tree called "ldc" and under it I made 3 "logical datacenters" (waiting for a better name) and grouped the servers under it so that one logical datacenter contains
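[Editor's note: building a second, parallel CRUSH tree like the "ldc" hierarchy described above generally means editing the CRUSH map by hand, because `ceph osd crush move` would pull hosts out of the default tree rather than reference them twice. A sketch of the standard decompile/edit/recompile workflow (file names are illustrative):

```shell
# Export the binary CRUSH map and decompile it to editable text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt: add a root "ldc" containing three logical
# datacenter buckets that reference the existing host buckets,
# plus a rule that chooses across them

# Recompile and inject the modified map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

`crushtool --test` can be run against crushmap.new first to verify the new rule spreads PGs across the logical datacenters as intended.]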

[ceph-users] Snapshot space

2017-10-08 Thread Josy
Hello, I noticed that when we create a snapshot of a clone, the first snapshot seems to be quite large. For example: the clone VM is taking up 136 MB according to rbd du. First snapshot: 10 GB. Second snapshot: 104 MB. Third snapshot: 57 MB. The clone is a Windows virtual machine, which does take ar
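[Editor's note: the pattern described above is expected behaviour: `rbd du` reports allocated extents, so the first snapshot of a clone accounts for everything written since the clone was taken, while later snapshots only account for the delta since the previous one. A minimal sketch for inspecting this, with `pool/clone-vm` as a hypothetical image name:

```shell
# Per-snapshot provisioned vs used space for the image
rbd du pool/clone-vm

# List the snapshots themselves
rbd snap ls pool/clone-vm
```
]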

Re: [ceph-users] [CLUSTER STUCK] Luminous cluster stuck when adding monitor

2017-10-08 Thread Nico Schottelius
After spending some hours debugging packets on the wire, without seeing a good reason for things not to work, the monitor on server2 eventually joined the quorum. We were happy for some time, and then our alerting sent a message that the quorum was lost. And indeed, the monitor on server2 died an

Re: [ceph-users] [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor

2017-10-08 Thread Joao Eduardo Luis
This looks a lot like a bug I fixed a week or so ago, but for which I currently don't recall the ticket off the top of my head. It was basically a crash each time a "ceph osd df" was called, if a mgr was not available after having set the luminous osd require flag. I will check the log in the morni

Re: [ceph-users] [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor

2017-10-08 Thread kefu chai
On Mon, Oct 9, 2017 at 8:07 AM, Joao Eduardo Luis wrote: > This looks a lot like a bug I fixed a week or so ago, but for which I > currently don't recall the ticket off the top of my head. It was basically a http://tracker.ceph.com/issues/21300 > crash each time a "ceph osd df" was called, if a
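[Editor's note: the crash referenced above (http://tracker.ceph.com/issues/21300) was triggered by running `ceph osd df` while no mgr was active after the luminous require flag had been set. A quick way to check whether a cluster is in that state before running the command, as a sketch:

```shell
# Show mgr state; "active" should name a running manager daemon
ceph mgr dump

# The command that triggered the reported mon crash
ceph osd df
```
]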

Re: [ceph-users] [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor

2017-10-08 Thread Nico Schottelius
Good morning Joao, thanks for your feedback! We do actually have three managers running:

  cluster:
    id:     26c0c5a8-d7ce-49ac-b5a7-bfd9d0ba81ab
    health: HEALTH_WARN
            1/3 mons down, quorum server5,server3

  services:
    mon: 3 daemons, quorum server5,server3, out of quorum: s