[ceph-users] giant release osd down

2014-11-02 Thread Shiv Raj Singh
Hi All I am new to ceph and I have been trying to configure a 3 node ceph cluster with 1 monitor and 2 osd nodes. I have reinstalled and recreated the cluster three times and I am stuck against the wall. My monitor is working as desired (I guess) but the status of the OSDs is down. I am following this…
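
A minimal first-triage sketch for OSDs reported down (standard Ceph CLI of that era; the OSD id and init commands are placeholders, assuming sysvinit or Upstart depending on distro):

    # Overall health, and whether the monitor sees the OSDs at all
    ceph -s
    ceph osd tree

    # On each OSD node: is the daemon actually running?
    sudo /etc/init.d/ceph status osd.0    # sysvinit (e.g. CentOS)
    sudo status ceph-osd id=0             # Upstart (e.g. Ubuntu)

    # If not, the OSD log usually shows why it exited or never started
    less /var/log/ceph/ceph-osd.0.log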

Re: [ceph-users] giant release osd down

2014-11-02 Thread Christian Balzer
Hello, On Mon, 3 Nov 2014 00:48:20 +1300 Shiv Raj Singh wrote: > Hi All > > I am new to ceph and I have been trying to configure a 3 node ceph cluster > with 1 monitor and 2 osd nodes. I have reinstalled and recreated the > cluster three times and I am stuck against the wall. My monitor is > working…

Re: [ceph-users] prioritizing reads over writes

2014-11-02 Thread Chen, Xiaoxi
Hi Simon Does your workload have a lot of read-after-write (RAW)? Ceph takes a RW lock on each object, so if a write to an RBD image is followed by a read that hits the same object, the read latency will be higher. Another possibility is the OSD op_wq: it's a priority queue, but reads and writes have the same priority…
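
One way to check whether read-after-write contention is what you are seeing is to compare read latency with and without a concurrent writer (a sketch; "testpool" is a placeholder pool name):

    # Terminal 1: sustained writes, keeping the objects for the read pass
    rados bench -p testpool 60 write --no-cleanup

    # Terminal 2: sequential reads during the write run; compare the
    # latencies with a read-only run after the writer has stopped
    rados bench -p testpool 60 seq

    # Remove the benchmark objects afterwards
    # (older releases may need: rados -p testpool cleanup --prefix benchmark_data)
    rados -p testpool cleanup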

Re: [ceph-users] giant release osd down

2014-11-02 Thread Gregory Farnum
What happened when you did the OSD prepare and activate steps? Since your OSDs are either not running or can't communicate with the monitors, there should be some indication from those steps. -Greg On Sun, Nov 2, 2014 at 6:44 AM Shiv Raj Singh wrote: > Hi All > > I am new to ceph and I have been…
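
For reference, the prepare/activate sequence in question looks like this with ceph-deploy (host and disk names are placeholders; re-running the steps surfaces the errors Greg is asking about):

    # From the admin node; osd2 and /dev/sdb are placeholders
    ceph-deploy osd prepare osd2:/dev/sdb
    ceph-deploy osd activate osd2:/dev/sdb1

    # Or both steps in one go
    ceph-deploy osd create osd2:/dev/sdb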

Re: [ceph-users] giant release osd down

2014-11-02 Thread Sage Weil
On Mon, 3 Nov 2014, Christian Balzer wrote: > c) But wait, you specified a pool size of 2 in your OSD section! Tough > luck, because since Firefly there is a bug that at the very least prevents > OSD and RGW parameters from being parsed outside the global section (which > incidentally is what the d…
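
The practical workaround is to keep such settings in the [global] section of ceph.conf rather than under [osd] (a sketch; the values match the size-2 example under discussion):

    [global]
    # Parsed reliably here; since Firefly the same lines under [osd]
    # may be silently ignored for pool defaults
    osd pool default size = 2
    osd pool default min size = 1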

Re: [ceph-users] issue with activate osd in ceph with new partition created

2014-11-02 Thread Vickie CH
Are any errors displayed when you execute "ceph-deploy osd prepare"? Best wishes, Mika 2014-10-31 17:36 GMT+08:00 Subhadip Bagui : > Hi, > > Can anyone please help on this > > Regards, > Subhadip…
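
If the prepare step itself looked clean, the disk state and daemon logs are the next place to look (a sketch; era-appropriate ceph-disk tooling is assumed):

    # On the OSD node: did ceph-disk set the data/journal partitions up?
    sudo ceph-disk list

    # OSD daemon logs often show why activation failed
    tail -n 50 /var/log/ceph/ceph-osd.*.log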

Re: [ceph-users] giant release osd down

2014-11-02 Thread Christian Balzer
On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote: > On Mon, 3 Nov 2014, Christian Balzer wrote: > > c) But wait, you specified a pool size of 2 in your OSD section! Tough > > luck, because since Firefly there is a bug that at the very least > > prevents OSD and RGW parameters from being parsed…

[ceph-users] rhel7 krbd backported module repo ?

2014-11-02 Thread Alexandre DERUMIER
Hi, I would like to know if a repository is available for RHEL7/CentOS7 with the latest krbd module backported. I know that such a module is available in the Ceph Enterprise repos, but is it available for non-subscribers? Regards, Alexandre
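
Before hunting for a backport repo, a quick sketch for checking what the running RHEL7/CentOS7 kernel already ships:

    # Is an rbd module available for the running kernel?
    modinfo rbd

    # Load it and confirm
    sudo modprobe rbd
    lsmod | grep rbd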

Re: [ceph-users] giant release osd down

2014-11-02 Thread Mark Kirkwood
On 03/11/14 14:56, Christian Balzer wrote: On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote: On Mon, 3 Nov 2014, Christian Balzer wrote: c) But wait, you specified a pool size of 2 in your OSD section! Tough luck, because since Firefly there is a bug that at the very least prevents OSD…

Re: [ceph-users] giant release osd down

2014-11-02 Thread Ian Colle
Christian, Why are you not fond of ceph-deploy? Ian R. Colle Global Director of Software Engineering Red Hat (Inktank is now part of Red Hat!) http://www.linkedin.com/in/ircolle http://www.twitter.com/ircolle Cell: +1.303.601.7713 Email: ico...@redhat.com - Original Message - From: "Chri…

[ceph-users] ceph version 0.79, rbd flatten report Segmentation fault (core dumped)

2014-11-02 Thread duan.xufeng
root@CONTROLLER-4F:~# rbd -p volumes flatten f3e81ea3-1d5b-487a-a55e-53efff604d54_disk
*** Caught signal (Segmentation fault) **
 in thread 7fe99984f700
 ceph version 0.79 (4c2d73a5095f527c3a2168deb5fa54b3c8991a6e)
 1: (()+0x22a4f) [0x7fe9a1745a4f]
 2: (()+0x10340) [0x7fe9a00f2340]
 3: (librbd::ai…
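
For context, flatten only applies to a cloned image; a sketch of the usual sequence, using the image name from the report:

    # Does the image have a parent? (flatten errors out otherwise)
    rbd info volumes/f3e81ea3-1d5b-487a-a55e-53efff604d54_disk | grep parent

    # Flatten copies the parent's data into the clone, then detaches it
    rbd -p volumes flatten f3e81ea3-1d5b-487a-a55e-53efff604d54_disk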