Re: [ceph-users] ceph -s slow return result

2015-03-29 Thread Chu Duc Minh
Thank you very much!

On 29 Mar 2015 11:25, "Kobi Laredo" wrote:
> I'm glad it worked.
> You can set a warning to catch this early next time (1GB):
>
> mon leveldb size warn = 10
>
> Kobi Laredo
> Cloud Systems Engineer | (408) 409-KOBI
>
> On Fri, Mar 27, 2015 at 5:45 PM, Chu D…
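For reference, the warning threshold mentioned above lives in ceph.conf. A minimal sketch, assuming the option takes a value in bytes (so 1073741824 for the 1 GB the message suggests; the exact figure in the quoted mail is truncated, this one is illustrative):

    [mon]
    # Raise a "ceph health" warning once the monitor's leveldb store
    # grows past this many bytes (1073741824 = 1 GiB, illustrative).
    mon leveldb size warn = 1073741824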

Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes

2015-03-29 Thread Martin Millnert
On Thu, Mar 26, 2015 at 12:36:53PM -0500, Mark Nelson wrote:
> Having said that, small nodes are absolutely more expensive per OSD
> as far as raw hardware and power/cooling goes.

The smaller the volume manufacturers move on those units, the worse the margin typically is (from the buyer's side). Also, CPUs t…

Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes

2015-03-29 Thread Nick Fisk
There's probably a middle ground where you get the best of both worlds: maybe 2-4 OSDs per compute node alongside dedicated Ceph nodes. That way you get a bit of extra storage and can still use lower-end CPUs, but don't have to worry so much about resource contention.

> -Original Message…
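One practical way to keep that contention bounded on a shared node (an illustrative sketch, not from the thread; the core list is a placeholder for your own topology) is to pin the OSD daemons to a dedicated set of cores and leave the rest to the hypervisor:

    # Pin every running ceph-osd process to cores 0-3, leaving the
    # remaining cores for qemu/KVM guests (requires util-linux taskset).
    for pid in $(pidof ceph-osd); do
        taskset -cp 0-3 "$pid"
    done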

Re: [ceph-users] ceph cluster on docker containers

2015-03-29 Thread Sebastien Han
You can have a look at: https://github.com/ceph/ceph-docker

> On 23 Mar 2015, at 17:16, Pavel V. Kaygorodov wrote:
>
> Hi!
>
> I'm running a Ceph cluster packed into a number of Docker containers.
> There are two things you need to know:
>
> 1. Ceph OSDs use FS attributes, which may no…
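To give a flavour of what that repo provides (a sketch only; the image name, daemon argument, and environment variables follow the ceph-docker README conventions of the time and may differ by release), bootstrapping a monitor looks roughly like:

    # Run a Ceph monitor from the ceph/daemon image. MON_IP and
    # CEPH_PUBLIC_NETWORK are placeholders for your own network.
    docker run -d --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph:/var/lib/ceph \
        -e MON_IP=192.168.0.10 \
        -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
        ceph/daemon mon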

[ceph-users] Ceph osd is all up and in, but every pg is incomplete

2015-03-29 Thread Kai KH Huang
Hi, all. I'm a newbie to Ceph and just set up a whole new Ceph cluster (0.87) with two servers, but its status is always a warning:

    [root@serverA ~]# ceph osd tree
    # id    weight  type name       up/down reweight
    -1      62.04   root default
    -2      36.4            host serverA
    0       3.64…
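For anyone triaging a similar report, the standard first checks (plain Ceph CLI, nothing specific to this cluster) are:

    ceph health detail          # lists the stuck PGs and the reason
    ceph pg dump_stuck unclean  # PGs that cannot reach active+clean
    ceph osd dump | grep pool   # shows each pool's size/min_size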

Re: [ceph-users] Ceph osd is all up and in, but every pg is incomplete

2015-03-29 Thread Yueliang
Hi Kai KH,

"ceph -s" reports "493 pgs undersized". I guess you created the pool with the default parameter size=3, but you only have two hosts, so there are not enough hosts to serve the pool. You should add a host, set size=2 when creating the pool, or modify the CRUSH rule.

--
Yueliang
Sent with Airmail

On M…
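Concretely, for an already-existing pool that would be (standard ceph CLI; "rbd" is a placeholder pool name, and whether min_size=1 is acceptable is a durability trade-off to weigh):

    # Shrink replication on an existing pool to fit a two-host cluster:
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1   # allow I/O with a single copy up

    # Or make size 2 the default for pools created later (ceph.conf):
    #   [global]
    #   osd pool default size = 2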

Re: [ceph-users] Ceph osd is all up and in, but every pg is incomplete

2015-03-29 Thread Kai KH Huang
Thanks for the quick response, and it seems to work! But what I expect to have is replica number = 3 on two servers (one host stores 2 copies and the other stores the 3rd one, to deal with disk failure rather than only server failure). Is there a simple way to configure that, rather than bui…
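The non-simple route being alluded to is a custom CRUSH rule. A rough, untested sketch of one that spreads 3 replicas across 2 hosts (rule name and ruleset number are placeholders; syntax per the CRUSH map text format of that Ceph generation):

    rule replicated_two_hosts {
            ruleset 1
            type replicated
            min_size 2
            max_size 3
            step take default
            # Pick both hosts, then up to 2 OSDs inside each; with
            # size=3 the first three candidates are used, giving
            # 2 copies on one host and 1 on the other.
            step choose firstn 0 type host
            step chooseleaf firstn 2 type osd
            step emit
    }

Applying it means dumping the map with "ceph osd getcrushmap", decompiling and recompiling with crushtool, and re-injecting with "ceph osd setcrushmap", which is presumably the non-trivial work the question hopes to avoid.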

Re: [ceph-users] Ceph osd is all up and in, but every pg is incomplete

2015-03-29 Thread Yueliang
I think there is no other way. :)

--
Yueliang
Sent with Airmail

On March 30, 2015 at 13:17:55, Kai KH Huang (huangk...@lenovo.com) wrote:
Thanks for the quick response, and it seems to work! But what I expect to have is replica number = 3 on two servers (one host stores 2 copies, and the ot…