Re: [ceph-users] Basic questions

2013-07-26 Thread Hariharan Thantry
> 3 out of 4 is a majority too. Some people like using an odd number of monitors, since you never have an equal number of monitors that are up/down; however, this isn't a requirement for Paxos. 3 out of 4 and 3 out of 5 both constitute a majority. On Fri,
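
A quick sketch of the simple-majority rule described above: a Paxos quorum needs floor(n/2) + 1 monitors, which is why 3 of 4 and 3 of 5 both work. A minimal shell check (the monitor counts here are just examples):

    for n in 3 4 5; do
        echo "monitors: $n, quorum needs: $(( n / 2 + 1 ))"
    done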

Re: [ceph-users] Basic questions

2013-07-26 Thread Hariharan Thantry
> deadlock. (c) Someone else can probably answer that better than me. (d) At least three. Paxos requires a simple majority, so 2 out of 3 is sufficient. See http://ceph.com/docs/master/rados/configuration/mon-config-ref/#background, particularly the monitor quorum section.
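
To see which monitors are actually in quorum on a running cluster, the ceph CLI can report quorum state directly; a minimal sketch, assuming a reachable cluster and the default admin keyring:

    ceph mon stat                               # one-line summary of monitors and quorum
    ceph quorum_status --format json-pretty     # full quorum detail, including the leader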

[ceph-users] Basic questions

2013-07-24 Thread Hariharan Thantry
Hi folks, Some very basic questions. (a) Can I run more than one Ceph cluster on the same node (assume that I have no more than one monitor per node, but storage is contributed by one node into more than one cluster)? (b) Are there any issues with running Ceph clients on the same node as the other Ceph
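
On (a), Ceph distinguishes clusters by name: the --cluster flag selects which /etc/ceph/$cluster.conf a command reads, so two clusters can share a node as long as their conf files, ports, and data paths don't collide. A minimal sketch, reusing the "test" cluster name and hostnames from this thread (file placement is an assumption):

    ceph-deploy --cluster test new ceph-1 ceph-2 ceph-3   # writes test.conf locally
    sudo cp test.conf /etc/ceph/test.conf                 # assumed location for $cluster.conf
    ceph --cluster test -s    # status of the "test" cluster
    ceph -s                   # status of the default "ceph" cluster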

[ceph-users] ceph-deploy

2013-07-23 Thread Hariharan Thantry
I'm seeing quite a few errors with ceph-deploy that make me wonder if the tool is stable. For instance, ceph-deploy disk list <node> returns a partial set of disks, misses a few partitions, and returns incorrect partitions (for XFS types) that aren't listed by parted. ceph-deploy osd prepare <node>:/dev/sd{a}
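
For reference, the invocation shapes ceph-deploy expected for these subcommands at the time; hostnames and devices below are examples, not taken from the original message:

    ceph-deploy disk list ceph-1               # enumerate disks/partitions on a node
    ceph-deploy disk zap ceph-1:/dev/sdb       # WARNING: destroys all data on sdb
    ceph-deploy osd prepare ceph-1:/dev/sdb1   # form is host:disk[:journal]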

[ceph-users] Errors on OSD creation

2013-07-23 Thread Hariharan Thantry
Following up: I kept the default "ceph" name for the cluster and didn't muck with any of the defaults in the ceph.conf file (for auth settings). Using ceph-deploy to prepare an OSD resulted in the following error. It created a 1 GB journal file on the mount I had specified, and I do see a new partition being
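
A couple of sanity checks one can run after osd prepare to confirm what was actually created; the journal path and device below are placeholders, not from the original message:

    ls -lh /srv/osd0/journal      # the journal file ceph-deploy created (path is an example)
    sudo parted /dev/sdb print    # inspect the newly created partition table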

[ceph-users] using ceph-deploy with no authentication

2013-07-23 Thread Hariharan Thantry
Hi, I'm trying to set up a 3-node Ceph cluster using ceph-deploy from an admin machine (VM box). First, I did the following from the admin node: 1. ceph-deploy --cluster test new ceph-1 ceph-2 ceph-3 {3 monitors} 2. Edited the resulting test.conf file to set auth supported = none 3. Then did $ce
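
For reference, disabling authentication in the generated conf would look roughly like this; the three "required" options are the post-Cuttlefish spellings of the legacy "auth supported" knob used in this thread:

    [global]
    auth supported = none           # legacy option, as edited above
    auth cluster required = none    # newer equivalents
    auth service required = none
    auth client required = none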

[ceph-users] Taking down a ceph-cluster

2013-07-16 Thread Hariharan Thantry
Hi folks, Just a bit confused about how I'd go about taking down a Ceph cluster. I assume that unmounting all the clients (rbd, fuse, fs) and running ceph-deploy purgedata for each of the nodes that make up the Ceph storage cluster should do the trick, correct? However, executing ceph -s immediately
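
A rough teardown order, assuming the sysvinit scripts of that era (hostnames are examples, and the purge subcommand may depend on the ceph-deploy version):

    # 1. unmount/unmap all clients first (umount for fuse/kernel mounts, rbd unmap)
    sudo service ceph -a stop                     # stop daemons on all nodes
    ceph-deploy purgedata ceph-1 ceph-2 ceph-3    # remove /var/lib/ceph contents and conf data
    ceph-deploy forgetkeys                        # drop locally cached keyrings
    ceph-deploy purge ceph-1 ceph-2 ceph-3        # optionally remove the packages too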

Re: [ceph-users] Problems mounting the ceph-FS

2013-07-16 Thread Hariharan Thantry
192 pgs: 192 active+clean; 197 MB data, 14166 MB used, 1242 GB / 1323 GB avail; mdsmap e527: 1/1/1 up {0=ceph-1=up:active} On Tue, Jul 16, 2013 at 10:37 AM, Gregory Farnum wrote: > On Tue, Jul 16, 2013 at 10:29 AM, Hariharan Thantry wrote: >> Thanks, now I get a different

Re: [ceph-users] Problems mounting the ceph-FS

2013-07-16 Thread Hariharan Thantry
's some translation that needs to be done in userspace before the kernel sees the mount, which isn't happening. On Debian it's in the ceph-fs-common package. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Jul 16, 2013 at
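
The userspace translation Greg refers to is the mount.ceph helper; installing it on Debian/Ubuntu (package name per his reply) would look like this:

    sudo apt-get install ceph-fs-common    # provides the mount.ceph helper
    ls -l /sbin/mount.ceph                 # confirm the helper is present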

[ceph-users] Problems mounting the ceph-FS

2013-07-16 Thread Hariharan Thantry
While trying to execute these steps in the Ceph users guide, I get libceph errors: "no secret set (for auth_x protocol)" and "error -22 on auth protocol 2 init". Also, when providing the authentication keys (Step #3 below), I get the following error: bad option at 'secretfile=admin.secret'. Any ideas where
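
The "bad option at secretfile=" error is what mount prints when the mount.ceph helper is missing (see Greg's reply above). With the helper installed, a working invocation looks roughly like this, where the monitor address and paths are examples:

    sudo ceph auth get-key client.admin > admin.secret    # bare key only, no "key =" prefix
    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/path/to/admin.secret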

[ceph-users] Using ceph with SLES11 SP2

2013-07-03 Thread Hariharan Thantry
Hi folks, I'm trying to get a Ceph cluster going on machines running SLES11 SP2 with Xen. Ideally, I'd like it to work without a kernel upgrade (my current kernel is 3.0.13-0.27-xen), because we'd like to deploy this on some funky hardware (telco provider) that currently has this kernel version running
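
Whether the kernel clients work on 3.0.13 depends on what modules SUSE shipped in that build; the ceph filesystem and rbd block drivers landed upstream around 2.6.34 and 2.6.37 respectively, so a 3.0 kernel can have them. A quick check (commands are generic, not SLES-specific):

    uname -r             # e.g. 3.0.13-0.27-xen
    modinfo ceph rbd     # do the kernel client modules exist in this build?
    sudo modprobe rbd    # try loading the rbd block driver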