Re: [ceph-users] v0.61.2 'ceph df' can't use

2013-05-23 Thread Kelvin_Huang
Hi Sage & Joao, thanks for your reply! After checking, I found I had missed upgrading one package on the monitor node, which is why 'ceph df' couldn't be used. I upgraded it and restarted the mon daemons, and now 'ceph df' works. Thanks all - Kelvin > -Original Message- > From: Sage Weil [mailto:s...@inktank.com] > Sen

Re: [ceph-users] scrub error: found clone without head

2013-05-23 Thread Olivier Bonvalet
Not yet. I am keeping it for now. On Wednesday, 22 May 2013 at 15:50 -0700, Samuel Just wrote: > rb.0.15c26.238e1f29 > > Has that rbd volume been removed? > -Sam > > On Wed, May 22, 2013 at 12:18 PM, Olivier Bonvalet > wrote: > > 0.61-11-g3b94f03 (0.61-1.1), but the bug occurred with bobtail. > > > >

Re: [ceph-users] mon problems after upgrading to cuttlefish

2013-05-23 Thread Smart Weblications GmbH - Florian Wiessner
Hi, please do not forget to respond to the list (ceph-users@lists.ceph.com); find my answer below. On 23.05.2013 17:16, Bryan Stillwell wrote: > This is what I currently have configured: > > > # Ceph config file > > [global] > auth cluster required = none > auth service required

[ceph-users] radosgw with nginx

2013-05-23 Thread Erdem Agaoglu
Hi all, we are trying to run radosgw with nginx. We've found an example, https://gist.github.com/guilhem/4964818, and changed our nginx.conf like below: http { server { listen 0.0.0.0:80; server_name _; access_log off; location / {
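
(For readers following along, a minimal sketch of the shape such a config ends up with, based on the gist referenced above; the FastCGI socket path is an assumption and has to match the rgw socket path configured in ceph.conf.)

    server {
        listen 0.0.0.0:80;
        server_name _;
        access_log off;
        location / {
            fastcgi_pass_header Authorization;              # let radosgw see the S3 auth header
            fastcgi_pass_request_headers on;
            include fastcgi_params;
            fastcgi_pass unix:/var/run/ceph/radosgw.sock;   # assumed rgw socket path
        }
    }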

[ceph-users] ceph-deploy

2013-05-23 Thread Dewan Shamsul Alam
Hi, I tried ceph-deploy all day. I found that it has python-setuptools as a dependency. I knew about python-pushy. But is there any other dependency that I'm missing? The problems I'm getting are as follows: #ceph-deploy gatherkeys ceph0 ceph1 ceph2 returns the following error, Unable to find /etc/
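
(For anyone else hitting this: gatherkeys can only succeed after the monitors have been created and reached quorum, so the rough ordering looks like the sketch below. The steps are an assumption about a typical ceph-deploy run, not taken from the thread.)

    ceph-deploy new ceph0 ceph1 ceph2          # writes ceph.conf in the working directory
    ceph-deploy install ceph0 ceph1 ceph2      # install ceph on the nodes
    ceph-deploy mon create ceph0 ceph1 ceph2   # start the monitors and wait for quorum
    ceph-deploy gatherkeys ceph0               # bootstrap keyrings only exist after quorum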

[ceph-users] mkcephfs

2013-05-23 Thread Dewan Shamsul Alam
Hi, I had a running ceph cluster on bobtail, version 0.56.4; it is my test cluster. I upgraded it to 0.56.6, and now mkcephfs doesn't work with the same previously working configuration and the following command: /sbin/mkcephfs -a -c /etc/ceph/ceph.conf. The ceph.conf starts with: [global] auth supported = none
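
(For context, mkcephfs of that era expects a ceph.conf that spells out every daemon. A minimal sketch with placeholder hostnames follows; it is not the poster's actual file.)

    [global]
        auth supported = none
    [mon.a]
        host = ceph0                         # placeholder hostname
        mon addr = 192.168.128.10:6789
    [osd.0]
        host = ceph0
    [mds.a]
        host = ceph0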

[ceph-users] FW: About RBD

2013-05-23 Thread Mensah, Yao (CIV)
FYI From: Mensah, Yao (CIV) Sent: Wednesday, May 22, 2013 5:59 PM To: 'i...@inktank.com' Subject: About RBD Hello, I was doing some reading on your web site about ceph and what it is capable of. I have one question and maybe you can help with this: Can ceph RBD be used by 2 physical hosts at the sa

Re: [ceph-users] FW: About RBD

2013-05-23 Thread Dave Spano
Unless something changed, each RBD needs to be attached to one host at a time, like an iSCSI LUN. Dave Spano Optogenics - Original Message - From: "Yao Mensah (CIV)" To: ceph-users@lists.ceph.com Sent: Thursday, May 23, 2013 1:10:53 PM Subject: [ceph-users] FW: About RBD FYI

Re: [ceph-users] FW: About RBD

2013-05-23 Thread Gregory Farnum
You can attach an RBD device to multiple hosts, and if you don't use the cache then the RBD layer will even be coherent. But of course that's just the disk, so unless you're using a cluster-aware FS like OCFS2 on top, mounting from multiple places will blow up your data. -Greg Software Engine
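
(Concretely, a sketch of the two-host case Greg describes, with made-up image names; the only reason the double mount is safe here is the shared-disk filesystem.)

    rbd create shared --size 102400            # hypothetical 100 GB image in pool "rbd"
    # on host1 AND host2:
    rbd map rbd/shared                         # kernel client, appears as /dev/rbd/rbd/shared
    # from ONE host only:
    mkfs.ocfs2 /dev/rbd/rbd/shared
    # then on both hosts:
    mount /dev/rbd/rbd/shared /mnt             # safe only because OCFS2 coordinates access;
                                               # a non-cluster FS mounted twice would corrupt data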

Re: [ceph-users] mkcephfs

2013-05-23 Thread Sage Weil
Can you be more specific? How does it fail? A copy of the actual output would be ideal, thanks! sage On Thu, 23 May 2013, Dewan Shamsul Alam wrote: > Hi, > > I had a running ceph cluster on bobtail. It was on 0.56.4. It is my test > cluster. I upgraded it to 0.56.6, now mkcephfs doesn't work

Re: [ceph-users] FW: About RBD

2013-05-23 Thread Mensah, Yao (CIV)
Thank you very much for your prompt response… So basically I can’t use a cluster-aware tool like Microsoft CSV on the RBD, is that correct? What I am trying to understand is: can I have 2 physical hosts (maybe Dell PowerEdge 2950) *host1 with VM #0-10 *host2 with VM #10-20 and both of these

Re: [ceph-users] mkcephfs

2013-05-23 Thread Dewan Shamsul Alam
Hi, This is what I get while building the cluster: #/sbin/mkcephfs -a -c /etc/ceph/ceph.conf temp dir is /tmp/mkcephfs.yzl9PFOJYo preparing monmap in /tmp/mkcephfs.yzl9PFOJYo/monmap /usr/bin/monmaptool --create --clobber --add a 192.168.128.10:6789 --add b 192.168.128.11:6789 --add c 192.168.128.

Re: [ceph-users] RADOS Gateway Configuration

2013-05-23 Thread Daniel Curran
Hey John, thanks for the reply. I'll check out that other doc you have there. Just for future reference, do you know where ceph-deploy puts the ceph keyring? Daniel On Wed, May 22, 2013 at 7:19 PM, John Wilkins wrote: > Daniel, > > It looks like I need to update that portion of the docs too, as

Re: [ceph-users] RADOS Gateway Configuration

2013-05-23 Thread John Wilkins
It puts it in the same directory where you executed ceph-deploy. On Thu, May 23, 2013 at 10:57 AM, Daniel Curran wrote: > Hey John, > > Thanks for the reply. I'll check out that other doc you have there. Just for > future reference do you know where ceph-deploy puts the ceph keyring? > > Daniel >
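
(So after a successful run you would expect to find the keyrings next to the generated ceph.conf; roughly, and from memory rather than from the thread:)

    $ ls ./                                    # the directory ceph-deploy was run from
    ceph.conf  ceph.log  ceph.mon.keyring
    ceph.client.admin.keyring
    ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring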

[ceph-users] ZFS on RBD?

2013-05-23 Thread Tim Bishop
Hi all, I'm evaluating Ceph and one of my workloads is a server that provides home directories to end users over both NFS and Samba. I'm looking at whether this could be backed by Ceph provided storage. So to test this I built a single node Ceph instance (Ubuntu precise, ceph.com packages) in a V
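
(Roughly the shape of the setup being evaluated, sketched with made-up names: map an RBD image on the file server, build the ZFS pool on top of it, then share it out.)

    rbd create homes --size 204800             # hypothetical 200 GB image
    rbd map rbd/homes                          # shows up as e.g. /dev/rbd0
    zpool create tank /dev/rbd0                # ZFS pool backed by the RBD
    zfs create tank/home
    zfs set sharenfs=on tank/home              # NFS export; Samba shares via smb.conf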

[ceph-users] MDS dying on cuttlefish

2013-05-23 Thread Giuseppe 'Gippa' Paternò
Hi! I've got a cluster of two nodes on Ubuntu 12.04 with cuttlefish from the ceph.com repo. ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60) The MDS process is dying after a while with a stack trace, but I can't understand why. I reproduced the same problem on debian 7 with the same
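
(Not part of the thread, but for anyone debugging a similar crash: a rough sketch of turning up MDS logging to capture the assert. The section goes in ceph.conf on the MDS node; the restart command assumes the sysvinit script.)

    [mds]
        debug mds = 20
        debug ms = 1

    service ceph restart mds                 # or restart the specific mds.<id>
    tail -f /var/log/ceph/ceph-mds.*.log     # the stack trace lands here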

Re: [ceph-users] scrub error: found clone without head

2013-05-23 Thread Samuel Just
Do all of the affected PGs share osd.28 as the primary? I think the only recovery is probably to manually remove the orphaned clones. -Sam On Thu, May 23, 2013 at 5:00 AM, Olivier Bonvalet wrote: > Not yet. I keep it for now. > > On Wednesday, 22 May 2013 at 15:50 -0700, Samuel Just wrote: >> rb

Re: [ceph-users] scrub error: found clone without head

2013-05-23 Thread Olivier Bonvalet
No: pg 3.7c is active+clean+inconsistent, acting [24,13,39] pg 3.6b is active+clean+inconsistent, acting [28,23,5] pg 3.d is active+clean+inconsistent, acting [29,4,11] pg 3.1 is active+clean+inconsistent, acting [28,19,5] But I suppose that all those PGs *were* having osd.25 as primary (on the same
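
(The primary is the first OSD in each acting set quoted above, so only two of the four currently have osd.28 as primary. The quick way to pull this list is something like:)

    ceph health detail | grep inconsistent     # prints "pg X is ... acting [a,b,c]"; a is the primary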

Re: [ceph-users] scrub error: found clone without head

2013-05-23 Thread Samuel Just
Can you send the filenames in the pg directories for those 4 pgs? -Sam On Thu, May 23, 2013 at 3:27 PM, Olivier Bonvalet wrote: > No : > pg 3.7c is active+clean+inconsistent, acting [24,13,39] > pg 3.6b is active+clean+inconsistent, acting [28,23,5] > pg 3.d is active+clean+inconsistent, acting [
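
(For reference, on that era's filestore OSDs the PG contents live under the OSD data directory, so collecting the filenames looks roughly like the sketch below; the paths assume the default /var/lib/ceph layout.)

    # e.g. for pg 3.7c, whose primary is osd.24:
    ls /var/lib/ceph/osd/ceph-24/current/3.7c_head/
    # or gather all four at once on each host that holds a replica:
    find /var/lib/ceph/osd/ceph-*/current/3.{7c,6b,d,1}_head/ -type f 2>/dev/null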

Re: [ceph-users] mkcephfs

2013-05-23 Thread Dewan Shamsul Alam
Hi, the previous log is based on cuttlefish; this one is based on bobtail. I'm not using cephx; maybe that's what's causing the problem? temp dir is /tmp/mkcephfs.xf5TsinRsL preparing monmap in /tmp/mkcephfs.xf5TsinRsL/monmap /usr/bin/monmaptool --create --clobber --add a 192.168.128.10:6789 --add
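
(For reference, a sketch of the two ways cephx gets disabled in that era's configs; which form the poster's file uses isn't visible in the excerpt.)

    [global]
        auth supported = none              # older, single-option form

    # or, equivalently, the newer three-option form:
    [global]
        auth cluster required = none
        auth service required = none
        auth client required = none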

Re: [ceph-users] ceph-deploy

2013-05-23 Thread Dewan Shamsul Alam
I just found that #ceph-deploy gatherkeys ceph0 ceph1 ceph2 works only if I have bobtail. With cuttlefish it can't find ceph.client.admin.keyring, and then when I try this on bobtail, it says: root@cephdeploy:~/12.04# ceph-deploy osd create ceph0:/dev/sda3 ceph1:/dev/sda3 ceph2:/dev/sda3 ceph-disk: Err
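
(One debugging step, not from the thread: splitting osd create into its prepare and activate halves often makes it clearer which step the ceph-disk error comes from.)

    ceph-deploy osd prepare  ceph0:/dev/sda3 ceph1:/dev/sda3 ceph2:/dev/sda3
    ceph-deploy osd activate ceph0:/dev/sda3 ceph1:/dev/sda3 ceph2:/dev/sda3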

Re: [ceph-users] mon problems after upgrading to cuttlefish

2013-05-23 Thread Bryan Stillwell
On Thu, May 23, 2013 at 9:58 AM, Smart Weblications GmbH - Florian Wiessner wrote: > you may need to update your [mon.a] section in your ceph.conf like this: > > > [mon.a] > mon data = /var/lib/ceph/mon/ceph-a/ That didn't seem to make a difference; it kept trying to use ceph-admin. I tri
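
(In other words, the suggestion was a per-monitor section roughly like the following; the host line is a placeholder added here for completeness.)

    [mon.a]
        host = ceph-a-hostname             # placeholder for the node running mon.a
        mon data = /var/lib/ceph/mon/ceph-a/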