[ceph-users] Ceph MDS replaying journal

2014-03-17 Thread Wong Ming Tat
Hi, I am receiving the MDS "replaying journal" error shown below. I hope someone can give me some information to solve this problem.

# ceph health detail
HEALTH_WARN mds cluster is degraded
mds cluster is degraded
mds.mon01 at x.x.x.x:6800/26426 rank 0 is replaying journal

# ceph -s cluster
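
(For reference, two quick ways to see which MDS rank is stuck in replay; this is a sketch assuming the admin keyring is available and the daemon name mon01 from the post:)

    # one-line summary of MDS states, e.g. "{0=mon01=up:replay}"
    ceph mds stat
    # full MDS map, including standby daemons
    ceph mds dump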

Re: [ceph-users] erasure coding testing

2014-03-17 Thread Loic Dachary
Hi Gruher, You can wait for 0.78 this week, as Ian suggested. If you feel more adventurous, there are various ways to test and contribute back, as described here: http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/18760 Cheers On 17/03/2014 03:11, Gruher, Joseph R wrote: > Hey all- > >
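
(As a rough sketch of what trying erasure coding looks like with the firefly-era commands; the profile name, k/m values, and PG counts below are illustrative, not from the thread:)

    # define an erasure-code profile with 2 data chunks and 1 coding chunk
    ceph osd erasure-code-profile set testprofile k=2 m=1
    # create an erasure-coded pool using that profile
    ceph osd pool create ecpool 12 12 erasure testprofile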

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread Wong Ming Tat
Hi, Additional info.

[5165030.941804] init: ceph-mds (ceph/mon01) main process (2264) killed by ABRT signal
[5165030.941919] init: ceph-mds (ceph/mon01) main process ended, respawning
[5165040.907291] init: ceph-mds (ceph/mon01) main process (2302) killed by ABRT signal
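
(Since upstart keeps respawning the crashing daemon, stopping the job first makes it easier to capture a clean log or a core dump. A sketch, assuming the stock Ubuntu upstart jobs for Ceph:)

    # stop the respawn loop for this one MDS (or "stop ceph-mds-all")
    stop ceph-mds id=mon01
    # optionally allow core dumps before restarting the daemon by hand
    ulimit -c unlimited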

[ceph-users] ceph compile with zfs

2014-03-17 Thread Tim Zhang
Hi guys, I want to compile Ceph RPM packages with ZFS support on CentOS. The Ceph version is 0.72. 1. First I installed zfs-devel, and the relevant files are under /usr/include/libzfs/:

# ls /usr/include/libzfs/
libnvpair.h  libuutil.h  libzfs.h  linux  zfeature_common.h  zfs_deleg.h
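
(For a plain source build, the autotools configure script has a ZFS switch; a minimal sketch, assuming this Ceph version's configure exposes --with-libzfs and that RPM packaging would need the same flag wired into the spec file:)

    ./autogen.sh
    ./configure --with-libzfs
    make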

[ceph-users] Mounting with dmcrypt still fails

2014-03-17 Thread Michal Luczak
Hi, I tried to use a whole new blank disk to create two separate partitions (one for data and a second for the journal) and use dmcrypt, but there is a problem using this. It looks like there is a problem with mounting or formatting the partitions. The OS is Ubuntu 13.04 with Ceph v0.72 (Emperor). I used c
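
(For context, ceph-deploy's dmcrypt support is driven by flags on "osd prepare"; a sketch with illustrative host and disk names, and the key directory given explicitly:)

    # prepare a dmcrypt-encrypted OSD, data and journal on the same disk
    ceph-deploy osd prepare ceph-node0:sdb --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys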

[ceph-users] Migrate filesystem and volumes to Ceph

2014-03-17 Thread Shengjie Min
Hi guys, We are trying to move the storage of our cloud platform (OpenStack based) towards Ceph. All the VMs are currently running their filesystems and volumes on the local disk of the host (where the hypervisor is). We are trying to find an efficient and safe way to do the migration. If possible, it's bet
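
(One common approach for moving an existing disk image into a Ceph pool is qemu-img, which can write straight to RBD when built with rbd support; the paths, pool, and image names below are invented for illustration:)

    # convert a local qcow2 image directly into a raw RBD image
    qemu-img convert -f qcow2 -O raw /var/lib/nova/instances/vm1/disk rbd:volumes/vm1-disk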

Re: [ceph-users] qemu-rbd

2014-03-17 Thread Sebastien Han
There is an RBD engine for fio; have a look at http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html

Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bi
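
(A minimal fio job file along the lines of the article above; it assumes fio was built with rbd support, and the pool/image names are placeholders for an RBD image that must already exist:)

    [global]
    # rbd ioengine talks to the cluster directly via librbd
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio_test
    rw=randwrite
    bs=4k
    runtime=60
    time_based

    [rbd_iodepth32]
    iodepth=32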

Re: [ceph-users] qemu non-shared storage migration of nova instances?

2014-03-17 Thread Sebastien Han
Hi, I use the following live migration flags: VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST It deletes the libvirt.xml and re-creates it on the other side. Cheers.

Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."
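
(In an OpenStack deployment these flags are typically set in nova.conf; a sketch, noting that which section holds the option, DEFAULT or [libvirt], depends on the Nova release:)

    # nova.conf
    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST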

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread John Spray
Hello, To understand what's gone wrong here, we'll need to increase the verbosity of the logging from the MDS service and then try starting it again.

1. Stop the MDS service (on Ubuntu this would be "stop ceph-mds-all")
2. Move your old log file away so that we will have a fresh one: mv /var/lo
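
(The usual way to raise MDS log verbosity is in ceph.conf before restarting the daemon; a sketch of the relevant settings:)

    # in /etc/ceph/ceph.conf, on the MDS host
    [mds]
    debug mds = 20
    debug journaler = 10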

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread John Spray
Clarification: in step 1, stop the MDS service on *all* MDS servers (I notice there are standby daemons in the "ceph status" output). John

On Mon, Mar 17, 2014 at 4:45 PM, John Spray wrote:
> Hello,
>
> To understand what's gone wrong here, we'll need to increase the
> verbosity of the logging f

Re: [ceph-users] Mounting with dmcrypt still fails

2014-03-17 Thread Michael Lukzak
Hi again, I used another host for the OSD (with the same name), but now with Debian 7.4.

ceph-deploy osd prepare ceph-node0:sdb --dmcrypt
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy osd prepare ceph-node0:sdb --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread John Spray
Thanks for sending the logs so quickly.

626 2014-03-18 00:58:01.009623 7fba5cbbe700 10 mds.0.journal EMetaBlob.replay sessionmap v8632368 -(1|2) == table 7235981 prealloc [141df86~1] used 141db9e
627 2014-03-18 00:58:01.009627 7fba5cbbe700 20 mds.0.journal (session prealloc [13734

Re: [ceph-users] qemu non-shared storage migration of nova instances?

2014-03-17 Thread Blair Bethwaite
> Message: 20
> Date: Mon, 17 Mar 2014 16:05:17 +0100
> From: Sebastien Han
> To: "Don Talton (dotalton)"
> Cc: "ceph-users@lists.ceph.com"
> Subject: Re: [ceph-users] qemu non-shared storage migration of nova
> instances?
> Message-ID: <86ed699a-de8b-4864-860f-e324a6f11...@enovance.co

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread Luke Jing Yuan
Hi John, Thanks for the info and the instructions to solve the problem. However, how could this bug have been triggered in the first place? In our search through the logs, we noticed something happened between the MDS and the client before the error messages started to pop up. Regards, Luke > On Mar

[ceph-users] firefly timing

2014-03-17 Thread Sage Weil
Hi everyone, It's taken longer than expected, but the tests for v0.78 are calming down and it looks like we'll be able to get the release out this week. However, we've decided NOT to make this release firefly. It will be a normal development release. This will be the first release that includ

[ceph-users] Please Help

2014-03-17 Thread Ashraful Arefeen
Hi, I want to use Ceph for testing purposes. While setting the system up, I have faced some problems with keyrings. Whenever I ran this command from my admin node (ceph-deploy gatherkeys node01), I got these warnings:

[ceph_deploy.gatherkeys][WARNIN] Unable to find /etc/ceph/ceph.client.admin.keyring o
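
(A common cause is that the monitors never formed a quorum, so the keyrings were never generated. A sketch of the usual fix, assuming a ceph-deploy recent enough to have "mon create-initial"; on older versions, "ceph-deploy mon create node01" followed by gatherkeys does the same job:)

    # bootstrap the initial monitor(s) and generate the keyrings
    ceph-deploy mon create-initial
    # then collect them on the admin node
    ceph-deploy gatherkeys node01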

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread Luke Jing Yuan
Hi John, Is there a way for us to verify that step 2 is working properly? We have seen the process running for almost 4 hours, but there is no indication of when it will end. Thanks. Regards, Luke

-Original Message-
From: John Spray [mailto:john.sp...@inktank.com]
Sent: Tuesday, 18 March,
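
(One way to check whether replay is actually advancing is to sample the MDS perf counters via the admin socket and compare the journal positions between samples; the socket path below assumes the default layout and the daemon name mon01 from this thread:)

    # dump perf counters from the running MDS; run twice and diff
    ceph --admin-daemon /var/run/ceph/ceph-mds.mon01.asok perf dump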