[ceph-users] Ceph OSD with OCFS2

2015-06-05 Thread gjprabu
Dear Team, we are new to Ceph, with two OSDs and two clients. Both clients mount an OCFS2 file system. When I transfer 500MB of data on a client, it shows double the size, 1GB, after the transfer finishes. Is this behaviour correct, or is there a solution for this?

[ceph-users] MDS closing stale session

2015-06-05 Thread 谷枫
Hi everyone, I have a five-node Ceph cluster with CephFS, mounted with the ceph-fuse tool. I hit a serious problem with no warning: on one of the nodes the ceph-fuse process died and the partition mounted with ceph-fuse became unavailable. Running ls on the ceph partition, it's lik
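
(Editor's note, not part of the original mail: a minimal recovery sketch for a dead ceph-fuse mount, assuming a mount point of /mnt/cephfs and a reachable monitor at 10.3.1.1:6789; both are illustrative placeholders, not taken from the thread.)
$ sudo umount -l /mnt/cephfs                    # lazy-unmount the dead FUSE mount (or: fusermount -uz /mnt/cephfs)
$ sudo ceph-fuse -m 10.3.1.1:6789 /mnt/cephfs   # remount via ceph-fuse against a monitor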

Re: [ceph-users] MDS closing stale session

2015-06-05 Thread 谷枫
Sorry, I sent that mail carelessly; continuing here. The mds error is: 2015-06-05 09:59:25.012130 7fa1ed118700 0 -- 10.3.1.5:6800/1365 >> 10.3.1.4:0/18748 pipe(0x5f81000 sd=22 :6800 s=2 pgs=1252 cs=1 l=0 c=0x4f935a0).fault with nothing to send, going to standby 2015-06-05 10:03:40.767822 7fa1f0a27700 0 log

Re: [ceph-users] MDS closing stale session

2015-06-05 Thread John Spray
On 05/06/2015 15:41, 谷枫 wrote: sorry i send this mail careless, continue The mds error is : 2015-06-05 09:59:25.012130 7fa1ed118700 0 -- 10.3.1.5:6800/1365 >> 10.3.1.4:0/18748 pipe(0x5f81000 sd=22 :6800 s=2 pgs=1252 cs=1 l=0 c=0x4f935a0)

Re: [ceph-users] MDS closing stale session

2015-06-05 Thread 谷枫
This is /var/log/ceph/ceph-client.admin.log, but I found its timestamps are later than the fault. 2015-06-05 10:29:07.180531 7f3f601dd7c0 0 ceph version 0.94.1 (e4bfad3a3c51054df7e234234ac8d0bf9be972ff), process ceph-fuse, pid 14002 2015-06-05 10:29:07.186763 7f3f601dd7c0 -1 init, newargv = 0x2c846e0 n

Re: [ceph-users] MDS closing stale session

2015-06-05 Thread 谷枫
Sorry, I sent the wrong apport log, because I hit the same problem twice today. This is the apport log from the right time: ERROR: apport (pid 7601) Fri Jun 5 09:58:45 2015: called for pid 18748, signal 6, core limit 0 ERROR: apport (pid 7601) Fri Jun 5 09:58:45 2015: executable: /usr/bin/ceph-f

Re: [ceph-users] Ceph OSD with OCFS2

2015-06-05 Thread Somnath Roy
Yes, Ceph will be writing twice: once for the journal and once for the actual data. Since you configured the journal on the same device, this is what you end up seeing if you are monitoring the device BW. Thanks & Regards Somnath From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf O
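
(Editor's note, not part of the original mail: a quick sketch for confirming that a FileStore OSD's journal lives on the same device and observing the resulting double write; the OSD id 0 and the default data path are assumptions, not taken from the thread.)
$ ls -l /var/lib/ceph/osd/ceph-0/journal   # the symlink (or file) shows where the journal actually lives
$ iostat -x 1                              # device bandwidth; with a colocated journal, expect roughly 2x the client write rate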

Re: [ceph-users] Old vs New pool on same OSDs - Performance Difference

2015-06-05 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Somnath Roy > Sent: 04 June 2015 22:41 > To: Nick Fisk; 'Gregory Farnum' > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Old vs New pool on same OSDs - Performance > Difference

Re: [ceph-users] krbd and blk-mq max queue depth=128?

2015-06-05 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Ilya Dryomov > Sent: 04 June 2015 09:21 > To: Nick Fisk > Cc: ceph-users > Subject: Re: [ceph-users] krbd and blk-mq max queue depth=128? > > On Wed, Jun 3, 2015 at 8:03 PM, Nick Fisk wro

Re: [ceph-users] Old vs New pool on same OSDs - Performance Difference

2015-06-05 Thread Somnath Roy
You don't need to enable debug_optracker. Basically, I was talking about the admin socket perf dump only, which you seem to be dumping already. I meant that in recent versions there is an optracker enable/disable flag, and if it is disabled the perf dump will not give you proper data. Hopefully, no scrub
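
(Editor's note, not part of the original mail: the admin-socket commands being discussed, as a hedged sketch; osd.0 is a placeholder, and osd_enable_op_tracker is believed to be the enable/disable flag referred to here.)
$ ceph daemon osd.0 perf dump                               # dump counters via the admin socket
$ ceph daemon osd.0 config get osd_enable_op_tracker        # check whether the op tracker is enabled
$ ceph daemon osd.0 config set osd_enable_op_tracker true   # turn it back on if it was disabled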

Re: [ceph-users] Recovering from multiple OSD failures

2015-06-05 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Did you try to deep-scrub the PG after copying it to 29? - Robert LeBlanc GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Thu, Jun 4, 2015 at 10:26 PM, Aaron Ten Clay wrote: > Hi Cephers, > > I recently had a

Re: [ceph-users] Recovering from multiple OSD failures

2015-06-05 Thread Aaron Ten Clay
Robert, I did try scrubbing and deep-scrubbing - it seems the OSD is ignoring deep-scrub and scrub commands for the PG (I imagine because the state does not include "active".) However, I came across this blog post last night and am currently pursuing: https://ceph.com/community/incomplete-pgs-oh-

Re: [ceph-users] Recovering from multiple OSD failures

2015-06-05 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 When copying to the primary OSD, a deep-scrub has worked for me, but I've not done this exact scenario. Did you try bouncing the OSD process? - Robert LeBlanc GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Fr
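
(Editor's note, not part of the original mail: the commands behind the suggestions above, sketched with placeholders; the PG id 2.5f is invented for illustration, osd.29 comes from the thread, and the restart syntax depends on the init system in use.)
$ ceph pg deep-scrub 2.5f               # ask the primary to deep-scrub the PG (2.5f is a placeholder PG id)
$ ceph pg 2.5f query                    # check the PG state afterwards
$ sudo /etc/init.d/ceph restart osd.29  # "bounce" the OSD process (sysvinit; on upstart: sudo restart ceph-osd id=29)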

[ceph-users] ceph-disk activate /dev/sda1 seem to get stuck?

2015-06-05 Thread Jelle de Jong
Hello everybody, I am new to Ceph and I am trying to build a cluster for testing. After running: ceph-deploy osd prepare --zap-disk ceph02:/dev/sda it seems the udev rules find the disk and try to activate it, but then it gets stuck: http://paste.debian.net/plain/204723 Does someone know what is go
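
(Editor's note, not part of the original mail: a few hedged troubleshooting commands for a stuck ceph-disk activation, run on the OSD host ceph02; the device path matches the one above, everything else is generic.)
$ ceph-disk list                              # how ceph-disk currently sees the partitions
$ sudo udevadm monitor --environment --udev   # watch the udev events that trigger activation
$ sudo ceph-disk activate /dev/sda1           # re-run the activation by hand to see where it hangs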

[ceph-users] rbd delete operation hangs, ops blocked

2015-06-05 Thread Ugis
Hi, I recently had a problem with a flapping hdd, and as a result I need to delete a broken rbd. The problem is that all operations towards this rbd get stuck. I cannot even delete the rbd - it sits at 6% done - and I found this line in one of the osd logs: 2015-06-06 08:03:31.770812 7fe5002c2700 0 log_channel(default) log [WRN]
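
(Editor's note, not part of the original mail: a hedged sketch for locating the blocked requests behind a hung rbd rm; osd.12 is a placeholder, the real OSD id should come from ceph health detail.)
$ ceph health detail                     # shows which OSDs are reporting blocked/slow requests
$ ceph daemon osd.12 dump_ops_in_flight  # run on that OSD's host to see the stuck ops (osd.12 is a placeholder)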

Re: [ceph-users] OSD trashed by simple reboot (Debian Jessie, systemd?)

2015-06-05 Thread Mark Kirkwood
Righty - I'll see if I can replicate what you see if I set up a 0.80.9 cluster using the same workstation hardware (WD Raptors and Intel 520s) that showed the issue previously at 0.83 (I wonder if I ever tried a fresh install using the 0.80.* tree)... May be a few days... On 05/06/15 16:4

Re: [ceph-users] rbd delete operation hangs, ops blocked

2015-06-05 Thread Ugis
Update: I wonder if I can follow the advice here: http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image It shows how to delete rbd objects directly via rados: $rados -p rbd rm rbd_id.rbdname $rados -p rbd rm rbd_header.18b3c2ae8944a $rados -p temp1 ls | grep '^rbd_data.18b3c2ae8944a.
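
(Editor's note, not part of the original mail: the general pattern from the linked post, sketched with the prefix quoted in the thread; the pool name and exact prefix must be confirmed with rbd info on the broken image, and the object-by-object removal below is slow but safe.)
$ rbd info rbd/rbdname | grep block_name_prefix                                    # gives the rbd_data.<prefix> to match
$ rados -p rbd ls | grep '^rbd_data.18b3c2ae8944a.' | xargs -n 1 rados -p rbd rm   # remove the data objects one by one
$ rados -p rbd rm rbd_header.18b3c2ae8944a
$ rados -p rbd rm rbd_id.rbdname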