[ceph-users] OSD failing to restart

2015-05-03 Thread sourabh saryal
Hi, the OSD is failing to start with $ /etc/init.d/ceph start osd.119. Errors from $ tail -f /var/lib/ceph/osd/ceph-119/ceph-osd.119.log | grep -i err: -1/-1 (stderr threshold) 2015-05-03 11:38:44.366984 7f0794e5b780 -1 journal _check_disk_write_cache: fclose error: (61) No data available 2015-05-0
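For reference, a minimal sketch of inspecting the disk write cache that _check_disk_write_cache looks at; /dev/sdX here is a placeholder for the journal's actual device:

  # query the drive's current write-caching setting
  $ hdparm -W /dev/sdX
  # optionally disable the volatile write cache if it is suspect
  $ hdparm -W 0 /dev/sdX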

[ceph-users] 1 unfound object (but I can find it on-disk on the OSDs!)

2015-05-03 Thread Alex Moore
Hi all, I need some help getting my 0.87.1 cluster back into a healthy state... Overnight, a deep scrub detected an inconsistent object in a PG. Ceph health detail said the following: # ceph health detail HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 2.3b is active+clean+inconsistent, acting [1
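A sketch of the usual first inspection steps for a case like this, assuming pg 2.3b from the health output above (not necessarily the fix the poster ended up using):

  # list which objects the PG considers missing/unfound
  $ ceph pg 2.3b list_missing
  # show peering state and which OSDs are being queried for the object
  $ ceph pg 2.3b query
  # ask the primary to repair the inconsistency found by the deep scrub
  $ ceph pg repair 2.3b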

Re: [ceph-users] Kicking 'Remapped' PGs

2015-05-03 Thread Paul Evans
Thanks, Greg. Following your lead, we discovered the proper 'set_choose_tries xxx' value had not been applied to *this* pool's rule, and we updated the cluster accordingly. We then moved a random OSD out and back in to 'kick' things, but no joy: we still have the 4 'remapped' PGs. BTW: the 4 P
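For anyone following along, a hedged sketch of how a rule change like this is normally applied; the tries value and file names are placeholders, not taken from this thread:

  $ ceph osd getcrushmap -o crush.bin
  $ crushtool -d crush.bin -o crush.txt
  # edit the pool's rule: add "step set_choose_tries 100" before its chooseleaf step
  $ crushtool -c crush.txt -o crush.new
  $ ceph osd setcrushmap -i crush.new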

Re: [ceph-users] 1 unfound object (but I can find it on-disk on the OSDs!)

2015-05-03 Thread Alex Moore
Okay, I have now managed to return the cluster to a healthy state, but instead using the version of the object from OSDs 0 and 2 rather than from OSD 1. I set the "noout" flag and shut down OSD 1. That appears to have resulted in the cluster being happy to use the version of the object that was pr
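Roughly the sequence described above, as a sketch (init-script syntax matches the earlier post in this digest; osd.1 per this thread):

  $ ceph osd set noout              # keep the stopped OSD from being marked out
  $ /etc/init.d/ceph stop osd.1     # reads now go to the copies on OSDs 0 and 2
  # wait for the PG to return to active+clean, then bring the OSD back
  $ /etc/init.d/ceph start osd.1
  $ ceph osd unset noout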

Re: [ceph-users] Help with CEPH deployment

2015-05-03 Thread Mark Kirkwood
On 04/05/15 05:42, Venkateswara Rao Jujjuri wrote: Here is the output... I am still stuck at this step. :( (tried multiple times by purging and restarting from scratch) vjujjuri@rgulistan-wsl10:~/ceph-cluster$ ceph-deploy mon create-initial [ceph_deploy.conf][DEBUG ] found configuration file at
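A minimal troubleshooting sketch, assuming the mon host is rgulistan-wsl10 (taken from the shell prompt above) and the stock admin-socket path:

  # after create-initial, confirm the monitor actually reached quorum
  $ ceph-deploy mon create-initial
  $ ceph --admin-daemon /var/run/ceph/ceph-mon.rgulistan-wsl10.asok mon_status
  # "state": "leader" or "peon" means quorum was formed; "probing" usually
  # points at hostname resolution or a firewall blocking port 6789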

[ceph-users] Btrfs defragmentation

2015-05-03 Thread Lionel Bouton
Hi, we began testing one Btrfs OSD volume last week and for this first test we disabled autodefrag and began to launch manual btrfs fi defrag. During the tests, I monitored the number of extents of the journal (10GB) and it went through the roof (it currently sits at 8000+ extents for example). I
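For reference, a sketch of how the journal's fragmentation can be tracked and a single-file defrag launched; the OSD path is illustrative, not from the original post:

  # count the extents backing the 10GB journal file
  $ filefrag /var/lib/ceph/osd/ceph-0/journal
  # defragment just that file rather than the whole volume
  $ btrfs filesystem defragment /var/lib/ceph/osd/ceph-0/journal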

Re: [ceph-users] Btrfs defragmentation

2015-05-03 Thread Sage Weil
On Mon, 4 May 2015, Lionel Bouton wrote: > Hi, > > we began testing one Btrfs OSD volume last week and for this first test > we disabled autodefrag and began to launch manual btrfs fi defrag. > > During the tests, I monitored the number of extents of the journal > (10GB) and it went through the r

Re: [ceph-users] Btrfs defragmentation

2015-05-03 Thread Lionel Bouton
On 05/04/15 01:34, Sage Weil wrote: > On Mon, 4 May 2015, Lionel Bouton wrote: >> Hi, we began testing one Btrfs OSD volume last week and for this >> first test we disabled autodefrag and began to launch manual btrfs fi >> defrag. During the tests, I monitored the number of extents of the >> journa

[ceph-users] How to add a slave to rgw

2015-05-03 Thread 周炳华
Hi, geeks: I have a ceph cluster for rgw service in production, which was set up according to the simple configuration tutorial, with only one default region and one default zone. Even worse, I enabled neither the meta logging nor the data logging. Now I want to add a slave zone to the rgw fo
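As a rough sketch of what Hammer-era federated rgw expects; the region, zone names, and endpoints below are placeholders, not details from the original post:

  $ radosgw-admin region set --infile region.json
  $ radosgw-admin zone set --rgw-zone=slave --infile zone-slave.json
  $ radosgw-admin regionmap update
  # region.json needs meta/data logging enabled per zone, e.g.:
  #   "zones": [
  #     { "name": "master", "endpoints": ["http://rgw-master:80/"],
  #       "log_meta": "true", "log_data": "true" },
  #     { "name": "slave",  "endpoints": ["http://rgw-slave:80/"],
  #       "log_meta": "true", "log_data": "true" } ]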

Re: [ceph-users] Upgrade from Giant to Hammer and after some basic operations most of the OSD's went down

2015-05-03 Thread Tuomas Juntunen
Hi, Thanks Sage, I got it working now. Everything else seems to be OK, except the MDS is reporting "mds cluster is degraded"; I'm not sure what could be wrong. The MDS is running, all OSDs are up, and PGs are active+clean and active+clean+replay. Had to delete some empty pools which were created while the
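A couple of quick checks for the degraded MDS, as a sketch:

  $ ceph mds stat        # shows up:replay / up:rejoin etc. while degraded
  $ ceph mds dump | grep -E 'up|in|failed'
  $ ceph -s              # overall cluster and MDS state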