Re: [ceph-users] Disaster recovery of monitor

2015-11-16 Thread Jose Tavares
The problem is that I think I don't have any good monitor anymore. How do I know if the map I am trying is ok? I also saw in the logs that the primary mon was trying to contact a removed mon at IP .112, so I added .112 again... and it didn't help. Attached are the logs of what is going on and
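A monmap can be inspected before it is injected; below is a dry-run sketch of the usual extract/print/inject workflow (the mon id `mon0`, the stale mon name, and the scratch path are placeholder assumptions, not values from this thread):

```shell
MON_ID=mon0        # assumption: the id of the monitor being recovered
MAP=/tmp/monmap    # scratch file for the extracted map

# Build the plan as text first; this script only prints the commands.
# Run them by hand (with the mon daemon stopped) once the map looks right.
plan="ceph-mon -i $MON_ID --extract-monmap $MAP
monmaptool --print $MAP
monmaptool $MAP --rm stale-mon
ceph-mon -i $MON_ID --inject-monmap $MAP"

echo "$plan"
```

`monmaptool --print` is the quickest way to check whether a map lists the monitors and addresses you expect; a stale entry such as the removed .112 mon can be dropped with `--rm` before re-injecting.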

Re: [ceph-users] can't stop ceph

2015-11-16 Thread Yonghua Peng
Thanks a lot, that works. Do you know how to stop all ceph-osd daemons via one command? On 2015/11/17 Tue 9:35, wd_hw...@wistron.com wrote: Hi, You may try the following command 'sudo stop ceph-mon id=ceph2' WD -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.
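On Ubuntu's upstart packaging there are aggregate jobs, so `sudo stop ceph-osd-all` (or `sudo stop ceph-all` for every daemon) usually does it; whether those jobs are shipped depends on the package version, so treat that as an assumption. Failing that, per-daemon stop commands can be generated from `ps` output; a sketch over a canned sample:

```shell
# Canned sample of 'ps -C ceph-osd -o args=' output (hypothetical osd ids 0 and 5):
sample='/usr/bin/ceph-osd --cluster=ceph -i 0 -f
/usr/bin/ceph-osd --cluster=ceph -i 5 -f'

# Emit one upstart 'stop' command per running daemon (print only; pipe to sh to run):
echo "$sample" | awk '{for (i = 1; i <= NF; i++) if ($i == "-i") print "sudo stop ceph-osd id=" $(i+1)}'
```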

Re: [ceph-users] can't stop ceph

2015-11-16 Thread WD_Hwang
Hi, You may try the following command 'sudo stop ceph-mon id=ceph2' WD -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Yonghua Peng Sent: Tuesday, November 17, 2015 9:34 AM To: ceph-users@lists.ceph.com Subject: [ceph-users] can't stop ceph He

[ceph-users] can't stop ceph

2015-11-16 Thread Yonghua Peng
Hello, My system is Ubuntu 12.04, with ceph 0.80.10 installed. I followed the document here, http://docs.ceph.com/docs/argonaut/init/ but couldn't stop a mon daemon successfully. root@ceph2:~# ps -efw|grep ceph- root 2763 1 0 Oct28 ? 00:05:11 /usr/bin/ceph-mon --cluster=ceph -i cep

[ceph-users] radosgw and ec pools

2015-11-16 Thread Deneau, Tom
I normally use ceph-deploy rgw create hostname to get radosgw started, and this works well, with all the gw pools created as replicated with the default replication size. I have seen mails on the list where people used ec pools for at least some of the gw pools. Do you just create the gw poo
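One approach is to create the EC data pool yourself before the gateway first starts, so radosgw finds it instead of auto-creating a replicated one. A print-only sketch; the profile name, k/m values, pg count, and the pre-Jewel pool name are all assumptions:

```shell
# Plan only; nothing here talks to a cluster. Run the lines by hand when ready.
plan="ceph osd erasure-code-profile set rgw_ec k=4 m=2
ceph osd pool create .rgw.buckets 128 128 erasure rgw_ec"
echo "$plan"
```

Only the bucket data pool is a good EC candidate: the index and other metadata pools rely on omap, which erasure-coded pools do not support, so those should stay replicated.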

[ceph-users] Disaster recovery of monitor

2015-11-16 Thread Jose Tavares
Hi guys ... I need some help as my cluster seems to be corrupted. I saw here .. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg01919.html .. a msg from 2013 where Peter had a problem with his monitors. I had the same problem today when trying to add a new monitor, and then playing with

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread Claes Sahlström
Yes, I upgraded from Hammer 0.94.4. And "ceph-osd --version" gives the correct version, 9.2.0. I think it is a problem with the communication between my OSDs and either the MONs or the other OSDs, or maybe both. I will check out those archives from Sage also... I have probably done somethi

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread Josef Johansson
And if you look through the archives, Sage did release a version of Infernalis that fixed it if you didn't do it that way as well. > On 16 Nov 2015, at 22:15, David Clarke wrote: > > On 17/11/15 09:46, Claes Sahlström wrote: >> Did some more logging and for some reason it seems like I do have some

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread David Clarke
On 17/11/15 09:46, Claes Sahlström wrote: > Did some more logging and for some reason it seems like I do have some > problem communicating with my OSDs: > > > > “ceph tell osd.* version” gives two different errors that might shed > some light on what is going on… > > > > osd.0: Error ENXIO:

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread Josef Johansson
Hi, That piece of code is keeping your OSD from booting. You could run the below to check the version as well; might do that with the mon too, just to be sure. # /usr/bin/ceph-osd --version ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3) Regards, /Josef > On 16 Nov 2015,

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread Claes Sahlström
Did some more logging and for some reason it seems like I do have some problem communicating with my OSDs: “ceph tell osd.* version” gives two different errors that might shed some light on what is going on… osd.0: Error ENXIO: problem getting command descriptions from osd.0 osd.0: problem gett
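The mixed output of "ceph tell osd.* version" can be summarized to see how many daemons are actually reachable; a sketch over a canned sample (the osd.1 JSON line and its sha are hypothetical, only the ENXIO lines come from this thread):

```shell
sample='osd.0: Error ENXIO: problem getting command descriptions from osd.0
osd.1: {"version":"ceph version 9.2.0 (hypothetical-sha)"}
osd.2: Error ENXIO: problem getting command descriptions from osd.2'

# One line per OSD: count ENXIO failures vs. successful version replies.
echo "$sample" | awk '/Error ENXIO/ {bad++; next} {good++} END {print good " reachable, " bad " unreachable"}'
```

ENXIO there means the command request never got an answer from the daemon at all, which fits a communication problem between mon and OSD rather than a crashed process.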

[ceph-users] Nov Ceph Tech Talk Cancelled

2015-11-16 Thread Patrick McGarry
Hey cephers, Just letting you know that due to unforeseen circumstances, and then the holidays and travel concerns, the Ceph Tech Talk program will be placed on hold until after the new year. See you all in Jan 2016! http://ceph.com/ceph-tech-talks/ -- Best Regards, Patrick McGarry Director

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread Claes Sahlström
After some time 4 more OSD:s from one server dropped out, and it now seems that only 3 OSD:s from 1 server (I have 3 servers, each with 4 OSD:s) are marked as up; the other 9 are down. I have shut the servers down for now since I will not have any time to work with this until the weekend. Any sugg

Re: [ceph-users] Math behind : : OSD count vs OSD process vs OSD ports

2015-11-16 Thread Vickey Singh
Hello Community, Need your help in understanding this. I have the below node, which is hosting 60 physical disks, running 1 OSD per disk, so 60 Ceph OSD daemons in total. [root@node01 ~]# service ceph status | grep -i osd | grep -i running | wc -l 60 [root@node01 ~]# However if I check OSD proc

[ceph-users] Math behind : : OSD count vs OSD process vs OSD ports

2015-11-16 Thread Vickey Singh
Hello Community, Need your help in understanding this. I have the below node, which is hosting 60 physical disks, running 1 OSD per disk, so 60 Ceph OSD daemons in total. [root@node01 ~]# service ceph status | grep -i osd | grep -i running | wc -l 60 [root@node01 ~]# However if I check OSD proc
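For the port math: in this era each ceph-osd daemon is a single process but binds roughly four listening ports (public messenger, cluster messenger, and the heartbeat front/back messengers), so 60 OSDs means about 240 ports. A quick sanity check, with the per-daemon count as a stated assumption:

```python
# Assumption: 4 listening ports per OSD daemon (public, cluster,
# heartbeat front, heartbeat back), the usual hammer-era layout.
PORTS_PER_OSD = 4

def expected_ports(n_osds: int, ports_per_osd: int = PORTS_PER_OSD) -> int:
    """Approximate number of listening TCP ports for n_osds daemons on one node."""
    return n_osds * ports_per_osd

print(expected_ports(60))  # 60 processes, roughly 240 listening ports
```

So a far larger port count than process count is expected; the extra entries in ps are usually just the threads of each single OSD process.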

[ceph-users] Fixing inconsistency

2015-11-16 Thread Межов Игорь Александрович
Hi! We had a hard crash on one node: it hung in an indefinite state and did not respond to network requests or even console commands. After the node restart, all OSDs successfully mounted their filesystems (ext4) and rejoined the cluster. Some time later, the scrub process found two errors. The f
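For scrub-detected inconsistencies, the usual sequence is to locate the affected PGs and then ask the primary to repair them. A print-only sketch; the pg id 2.1f is a placeholder, not from this thread:

```shell
# Plan only; run the lines by hand against the cluster once the bad pg is known.
plan="ceph health detail
ceph pg repair 2.1f"
echo "$plan"
```

`ceph health detail` lists lines like "pg 2.1f is active+clean+inconsistent"; on filestore, repair tends to favor the primary's copy, so it is worth checking the deep-scrub messages in the OSD logs to see which replica is actually bad before repairing.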

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread Claes Sahlström
Tried shutting down my 3 servers and then started them again, but I just got back to where I was yesterday with 7 working OSD:s and 5 down. Will have to look more into this; as long as the disks are ok and I do not erase the data on the OSD:s, I hope I will be able to get the system online ag

Re: [ceph-users] all pgs of erasure coded pool stuck stale

2015-11-16 Thread Kenneth Waegeman
On 13/11/15 19:14, Gregory Farnum wrote: Somebody else will need to do the diagnosis, but it'll help them if you can get logs with "debug ms = 1", "debug osd = 20" in the log. Based on the required features update in the crush map, it looks like maybe you've upgraded some of your OSDs — is tha
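The requested debug settings can be dropped into ceph.conf ahead of a daemon restart; a minimal fragment:

```ini
; ceph.conf fragment: verbose messenger and OSD logging for diagnosis
[osd]
debug ms = 1
debug osd = 20
```

They can also be applied to a running daemon without a restart, e.g. ceph tell osd.N injectargs '--debug-ms 1 --debug-osd 20' (and turned back down the same way afterwards, since debug osd = 20 generates a lot of log volume).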

Re: [ceph-users] Ceph Openstack deployment

2015-11-16 Thread Iban Cabrillo
cephvolume:~ # cinder-manage service list (from cinder) /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/base.py:20: DeprecationWarning: The oslo namespace package is deprecated. Please use oslo_config instead. from oslo.config import cfg 2015-11-16 13:01:42.203 23787 DEBUG oslo_db.api [r

Re: [ceph-users] Ceph Openstack deployment

2015-11-16 Thread M Ranga Swami Reddy
Hi, Can you share the output of below command: cinder-manage service list On Mon, Nov 16, 2015 at 4:45 PM, Iban Cabrillo wrote: > cloud:~ # cinder list > > +--+---+--+--+-+--+-+ > | ID

Re: [ceph-users] Ceph Openstack deployment

2015-11-16 Thread Iban Cabrillo
cloud:~ # cinder list +--+---+--+--+-+--+-+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--+-

Re: [ceph-users] Multipath Support on Infernalis

2015-11-16 Thread John Spray
On Sat, Nov 14, 2015 at 4:30 AM, Ramakrishna Nishtala (rnishtal) wrote: > Hi, > > It appears that Multipath support is available on 512 and not 4k sector > size. This is on RHEL 7.1. Can someone please confirm? > > > > 4k Sector size > > == > > Nov 13 16:20:16 colusa5-ceph kernel: device-m

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-16 Thread Nick Fisk
I think I may have experienced something similar after upgrading to Infernalis as well. After rebooting all the Mons and OSD nodes everything returned to normal. I wasn’t suspicious of it at the time, but seeing this has got me thinking. I was seeing the same in the logs as you, the last lin

Re: [ceph-users] cephfs: Client hp-s3-r4-compute failingtorespondtocapabilityrelease

2015-11-16 Thread Burkhard Linke
Hi, On 11/13/2015 03:42 PM, Yan, Zheng wrote: On Tue, Nov 10, 2015 at 12:06 AM, Burkhard Linke > wrote: > Hi, *snipsnap* it seems the hang is related to async invalidate. please try the following patch --- diff --git a/src/client/Cl