Re: [ceph-users] Monitor rename / recreate issue -- probing state

2015-12-13 Thread deeepdish
On 12/10/2015 04:00 AM, deeepdish wrote: > Hello, > > I encountered a strange issue when rebuilding monitors reusing same > hostnames, however different IPs. > > Steps to reproduce: > > - Build monitor using ceph-deploy create mon > - Remove monitor > via http://docs.ceph.com/docs/master/rados/
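A minimal sketch of the remove/re-create cycle being described, using ceph-deploy's mon subcommands (the hostname mon01 is a placeholder, not from the thread):

    ceph-deploy mon destroy mon01   # remove the old monitor from the cluster
    ceph-deploy mon create mon01    # re-create it on the host at its new IP
    ceph -s                         # check whether the new monitor joins quorum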

[ceph-users] about federated gateway

2015-12-13 Thread 孙方臣
Hi, All, I'm setting up a federated gateway. One is the master zone, the other is the slave zone. Radosgw-agent is running in the slave zone. I have encountered some problems; can anybody help answer this: 1. When I put an object to radosgw, two bilogs are generated. One is in the "pending" state, the other

[ceph-users] where is the client

2015-12-13 Thread Linux Chips
Hi all, I have been trying to send this to the dev mailing list, but the mail was rejected, for whatever reason, though I am subscribed. Anyone facing this issue with the dev list? I thought it was most related to dev since I was digging inside the code for a wil

Re: [ceph-users] Monitor rename / recreate issue -- probing state

2015-12-13 Thread Joao Eduardo Luis
On 12/13/2015 12:26 PM, deeepdish wrote: >> >> This appears to be consistent with a wrongly populated 'mon_host' and >> 'mon_initial_members' in your ceph.conf. >> >> -Joao > > > Thanks Joao. I had a look but my other 3 monitors are working just > fine. To be clear, I’ve confirmed the same b

Re: [ceph-users] Monitor rename / recreate issue -- probing state

2015-12-13 Thread deeepdish
Perhaps I'm not understanding something. The "extra_probe_peers" ARE the other working monitors in quorum, taken from the mon_host line in ceph.conf. In the example below, 10.20.1.8 = b20s08; 10.20.10.251 = smon01s; 10.20.10.252 = smon02s. The monitor is not reaching out to the other IPs and syncing
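For reference, a sketch of the ceph.conf fragment under discussion, built only from the name/IP pairs quoted above; the rebuilt monitor's own entry would still need to be added with its new name and address:

    [global]
    mon_initial_members = b20s08, smon01s, smon02s
    mon_host = 10.20.1.8, 10.20.10.251, 10.20.10.252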

[ceph-users] All pgs stuck peering

2015-12-13 Thread Chris Dunlop
Hi, ceph 0.94.5. After restarting one of our three OSD hosts to increase the RAM and change from Linux 3.18.21 to 4.1., the cluster is stuck with all pgs peering: # ceph -s cluster c6618970-0ce0-4cb2-bc9a-dd5f29b62e24 health HEALTH_WARN 3072 pgs peering 3072 pgs s
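A hedged first check after a host restart like this is whether the restarted host's OSDs are actually marked up, before digging into individual PGs:

    ceph osd tree      # the rebooted host's OSDs should show "up"
    ceph osd stat      # summary of up/in OSD counts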

Re: [ceph-users] All pgs stuck peering

2015-12-13 Thread Varada Kari
Can you get the details of 1. ceph health detail 2. ceph pg query of any one PG stuck peering Varada > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Chris Dunlop > Sent: Monday, December 14, 2015 8:22 AM > To: ceph-users@lists.ceph.com >
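A sketch of those two commands as they are usually run (the PG id 3.1f is a placeholder; substitute one reported as stuck peering):

    ceph health detail
    ceph pg 3.1f query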

[ceph-users] Cephfs I/O when no I/O operations are submitted

2015-12-13 Thread xiafei
Hi all, I have a question about the I/O of cephfs. I configured cephfs with 2 OSDs and 300 PGs on two HDDs. Then I use iostat (iostat -kx 1 /dev/sdd1) to monitor the I/Os of the HDDs (/dev/sdd1). The result is as follows: The iostat shows that there are write requests every second. H

Re: [ceph-users] Cephfs I/O when no I/O operations are submitted

2015-12-13 Thread Christian Balzer
Hello, On Mon, 14 Dec 2015 11:46:43 +0800 xiafei wrote: > Hi all, > I have a question about the I/O of cephfs. > I configure cephfs with 2 OSDs, and 300PGs in two HDDs. Then I use the > iostat (iostat -kx 1 /dev/sdd1) to monitor the I/Os of the HDDs > (/dev/sdd1). The result is as follows:
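One way to narrow down where the idle-time writes land is to compare the block-level view with the OSD's own counters; osd.0 is an assumption for the OSD on node0:

    iostat -kx 1 /dev/sdd1            # block-level view, as used above
    ceph daemon osd.0 perf dump       # per-OSD internal counters, run on the OSD host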

Re: [ceph-users] All pgs stuck peering

2015-12-13 Thread Chris Dunlop
Hi Varada, On Mon, Dec 14, 2015 at 03:23:20AM +, Varada Kari wrote: > Can get the details of > > 1. ceph health detail > 2. ceph pg query > > of any one PG stuck peering > > > Varada The full health detail is over 9000 lines, but here's a summary: # ceph health detail | head HEALTH_WA
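Since peering PGs are by definition not active, a short listing like the following (a sketch, not from the thread) keeps the output manageable:

    ceph pg dump_stuck inactive | head
    ceph pg dump_stuck inactive | wc -l   # how many PGs are stuck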

Re: [ceph-users] All pgs stuck peering

2015-12-13 Thread Robert LeBlanc
I've had something similar to this when there was an MTU mismatch; the smaller I/O would get through, but the larger I/O would be blocked and prevent peering. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2
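A quick way to test for the MTU mismatch Robert describes is to send non-fragmentable pings at close to the configured MTU between the OSD hosts (eth0 and other-osd-host are placeholders; 8972 assumes a 9000-byte MTU minus IP/ICMP headers):

    ip link show eth0 | grep mtu             # confirm the configured MTU
    ping -M do -s 8972 -c 3 other-osd-host   # fails if a hop only passes smaller frames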

Re: [ceph-users] All pgs stuck peering

2015-12-13 Thread Chris Dunlop
On Sun, Dec 13, 2015 at 09:10:34PM -0700, Robert LeBlanc wrote: > I've had something similar to this when there was an MTU mismatch, the > smaller I/O would get through, but the larger I/O would be blocked and > prevent peering. > > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 4

Re: [ceph-users] Cephfs I/O when no I/O operations are submitted

2015-12-13 Thread xiafei
Hi Christian, My configuration is as follows: Two nodes: node0 (10.10.0.23), node1 (10.10.0.24). node0: Monitor, MDS, OSD0 on /dev/sdd1. node1: OSD1 on /dev/sdd1. The cephfs is mounted using: mount.ceph 10.10.0.23:6789:/ /mnt/mycephfs/ After finishing configuration (PGs are
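If cephx authentication is enabled, the mount usually also needs a client name and key; a sketch with placeholder credentials (the secretfile path is an assumption):

    mount -t ceph 10.10.0.23:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret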