On 12/10/2015 04:00 AM, deeepdish wrote:
> Hello,
>
> I encountered a strange issue when rebuilding monitors that reuse the same
> hostnames but with different IPs.
>
> Steps to reproduce:
>
> - Build monitor using ceph-deploy create mon
> - Remove monitor
> via http://docs.ceph.com/docs/master/rados/
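For reference, a rough sketch of that build/remove cycle using ceph-deploy only;
"mon01" is a placeholder hostname, and the docs link above covers the manual
removal steps in more detail:

ceph-deploy mon create mon01     # build the monitor on host mon01
ceph-deploy mon destroy mon01    # remove it again before the rebuild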
Hi, All,
I'm setting up a federated gateway. One is the master zone, the other is the
slave zone. Radosgw-agent is running in the slave zone. I have encountered some
problems; can anybody help answer this:
1. When putting an object to radosgw, two bilog entries are generated. One is
in the "pending" state, the other
Hi all,
I have been trying to send this to the dev mailing list, but the
mail was rejected, for whatever reason, though I am subscribed. Anyone
facing this issue with the dev list? I thought it was most relevant to
the dev list since I was digging inside the code for a while
On 12/13/2015 12:26 PM, deeepdish wrote:
>>
>> This appears to be consistent with a wrongly populated 'mon_host' and
>> 'mon_initial_members' in your ceph.conf.
>>
>> -Joao
>
>
> Thanks Joao. I had a look but my other 3 monitors are working just
> fine. To be clear, I’ve confirmed the same b
Perhaps I’m not understanding something...
The “extra_probe_peers” ARE the other working monitors in the quorum, taken from
the mon_host line in ceph.conf.
In the example below 10.20.1.8 = b20s08; 10.20.10.251 = smon01s; 10.20.10.252 =
smon02s
The monitor is not reaching out to the other IPs and syncing
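For clarity, the relevant ceph.conf lines look roughly like this (only the three
working monitors are shown, matching the mapping above):

[global]
mon_initial_members = b20s08, smon01s, smon02s
mon_host = 10.20.1.8, 10.20.10.251, 10.20.10.252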
Hi,
ceph 0.94.5
After restarting one of our three OSD hosts to increase the RAM and change
from Linux 3.18.21 to 4.1, the cluster is stuck with all PGs peering:
# ceph -s
    cluster c6618970-0ce0-4cb2-bc9a-dd5f29b62e24
     health HEALTH_WARN
            3072 pgs peering
            3072 pgs s
Can you get the details of
1. ceph health detail
2. ceph pg query
of any one PG stuck peering
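For example (the PG id 2.ef below is only a stand-in for whichever PG is stuck):

ceph health detail            # lists the stuck/peering PGs
ceph pg 2.ef query            # dumps the detailed peering state of one of them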
Varada
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Chris Dunlop
> Sent: Monday, December 14, 2015 8:22 AM
> To: ceph-users@lists.ceph.com
>
Hi all,
I have a question about the I/O of cephfs.
I configured CephFS with 2 OSDs and 300 PGs on two HDDs. Then I use iostat
(iostat -kx 1 /dev/sdd1) to monitor the I/O of the HDDs (/dev/sdd1).
The result is as follows:
The iostat shows that there are write requests every second. H
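(For reference, this is the exact invocation; in the -x output the columns of
interest here are w/s and wkB/s, i.e. write requests and kilobytes written per
second.)

iostat -kx 1 /dev/sdd1        # extended per-device stats in kB, refreshed every second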
Hello,
On Mon, 14 Dec 2015 11:46:43 +0800 xiafei wrote:
> Hi all,
> I have a question about the I/O of cephfs.
> I configure cephfs with 2 OSDs, and 300PGs in two HDDs. Then I use the
> iostat (iostat -kx 1 /dev/sdd1) to monitor the I/Os of the HDDs
> (/dev/sdd1). The result is as follows:
Hi Varada,
On Mon, Dec 14, 2015 at 03:23:20AM +, Varada Kari wrote:
> Can you get the details of
>
> 1. ceph health detail
> 2. ceph pg query
>
> of any one PG stuck peering
>
>
> Varada
The full health detail is over 9000 lines, but here's a summary:
# ceph health detail | head
HEALTH_WARN
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
I've had something similar to this when there was an MTU mismatch, the
smaller I/O would get through, but the larger I/O would be blocked and
prevent peering.
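A quick way to rule that out, assuming a 9000-byte MTU on the cluster network
(substitute your own MTU and a real OSD host for the placeholder):

ping -M do -s 8972 <osd-host>    # don't fragment; 8972 = 9000 minus 28 bytes of IP+ICMP headers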
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2
On Sun, Dec 13, 2015 at 09:10:34PM -0700, Robert LeBlanc wrote:
> I've had something similar to this when there was an MTU mismatch, the
> smaller I/O would get through, but the larger I/O would be blocked and
> prevent peering.
>
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 4
Hi Christian,
My configuration is as follows:
Two nodes: node0 (10.10.0.23), node1 (10.10.0.24)
node0: Monitor, MDS, OSD0 in /dev/sdd1
node1: OSD1 in /dev/sdd1
The cephfs is mounted using: mount.ceph 10.10.0.23:6789:/ /mnt/mycephfs/
After finishing configuration (PGs are