On Mon, Jul 29, 2013 at 2:55 PM, Chen, Xiaoxi wrote:
> From: zrz...@gmail.com [mailto:zrz...@gmail.com] On Behalf Of Rongze Zhu
> Sent: Monday, July 29, 2013 2:18 PM
> To: Chen, Xiaoxi
> Cc: Gregory Farnum; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] add crush rule in one command
From: zrz...@gmail.com [mailto:zrz...@gmail.com] On Behalf Of Rongze Zhu
Sent: Monday, July 29, 2013 2:18 PM
To: Chen, Xiaoxi
Cc: Gregory Farnum; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add crush rule in one command
On Sat, Jul 27, 2013 at 4:25 PM, Chen, Xiaoxi <xiaoxi.c...@
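(For anyone searching the archives: on recent releases a CRUSH rule can
usually be added in a single step with "ceph osd crush rule create-simple",
without decompiling the map by hand. A minimal sketch, with placeholder
names:

    # create a rule named "ssd-rule" rooted at the "ssd" bucket,
    # replicating across hosts
    ceph osd crush rule create-simple ssd-rule ssd host

    # point an existing pool at the new rule by its rule id
    ceph osd pool set mypool crush_ruleset 3

The rule name, root bucket, failure domain and rule id above are examples,
not values from this thread.)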
I'm currently running test pools using mkcephfs, and am now
investigating deploying using ceph-deploy. I've hit a couple of
conceptual changes which I can't find any documentation for, and was
wondering if someone here could give me some answers as to how things
now work.
While ceph-deploy create
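(For context, the rough ceph-deploy equivalent of the old mkcephfs flow
looks something like the following; hostnames and the device path are
placeholders:

    ceph-deploy new mon1 mon2 mon3          # writes ceph.conf and the initial monmap
    ceph-deploy install mon1 mon2 mon3 osd1 # installs the packages
    ceph-deploy mon create mon1 mon2 mon3
    ceph-deploy gatherkeys mon1
    ceph-deploy osd create osd1:/dev/sdb    # or osd1:/dev/sdb:/dev/sdc for a separate journal

Unlike mkcephfs, ceph-deploy does not need every OSD listed in ceph.conf up
front; it prepares and activates disks per host.)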
Hello,
I have a small test cluster that I deploy using puppet-ceph. Both the MON and
the OSDs deploy properly, and appear to have all of the correct configurations.
However, the OSDs are never marked as up. Any input is appreciated. The daemons
are running on each OSD server, the OSDs are liste
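(When the ceph-osd processes are running but never marked up, a few generic
first checks are whether the OSDs appear in the CRUSH map at all and whether
they can reach the monitors; for example, with the socket path and osd id
below being placeholders for a default-named cluster:

    ceph -s
    ceph osd tree          # are the OSDs present, and shown as down?
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep mon_host
    tail -n 100 /var/log/ceph/ceph-osd.0.log

)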
I've had a 4 node ceph cluster working well for months.
This weekend I added a 5th node to the cluster and after many hours of
rebalancing I have the following warning:
HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck
unclean
But, my big problem is that the cluster is
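(A single incomplete / stuck PG can usually be pinned down with the commands
below; the PG id shown is a placeholder for whatever ceph health detail
reports:

    ceph health detail            # names the stuck PG and the OSDs it maps to
    ceph pg dump_stuck inactive
    ceph pg 2.1f query            # substitute the reported PG id

)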
On Mon, Jul 29, 2013 at 11:36 AM, Don Talton (dotalton)
wrote:
> Hello,
>
> I have a small test cluster that I deploy using puppet-ceph. Both the MON and
> the OSDs deploy properly, and appear to have all of the correct
> configurations. However, the OSDs are never marked as up. Any input is
>
Greetings,
Has anyone come across an issue where, when Ceph-Deploy is used to deploy
a new named cluster (e.g. openstack), all the daemons start when the nodes
restart except the MON, since it used a cluster ID? The only way I can
start the MON daemon again is to run
the "m
The entirety of the osd log file is below. I tried this on both bobtail and
cuttlefish. On bobtail I noticed errors about all of the xfs features not being
supported, which have gone away in cuttlefish, so I am assuming that issue is
resolved. I don't see any other errors
2013-07-29 20:46:25.813383
Sorry, forgot to point out this:
2013-07-29 20:46:26.366916 7f4f28c6e700  0 -- 2.4.1.7:6801/13319 >> 2.4.1.8:6802/18344 pipe(0x2843780 sd=30 :53729 s=1 pgs=0 cs=0 l=0).connect claims to be 2.4.1.8:6802/17272 not 2.4.1.8:6802/18344 -
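(That "connect claims to be X not Y" message is the messenger noticing that
the daemon answering at 2.4.1.8:6802 is not the instance (pid/nonce) it
expected, typically because the peer OSD restarted and the sender is still
working from an older map. Comparing the addresses in the current osdmap with
what is actually running usually clears it up; the port below is just the one
from the log excerpt:

    ceph osd dump | grep 6802      # which osd does the map say owns 2.4.1.8:6802?
    ceph osd tree

)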
Hi,
The spec file used for building rpm's misses a build time dependency on
snappy-devel. Please see attached patch to fix.
Kind regards,
Erik.
--- ceph.spec-orig  2013-07-30 00:24:54.70500 +0200
+++ ceph.spec       2013-07-30 00:25:34.19900 +0200
@@ -42,6 +42,7 @@
 BuildRequires: libxml2-devel
+BuildRequires: snappy-devel
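(If anyone wants to test the change before it lands, applying it and
rebuilding would look roughly like this; the patch file name is just an
example:

    yum install snappy-devel
    patch -p0 < ceph-spec-snappy.patch
    rpmbuild -ba ceph.spec

)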
Thanks Erik!
Adding ceph-devel since it has a patch on it.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Mon, Jul 29, 2013 at 7:07 PM, Erik Logtenberg wrote:
> Hi,
>
> The spec file used for build
Hi Joao,
Oh! Yes. Thanks for pointing it out to me. It works after I upgraded all mons
to 0.61.7.
Keith
- Original Message -
From: "Joao Eduardo Luis"
To: ceph-users@lists.ceph.com
Sent: Friday, July 26, 2013 10:36:57 PM
Subject: Re: [ceph-users] Upgrade from 0.61.4 to 0.61.6 mon fai
My servers all have 4 x 1gb network adapters, and I'm presently using DRBD over
a bonded rr link.
Moving to ceph, I'm thinking for each server:
eth0 - LAN traffic for server and VM's
eth1 - "public" ceph traffic
eth2+eth3 - LACP bonded for "cluster" ceph traffic
I'm thinking LACP should work ok
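(That split maps directly onto the public/cluster network options in
ceph.conf, with the cluster network riding on the bonded interface; the
subnets below are placeholders:

    [global]
        public network  = 192.168.1.0/24   ; eth1
        cluster network = 10.0.0.0/24      ; eth2+eth3 bond (e.g. bond0)

LACP won't give a single OSD-to-OSD stream more than 1Gb, but it does spread
multiple replication streams across both links, which is usually what matters
for the cluster network.)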
On Mon, Jul 29, 2013 at 9:38 PM, James Harper
wrote:
> My servers all have 4 x 1gb network adapters, and I'm presently using DRBD
> over a bonded rr link.
>
> Moving to ceph, I'm thinking for each server:
>
> eth0 - LAN traffic for server and VM's
> eth1 - "public" ceph traffic
> eth2+eth3 - LACP