Hi
We found something else.
After osd.72 flapped, one PG '3.54d' was recovering for a long time.
--
ceph health detail
HEALTH_WARN 1 pgs recovering; recovery 1/39821745 degraded (0.000%)
pg 3.54d is active+recovering, acting [72,108,23]
recovery 1/39821745 degraded (0.000%)
--
Last flap down/up osd.72 w
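A rough sketch of commands that can help narrow down why a single PG stays in
active+recovering (the pg id 3.54d and osd.72 come from the output above; the
rest is generic Ceph CLI usage, not something taken from this thread):
$ ceph pg 3.54d query          # peering/recovery state as reported by the acting primary
$ ceph pg map 3.54d            # which OSDs the PG currently maps to
$ ceph pg dump_stuck unclean   # list all PGs that are not yet clean
$ ceph osd tree                # confirm osd.72 is up and in after the flapping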
Any news on this? I'm not sure if you guys received the link to the log
and monitor files. One monitor and one OSD are still crashing with the error
below.
On 2013-07-24 09:57, pe...@2force.nl wrote:
Hi Sage,
I just had a 0.61.6 monitor crash and one osd. The mon and all osds
restarted just fine a
On 07/25/2013 11:46 AM, pe...@2force.nl wrote:
Any news on this? I'm not sure if you guys received the link to the log
and monitor files. One monitor and osd is still crashing with the error
below.
I think you are seeing this issue: http://tracker.ceph.com/issues/5737
You can try with new pack
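For completeness, on a Debian/Ubuntu install using the sysvinit script the
upgrade would look roughly like this (mon.a and osd.12 are placeholders for the
crashing daemons; under upstart the restart commands differ):
$ sudo apt-get update && sudo apt-get install ceph ceph-common
$ sudo service ceph restart mon.a
$ sudo service ceph restart osd.12
$ ceph -v    # confirm the newly installed version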
On 2013-07-25 11:52, Wido den Hollander wrote:
On 07/25/2013 11:46 AM, pe...@2force.nl wrote:
Any news on this? I'm not sure if you guys received the link to the log
and monitor files. One monitor and osd is still crashing with the error
below.
I think you are seeing this issue: http://track
On 07/25/2013 12:01 PM, pe...@2force.nl wrote:
On 2013-07-25 11:52, Wido den Hollander wrote:
On 07/25/2013 11:46 AM, pe...@2force.nl wrote:
Any news on this? I'm not sure if you guys received the link to the log
and monitor files. One monitor and osd is still crashing with the error
below.
I
On 2013-07-25 12:08, Wido den Hollander wrote:
On 07/25/2013 12:01 PM, pe...@2force.nl wrote:
On 2013-07-25 11:52, Wido den Hollander wrote:
On 07/25/2013 11:46 AM, pe...@2force.nl wrote:
Any news on this? I'm not sure if you guys received the link to the log
and monitor files. One monitor and
I am thinking of making a pool per user (primarily for CephFS; for security, quotas, etc.),
hundreds (or even more) of them. But I remember 2 facts:
1) a note in the manual about a slowdown with many pools;
2) something in a later changelog about hashed pool IDs (?).
How do things stand now, and what number of pools is reasonable?
And how to avoid ser
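For concreteness, pool-per-user usually means something like the following, with
one pool and one cephx key per user (the pool and client names here are made up,
not taken from this message):
$ ceph osd pool create alice-data 64
$ ceph auth get-or-create client.alice mon 'allow r' osd 'allow rwx pool=alice-data'
Every such pool brings its own set of placement groups, which is where the
slowdown mentioned in the manual comes from.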
On 07/25/2013 11:20 AM, pe...@2force.nl wrote:
On 2013-07-25 12:08, Wido den Hollander wrote:
On 07/25/2013 12:01 PM, pe...@2force.nl wrote:
On 2013-07-25 11:52, Wido den Hollander wrote:
On 07/25/2013 11:46 AM, pe...@2force.nl wrote:
Any news on this? I'm not sure if you guys received the li
On 2013-07-25 15:21, Joao Eduardo Luis wrote:
On 07/25/2013 11:20 AM, pe...@2force.nl wrote:
On 2013-07-25 12:08, Wido den Hollander wrote:
On 07/25/2013 12:01 PM, pe...@2force.nl wrote:
On 2013-07-25 11:52, Wido den Hollander wrote:
On 07/25/2013 11:46 AM, pe...@2force.nl wrote:
Any news on
Hi List,
I've been having issues getting mons deployed following the
ceph-deploy instructions here[0]. My steps were:
$ ceph-deploy new host{1..3}
$ vi ceph.conf # Add in public network/cluster network details, as
well as change the mon IPs to those on the correct interface
$ ceph-deploy insta
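For comparison, the full quick-start sequence usually looks roughly like this
(host1..host3 from the message above; exact behaviour varies by ceph-deploy version):
$ ceph-deploy new host1 host2 host3
$ vi ceph.conf                              # add public network / cluster network, fix mon addresses
$ ceph-deploy install host1 host2 host3
$ ceph-deploy mon create host1 host2 host3
$ ceph-deploy gatherkeys host1              # should succeed once the mons have formed a quorum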
On 07/25/2013 02:39 PM, pe...@2force.nl wrote:
On 2013-07-25 15:21, Joao Eduardo Luis wrote:
On 07/25/2013 11:20 AM, pe...@2force.nl wrote:
On 2013-07-25 12:08, Wido den Hollander wrote:
On 07/25/2013 12:01 PM, pe...@2force.nl wrote:
On 2013-07-25 11:52, Wido den Hollander wrote:
On 07/25/20
On 2013-07-25 15:55, Joao Eduardo Luis wrote:
On 07/25/2013 02:39 PM, pe...@2force.nl wrote:
On 2013-07-25 15:21, Joao Eduardo Luis wrote:
On 07/25/2013 11:20 AM, pe...@2force.nl wrote:
On 2013-07-25 12:08, Wido den Hollander wrote:
On 07/25/2013 12:01 PM, pe...@2force.nl wrote:
On 2013-07-2
Links I forgot to include the first time:
[0] http://ceph.com/docs/master/rados/deployment/ceph-deploy-install/
[1] http://tracker.ceph.com/issues/5195
[2] http://tracker.ceph.com/issues/5205
Apologies for the noise,
Josh
Hi ceph-users,
I'm actually evaluating Ceph for a project and I'm getting quite low
write performance, so if you have time please read this post and
give me some advice :)
My test setup uses some free hardware we have lying around in our datacenter:
Three Ceph server nodes, on each one is r
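When comparing write numbers it helps to state how they were measured; a common
baseline is rados bench against an existing pool (the pool name 'rbd' below is
just the default pool, nothing from this post):
$ rados bench -p rbd 60 write -t 16 -b 4194304   # 60 s of 4 MB writes, 16 in flight
$ ceph -w                                        # watch client and recovery traffic while it runs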
On Thursday, July 25, 2013, Dzianis Kahanovich wrote:
> I think to make pool-per-user (primary for cephfs; for security, quota,
> etc),
> hundreds (or even more) of them. But I remember 2 facts:
> 1) info in manual about slowdown on many pools;
Yep, this is still a problem; pool-per-user isn't g
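A large part of that cost is placement groups: every pool carries its own pg_num,
and the OSDs and monitors have to track and peer all of them. A back-of-the-envelope
check (the numbers are illustrative only):
# 500 pools x 128 PGs each x 3 replicas = 192,000 PG copies
# over e.g. 100 OSDs that is ~1,920 PGs per OSD, far above the commonly
# cited target of roughly 50-100 PGs per OSD
$ ceph osd lspools   # how many pools already exist
$ ceph -s            # the pgmap line shows the current total PG count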
On Thu, 25 Jul 2013, Josh Holland wrote:
> Hi List,
>
> I've been having issues getting mons deployed following the
> ceph-deploy instructions here[0]. My steps were:
>
> $ ceph-deploy new host{1..3}
> $ vi ceph.conf # Add in public network/cluster network details, as
> well as change the mon I
mount.nfs 10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206 -o
soft,timeo=133,retrans=2147483647,tcp,noac
this gives "mount -o doesn't exist"
Moya Solutions, Inc.
am...@moyasolutions.com
0 | 646-918-5238 x 102
F | 646-390-1806
- Original Message -
From: "Sébastien RICCIO"
To: "Andr
On Thu, Jul 25, 2013 at 12:47 AM, Mostowiec Dominik
wrote:
> Hi
> We found something else.
> After osd.72 flapp, one PG '3.54d' was recovering long time.
>
> --
> ceph health details
> HEALTH_WARN 1 pgs recovering; recovery 1/39821745 degraded (0.000%)
> pg 3.54d is active+recovering, acting [72,1
On Wednesday, 24 July 2013 at 22:45:55, Sage Weil wrote:
> Go forth and test!
I just upgraded a 0.61.7 cluster to 0.67-rc2. I restarted the mons first, and
as expected, they did not join a quorum with the 0.61.7 mons, but after all of
the mons were restarted, there was no problem any more.
One o
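A quick way to check, during this kind of rolling mon upgrade, which monitors
have actually joined the quorum (generic commands, not quoted from this message):
$ ceph quorum_status
$ ceph mon stat    # one-line summary of quorum membership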
On Wed, 24 Jul 2013, pe...@2force.nl wrote:
> On 2013-07-24 07:19, Sage Weil wrote:
> > On Wed, 24 Jul 2013, Sébastien RICCIO wrote:
> > >
> > > Hi! While trying to install ceph using ceph-deploy the monitors nodes are
> > > stuck waiting on this process:
> > > /usr/bin/python /usr/sbin/ceph-creat
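When ceph-create-keys hangs like this it is usually waiting for the monitors to
form a quorum; the mon admin socket shows what state each mon is actually in (the
socket path below is the default location, with the short hostname as a placeholder):
$ ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
# "state" should be "leader" or "peon"; "probing" or "electing" usually means
# the mons cannot reach each other on the addresses written into ceph.conf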
Hi Sage,
On 25 July 2013 17:21, Sage Weil wrote:
> I suspect the difference here is that the dns names you are specifying in
> ceph-deploy new do not match.
Aha, this could well be the problem. The current DNS names resolve to
the address bound to an interface that is intended to be used mostly
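The relevant pieces are what ceph-deploy wrote into ceph.conf versus what the
hosts call themselves; roughly (the values shown are examples only):
$ hostname -s    # should match an entry in "mon initial members"
$ grep -E 'mon initial members|mon host|public network' ceph.conf
# mon initial members = host1, host2, host3
# mon host = 10.0.0.1,10.0.0.2,10.0.0.3   <- should be the IPs on the intended public interface
# public network = 10.0.0.0/24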
Yes, those drives are horrible, and you have them partitioned, etc.
- Don't use mdadm for Ceph OSDs; in my experience it *does* impair performance,
and it just doesn't play nice with OSDs.
- Ceph does its own block replication, though be careful: a size of "2" is
not necessarily as "safe" as RAID 10.
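For reference, the replica count is a per-pool setting; a minimal sketch (the pool
name 'rbd' is just the default pool):
$ ceph osd pool get rbd size
$ ceph osd pool set rbd size 3       # three copies instead of two
$ ceph osd pool set rbd min_size 2   # refuse I/O when fewer than two copies are available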
On 07/24/2013 09:37 PM, Mikaël Cluseau wrote:
Hi,
I have a bug in the 3.10 kernel under Debian, whether it is a self-compiled
linux-stable from git (built with make-kpkg) or sid's package.
I'm using format-2 images (ceph version 0.61.6
(59ddece17e36fef69ecf40e239aeffad33c9db35)) to make snapsho
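If anyone wants to reproduce this, checking which image format an image uses is
cheap (pool and image names below are placeholders):
$ rbd info rbd/myimage            # the "format:" line shows 1 or 2
$ rbd snap create rbd/myimage@test
$ rbd snap ls rbd/myimage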
On 07/23/2013 06:09 AM, Oliver Schulz wrote:
Dear Ceph Experts,
I remember reading that, at least in the past, it wasn't recommended
to mount Ceph storage on a Ceph cluster node. Given a recent kernel
(3.8/3.9) and sufficient CPU and memory resources on the nodes,
would it now be safe to
* Mount R
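The usual caveat concerns the kernel clients (krbd / kernel CephFS) on a machine
that also runs OSDs, because of possible memory-pressure deadlocks; the userspace
clients avoid that. A sketch of the FUSE route (monitor address and mount point
are placeholders):
$ sudo mkdir -p /mnt/ceph
$ sudo ceph-fuse -m 10.0.0.1:6789 /mnt/ceph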
Hi folks,
Recently, I have been using Puppet to deploy Ceph and integrate Ceph with OpenStack. We
put compute and storage together in the same cluster, so nova-compute and
OSDs will be on each server. We will create a local pool for each server,
and the pool will only use the disks of that server. Local pools will
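Restricting a pool to the disks of one host is done through CRUSH; one way is to
edit the map by hand (host, rule and file names below are placeholders):
$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
# add a rule along these lines to crush.txt, with "take" pointing at one host bucket:
#   rule local-host1 {
#       ruleset 3
#       type replicated
#       min_size 1
#       max_size 10
#       step take host1
#       step chooseleaf firstn 0 type osd
#       step emit
#   }
$ crushtool -c crush.txt -o crush.new
$ ceph osd setcrushmap -i crush.new
$ ceph osd pool create local-host1 64
$ ceph osd pool set local-host1 crush_ruleset 3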
Any idea how we tweak this? If I want to keep my ceph node root
volume at 85% used, that's my business, man.
Thanks.
--Greg
On Mon, Jul 8, 2013 at 4:27 PM, Mike Bryant wrote:
> Run "ceph health detail" and it should give you more information.
> (I'd guess an osd or mon has a full hard disk)
>
Hi all,
2 days ago, I upgraded one of my mons from 0.61.4 to 0.61.6. The mon failed to
start. I checked the mailing list and found reports of mons failing after
upgrading to 0.61.6. So I waited for the next release and upgraded the failed
mon from 0.61.6 to 0.61.7. My mon still fails to start up.
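To get something actionable into the thread, the failing mon can be run in the
foreground with verbose logging (the mon id 'a' is a placeholder):
$ ceph-mon -i a -d --debug-mon 20 --debug-ms 1   # -d: stay in foreground, log to stderr
# or raise logging in ceph.conf ([mon] debug mon = 20, debug ms = 1) and check
# /var/log/ceph/ceph-mon.a.log after the next start attempt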
On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu wrote:
> Hi folks,
>
> Recently, I use puppet to deploy Ceph and integrate Ceph with OpenStack. We
> put compute and storage together in the same cluster. So nova-compute and
> OSDs will be in each server. We will create a local pool for each server,
> an
On Thu, Jul 25, 2013 at 7:42 PM, Greg Chavez wrote:
> Any idea how we tweak this? If I want to keep my ceph node root
> volume at 85% used, that's my business, man.
There are config options you can set. On the monitors they are "mon
osd full ratio" and "mon osd nearfull ratio"; on the OSDs you m
On Fri, Jul 26, 2013 at 1:22 PM, Gregory Farnum wrote:
> On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu
> wrote:
> > Hi folks,
> >
> > Recently, I use puppet to deploy Ceph and integrate Ceph with OpenStack.
> We
> > put compute and storage together in the same cluster. So nova-compute and
> > OSDs
On Fri, Jul 26, 2013 at 2:27 PM, Rongze Zhu wrote:
>
>
>
> On Fri, Jul 26, 2013 at 1:22 PM, Gregory Farnum wrote:
>
>> On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu
>> wrote:
>> > Hi folks,
>> >
>> > Recently, I use puppet to deploy Ceph and integrate Ceph with
>> OpenStack. We
>> > put computean