Hi,
from my experience, both "ceph osd crush reweight" and "ceph osd
reweight" will lead to CRUSH map changes and PG remapping. So both
commands eventually redistribute data between OSDs. Is there any good
reason, in terms of Ceph performance best practices, to choose one
over the other?
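For reference, the two commands differ in scope. A sketch of typical invocations (the OSD id and the weight values here are made-up examples):

```shell
# "ceph osd crush reweight" changes the CRUSH weight stored in the CRUSH
# map (conventionally sized to the disk's capacity in TB), which shifts
# how much data CRUSH maps to the OSD permanently:
ceph osd crush reweight osd.3 3.64

# "ceph osd reweight" sets a 0..1 override on top of the CRUSH weight;
# it is reset to 1 if the OSD is marked out and then back in, so it is
# better suited to temporary corrections:
ceph osd reweight 3 0.8
```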
On 26 Ju
Hi,
during PG remapping, the cluster recovery process sometimes gets
stuck on PGs in the backfill_toofull state. The obvious solution is to
reweight the impacted OSD until we add new OSDs to the cluster. In
order to force the remapping process to complete as soon as possible, we try to inject
a higher value on "osd
Hello,
thanks for your answer to the question.
But even when there are fewer than 50 thousand objects, the latency is still very high. I
see, for the write ops on the bucket index object, that going from
"journaled_completion_queue" to "op_commit" costs 3.6 seconds; this means that
from "writing journal finished" to "op_commit"
On 06/30/2014 09:26 AM, Kostis Fardelas wrote:
Hi,
from my experience both "ceph osd crush reweight" and "ceph osd
reweight" will lead to CRUSH map changes and PGs remapping. So both
commands eventually redistribute data between OSDs. Is there any good
reason in terms of ceph performance best prac
Hello,
On Fri, 27 Jun 2014 11:00:58 -0700 Erich Weiler wrote:
> Hi Folks,
>
> We're going to spin up a ceph cluster with the following general specs:
>
> * Six 10Gb/s connected servers, each with 45 4TB disks in a JBOD
>
Interesting number of disks; what case/server is this?
> * Each disk is
Hi,
My ceph-rest-api is not starting. I have this error:
>> ceph-rest-api -c /etc/ceph/ceph.conf
Traceback (most recent call last):
File "/usr/bin/ceph-rest-api", line 59, in
rest,
File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 496, in
generate_app
addr, port =
On 06/30/2014 02:20 PM, alphaoumar.s...@orange.com wrote:
Hi,
My ceph-rest-api is not starting. I have this error:
ceph-rest-api -c /etc/ceph/ceph.conf
Traceback (most recent call last):
File "/usr/bin/ceph-rest-api", line 59, in
rest,
File "/usr/lib/python2.7/dist-packages/
Erik, I don't think we are building for FC19 anymore.
There are some dependencies that could not be met for Ceph in FC19 so we
decided to stop trying to get builds out for that.
On Sun, Jun 29, 2014 at 2:52 PM, Erik Logtenberg wrote:
> Nice work! When will the new RPMs be released on
> http://c
I assume that the credentials are correct, because the rest-api was running
before I sent that bad request.
On Mon, Jun 30, 2014 at 2:29 PM, Wido den Hollander wrote:
> On 06/30/2014 02:20 PM, alphaoumar.s...@orange.com wrote:
>
>> Hi,
>>
>> My ceph-rest-api is not starting. I have this err
Ah, okay I missed that.
So, what distributions/versions are supported then? I see that the FC20
part of the ceph repository (http://ceph.com/rpm/fc20/x86_64) doesn't
contain ceph itself, so I am assuming you'd have to use the ceph package
from FC20 itself, however they are still at 0.80.1:
ceph-0
It looks like we are actually missing packages even for FC20. I am
investigating this right now and
should have it fixed today.
On Mon, Jun 30, 2014 at 9:02 AM, Alfredo Deza wrote:
> Erik, I don't think we are building for FC19 anymore.
>
> There are some dependencies that could not be met for Ce
Hi,
After the upgrade to Firefly, I have some PGs stuck in the peering state.
I saw that 0.82 was out, so I tried to upgrade to solve my problem.
My three MDSes crashed, and some OSDs triggered a chain reaction that killed
other OSDs.
I think my MDSes will not start because their metadata is on the OSDs.
I have
Hi Alfredo and folks,
Could you have a look at this?
Does anyone else have any idea why I am getting this error?
Thanks in advance, I
2014-06-27 16:37 GMT+02:00 Iban Cabrillo :
> Hi Alfredo,
> This is the complete procedure:
>
>
> On OSD node:
>
> [ceph@ceph02 ~]$ sudo parted /dev/xvdb
>
> GN
It looks like that value isn't live-updateable, so you'd need to
restart after changing the daemon's config. Sorry!
Made a ticket: http://tracker.ceph.com/issues/8695
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Jun 30, 2014 at 12:41 AM, Kostis Fardelas wrote:
> Hi,
Directory sharding is even less stable than the rest of the MDS, but
if you need it, I have some hope that things will work. You just need
to set the "mds bal frag" option to "true". You can configure the
limits as well; see the options following:
https://github.com/ceph/ceph/blob/master/src/commo
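A minimal ceph.conf fragment for the option named above (the split-size tuning line is an assumption based on the options linked, not something the source spells out):

```ini
[mds]
mds bal frag = true
; assumed tuning knob: split a directory fragment once it holds this
; many entries (check the linked source for the full set of limits)
mds bal split size = 10000
```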
What's the backtrace from the crashing OSDs?
Keep in mind that as a dev release, it's generally best not to upgrade
to unnamed versions like 0.82 (but it's probably too late to go back
now).
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Jun 30, 2014 at 8:06 AM, Pierr
Umm...there are hooks for that, but they're for debug purposes only.
And running multiple MDSes *will* break something, in ways that
fragmenting the directories won't.
If you're dead set on this course, you can dig through the qa
directory for the MDS tests to see what commands it's running to forc
Hi all,
I finally succeeded.
Maybe somebody will find this interesting.
A script read the content from fuse-rbd files (I wonder, what is the actual use
case of fuse-rbd?) with "dd",
and, in case of a timeout (signalled by a background process), killed the entire fuse
daemon, remounted fuse-rbd, and resumed at
On Mon, Jun 30, 2014 at 11:22 AM, Iban Cabrillo wrote:
> Hi Alfredo and folk,
> Could you have a look at this?
> Someone else has any idea why i am getting this error?
>
> Thanks in advance, I
>
>
>
> 2014-06-27 16:37 GMT+02:00 Iban Cabrillo :
>
>> Hi Alfredo,
>> This is the complete procedur
On Mon, Jun 30, 2014 at 9:33 AM, Erik Logtenberg wrote:
> Ah, okay I missed that.
>
> So, what distributions/versions are supported then? I see that the FC20
> part of the ceph repository (http://ceph.com/rpm/fc20/x86_64) doesn't
> contain ceph itself, so I am assuming you'd have to use the ceph p
Hi Alfredo,
During this morning I purged the whole deployment.
I have prepared 4 SAN servers, each with 4 FC-attached disks (2.7 TB
per disk).
Tomorrow I will try to deploy a new installation, leaving the VMs
as mons and putting the OSDs on these physical servers.
The local
Well, at least for me it is live-updateable (0.80.1). It may be that
during recovery the OSDs are busy backfilling other PGs, so the stats are
not updated (because those PGs have not retried backfill since the setting change).
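One way to check whether an injected value actually took effect is to read it back from a daemon's admin socket. A sketch (the option name and osd.0 are placeholders; substitute whichever setting and daemon you changed):

```shell
# Inject the value at runtime across all OSDs:
ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'

# Read the running value back from one daemon's admin socket to confirm
# it was applied (rather than trusting cluster-level stats):
ceph daemon osd.0 config show | grep backfill_full
```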
On 2014.06.30 18:31, Gregory Farnum wrote:
> It looks like that value isn't live-upd
Ahhh now -that- is some useful information, thanks!
On 06/30/2014 07:57 PM, Alfredo Deza wrote:
> On Mon, Jun 30, 2014 at 9:33 AM, Erik Logtenberg wrote:
>> Ah, okay I missed that.
>>
>> So, what distributions/versions are supported then? I see that the FC20
>> part of the ceph repository (http:
Oh, you're right — I just ran a grep and didn't look closely enough.
It looks like once they're in that too_full state, they need to get
kicked by the OSD to try again though. I believe (haven't checked)
that that can happen if other backfills finish, but if none are
running and all the PGs needing
Just for reference, I've opened http://tracker.ceph.com/issues/8702
On 6/26/2014 10:18 PM, Brian Rak wrote:
My current workaround plan is to just upload both versions of the
file... I think this is probably the simplest solution with the least
possibility of breaking later on.
On 6/26/2014 6:
On 2014-06-16 13:16, lists+c...@deksai.com wrote:
I've just tried setting up the radosgw on centos6 according to
http://ceph.com/docs/master/radosgw/config/
While I can run the admin commands just fine to create users etc.,
making a simple wget request to the domain I set up returns a 500 due
Well, I was hoping for a reply from Inktank, but I'll describe the process
I plan to test:
Best Case:
Primary zone is down
Disable radosgw-agent in secondary zone
Update the region in the secondary to enable data and metadata logging
Update DNS/Load balancer to send primary traffic to secondary
Se
That sounds like you have some kind of odd situation going on. We only
use radosgw with nginx/tengine, so I can't comment on the Apache part of it.
My understanding is this:
You start ceph-radosgw, which creates a fastcgi socket somewhere (verify
this is created with lsof; there are some permis
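A quick sketch of the verification step described above (the socket path and the web-server user are assumptions; match them to your "rgw socket path" setting and your distribution):

```shell
# Confirm the radosgw process is holding a Unix socket open:
lsof -U | grep -i rgw

# Check the socket file exists and that the web server's user can
# write to it (user "apache" assumed for a CentOS setup):
ls -l /var/run/ceph/radosgw.sock
sudo -u apache test -w /var/run/ceph/radosgw.sock && echo writable
```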
You should check out Calamari (https://github.com/ceph/calamari), Inktank's
monitoring and administration tool.
I started before Calamari was announced, so I rolled my own using
Zabbix. It handles all the monitoring, graphing, and alerting in one tool.
It's kind of a pain to setup, but wo
RadosGW stripes data by default. Objects larger than 4MiB are broken up
into 4MiB chunks.
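As a back-of-envelope check of that default, the number of chunks an object occupies is just a ceiling division (the 10 MiB size below is a made-up example):

```shell
stripe=$((4 * 1024 * 1024))   # default 4 MiB chunk size mentioned above
size=$((10 * 1024 * 1024))    # example: a 10 MiB object
echo $(( (size + stripe - 1) / stripe ))   # ceiling division: 3 chunks
```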
On Wed, Jun 25, 2014 at 3:49 AM, Florent B wrote:
> Hi,
>
> Is it possible to get data striped with radosgw, as in RBD or CephFS ?
>
> Thank you
> ___
> ceph-us
On Jun 30, 2014, at 3:59 PM, baijia...@126.com wrote:
> Hello,
> thanks for your answer to the question.
> But even when there are fewer than 50 thousand objects, the latency is still very high. I
> see, for the write ops on the bucket index object, that going from
> "journaled_completion_queue" to "op_commit" costs 3.6 seco