Keep an eye on the new thread "OSD (and probably other settings) not
being picked up outside of the [global] section". You may be running
into something similar.
Regards
Mark
On 17/10/14 11:52, lakshmi k s wrote:
Thank you, Mark. Strangely, the Icehouse install that I have didn't seem to
have one.
Maybe I will try to update the repository and see if this resolves the issue? Thanks
for the help, guys.
James
-----Original Message-----
From: Ian Colle [mailto:ico...@redhat.com]
Sent: 16 October 2014 18:14
To: Loic Dachary
Cc: Support - Avantek; ceph-users
Subject: Re: [ceph-users] Error deploying C
samuel writes:
> Hi all, This issue is also affecting us (centos6.5-based icehouse) and,
> as far as I could read, it comes from the fact that the path
> /var/lib/nova/instances (or whatever configuration path you have in
> nova.conf) is not shared. Nova does not see this shared path and therefore
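For context, the usual way to make that path visible to every compute node is a shared filesystem; a minimal NFS sketch (the server name, export options, and the use of the default instances_path are assumptions, not details from the report above):

# /etc/exports on the NFS server (assumed export)
/var/lib/nova/instances *(rw,sync,no_root_squash)

# on each compute node
mount -t nfs nfs-server:/var/lib/nova/instances /var/lib/nova/instances

# nova.conf, [DEFAULT] section (this is the default value)
instances_path = /var/lib/nova/instances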
>> I assume you added more clients and checked that it didn't scale past
>> that?
Yes, correct.
>> You might look through the list archives; there are a number of
>> discussions about how and how far you can scale SSD-backed cluster
>> performance.
I have looked at those discussions before, particularly the
Hi,
>> With 0.86, the following options and disabling debugging can noticeably
>> improve performance.
>> osd enable op tracker = false
I think this one has been optimized by Somnath
https://github.com/ceph/ceph/commit/184773d67aed7470d167c954e786ea57ab0ce74b
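For reference, a ceph.conf sketch of the kind of tuning being discussed here; the exact list of debug subsystems worth silencing is an assumption, not something spelled out in this thread:

[osd]
osd enable op tracker = false
# 0/0 disables both the file log level and the in-memory log level
debug ms = 0/0
debug osd = 0/0
debug filestore = 0/0
debug journal = 0/0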
----- Original Mail -----
From: "Mark Wu"
To: "Gr
Decreasing the rbd debug level from 5 to 0 disables almost all of the debugging
and logging. It doesn't help.
ceph tell osd.* injectargs '--debug_rbd 0\/0'
ceph tell osd.* injectargs '--debug_objectcacher 0\/0'
ceph tell osd.* injectargs '--debug_rbd_replay 0\/0'
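If the aim is also to keep those levels across restarts, the same settings can be put in ceph.conf; a sketch, assuming the benchmark client reads this ceph.conf (these are librbd-side subsystems, hence the [client] section):

[client]
debug rbd = 0/0
debug objectcacher = 0/0
debug rbd_replay = 0/0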
2014-10-17 8:45 GMT+08:00 Sh
The client doesn't hit any bottleneck. I also tried running multiple
clients on different hosts. There's no change.
2014-10-17 14:36 GMT+08:00 Alexandre DERUMIER :
> Hi,
> >> Thanks for the detailed information, but I am already using fio with the rbd
> engine. About 4 volumes can reach the peak.
>
>
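For reference, a minimal fio job file for the rbd engine of the sort described above; the pool and image names are placeholders, and the image has to exist before the run:

# rbd.fio - run with: fio rbd.fio
# create the test image first, e.g.: rbd create fio-test --size 10240 --pool rbd
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
invalidate=0
rw=randwrite
bs=4k
iodepth=32
numjobs=1

[rbd-4k-randwrite]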
1. Why does an erasure coded pool not work with rbd?
2. I used the rados command to put a file into an erasure coded pool, then rm it (see
the sketch below). Why does the file remain on the OSD's backend fs all the time?
3. What is the best use case for an erasure coded pool?
4. The command 'rados ls' is to list objects; where are the objec
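A sketch of the put/rm workflow from question 2, with an assumed pool name and the default erasure profile:

# create an erasure coded pool (12 PGs, default profile)
ceph osd pool create ecpool 12 12 erasure

# store an object, remove it, then list what the pool still reports
rados -p ecpool put testobj /tmp/testfile
rados -p ecpool rm testobj
rados -p ecpool ls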
Hello List,
I'm having trouble with our radosgw. The "backup" bucket was mounted via
s3fs, but now I'm only getting a "transport endpoint not connected" when
trying to access the directory.
The strange thing is that different buckets show different behaviours in the
web browser:
https://example.com/
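When a FUSE mount reports "transport endpoint not connected", the usual recovery is to unmount and remount it; a sketch, with the bucket name, mountpoint, endpoint, and credentials file all taken as placeholders:

fusermount -u /mnt/backup
s3fs backup /mnt/backup \
    -o url=https://example.com \
    -o use_path_request_style \
    -o passwd_file=/etc/passwd-s3fs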
At least historically, high CPU usage and likely context switching and
lock contention have been the limiting factor during high IOPS workloads
on the test hardware at Inktank (and now RH). I ran benchmarks with a
parametric sweep of ceph parameters a while back on SSDs to see if
changing any
Sure Mark, I saw that thread last night. It will be interesting to see the
resolution.
Thanks,
Lakshmi.
On Friday, October 17, 2014 12:21 AM, Mark Kirkwood
wrote:
Keep an eye on the new thread "OSD (and probably other settings) not
being picked up outside of the [global] section". You ma
Hello Christian -
On a side note, I am facing similar issues with the Keystone flags on
0.80.5/0.80.6. If they are declared under the radosgw section, they are not picked
up, but if they are under the global section, OpenStack Keystone works like a
charm. I would really like to see a solution for this.
T
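For illustration, the two placements being compared above; the gateway section name and the particular rgw keystone options shown are assumptions:

# not picked up on 0.80.x according to the reports in this thread
[client.radosgw.gateway]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = ADMIN
rgw keystone accepted roles = Member, admin

# works when the same flags are placed here instead
[global]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = ADMIN
rgw keystone accepted roles = Member, admin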
I haven't used the libvirt pools too much. To me, they are fairly
confusing as support for them seems to vary based on what you are doing.
On 10/16/2014 2:45 PM, Dan Geist wrote:
Thanks, Brian. That helps a lot. I suspect that wasn't needed if the MON hosts
were defined within ceph.conf, but
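For reference, a libvirt RBD storage pool definition of the kind being discussed looks roughly like this; the pool name, monitor host, and secret UUID are placeholders, and the monitor is listed explicitly here:

<pool type="rbd">
  <name>ceph-rbd</name>
  <source>
    <name>rbd</name>
    <host name="mon1.example.com" port="6789"/>
    <auth username="libvirt" type="ceph">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>
    </auth>
  </source>
</pool>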
On 17/10/2014 00:39, Support - Avantek wrote:
> Maybe I will try to update the repository and see if this resolves the issue?
> Thanks for the help, guys.
>
> James
>
> -----Original Message-----
> From: Ian Colle [mailto:ico...@redhat.com]
> Sent: 16 October 2014 18:14
> To: Loic Dachary
> Cc: Suppo