On Sep 19, 2013, at 6:10 PM, Gregory Farnum
wrote:
> On Wed, Sep 18, 2013 at 11:43 PM, Dan Van Der Ster
> wrote:
>>
>> On Sep 18, 2013, at 11:50 PM, Gregory Farnum
>> wrote:
>>
>>> On Wed, Sep 18, 2013 at 6:33 AM, Dan Van Der Ster
>>> wrote:
Hi,
We just finished debugging a probl
Hi guys
Do you have any list of companies that use Ceph in production?
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
Hi Yehuda,
I did try bumping up pg_num on .rgw, .rgw.buckets, and .rgw.buckets.index
from 8 to 220 prior to writing to the list, but when I saw no difference in
performance I set it back to 8 (by creating new pools etc.)
One thing we have since noticed is that radosgw is validating tokens on
each req
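A hedged aside on the token-validation point above: radosgw has a Keystone
token cache that is controlled from ceph.conf. Whether it helps in this
particular setup is an assumption; the section name and value below are
illustrative only.

    [client.radosgw.gateway]
        rgw keystone token cache size = 10000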
On Thu, Sep 19, 2013 at 8:12 PM, Joao Eduardo Luis
wrote:
> On 09/19/2013 04:46 PM, Andrey Korolyov wrote:
>>
>> On Thu, Sep 19, 2013 at 1:00 PM, Joao Eduardo Luis
>> wrote:
>>>
>>> On 09/18/2013 11:25 PM, Andrey Korolyov wrote:
Hello,
Just restarted one of my mons after
On 09/20/2013 05:51 AM, Matt Thompson wrote:
Hi Yehuda,
I did try bumping up pg_num on .rgw, .rgw.buckets,
and .rgw.buckets.index from 8 to 220 prior to writing to the list but
when I saw no difference in performance I set back to 8 (by creating
new pools etc.)
Hi Matt,
You'll want to bump
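For reference, a minimal sketch of the kind of pg_num/pgp_num bump being
discussed; the pool name and target value are illustrative only.

    # raise the placement-group count of an existing pool
    ceph osd pool set .rgw.buckets pg_num 256
    # pgp_num should be raised to match once the new PGs have been created
    ceph osd pool set .rgw.buckets pgp_num 256

Note that pg_num can only be increased on an existing pool, which is
presumably why going back to 8 meant recreating the pools.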
Hi,
I'm using CephFS 0.67.3 as a backend for Hypertable and ElasticSearch.
Active reading/writing to CephFS causes uncontrolled OSD memory growth
and, in the final stage, swapping and server unavailability.
To keep the cluster in working condition I have to restart OSDs with
excessive memory co
On Fri, Sep 20, 2013 at 3:51 AM, Matt Thompson wrote:
> Hi Yehuda,
>
> I did try bumping up pg_num on .rgw, .rgw.buckets, and .rgw.buckets.index
> from 8 to 220 prior to writing to the list but when I saw no difference in
> performance I set back to 8 (by creating new pools etc.)
>
> One thing we
On Sep 19, 2013, at 3:43 PM, Mark Nelson wrote:
> If you set:
>
> osd pool default flag hashpspool = true
>
> Theoretically that will cause different pools to be distributed more randomly.
The name seems to imply that it should be settable per pool. Is that possible
now?
If set globally, do
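For reference, the suggested setting is a pool-creation default and lives in
the [global] section of ceph.conf, so it only affects pools created after it
is set; whether the flag can be toggled on an existing pool in this release is
exactly the open question above.

    [global]
        osd pool default flag hashpspool = true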
On Fri, Sep 20, 2013 at 6:40 AM, Serge Slipchenko
wrote:
> Hi,
>
> I'm using CephFS 0.67.3 as a backend for Hypertable and ElasticSearch.
> Active reading/writing to the cephfs causes uncontrolled OSD memory growth
> and at the final stage swapping and server unavailability.
What kind of memory g
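One hedged way to characterize that kind of growth, assuming the OSDs are
built against tcmalloc; the OSD id is a placeholder.

    # snapshot of heap usage as seen by tcmalloc
    ceph tell osd.0 heap stats
    # ask tcmalloc to hand freed memory back to the OS
    ceph tell osd.0 heap release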
Hello,
I'm trying to connect radosgw to Keystone; it seems it is all working, but it
won't create the bucket:
Error Swift Client:
DEBUG:root:HTTP PERF: 0.05760 seconds to PUT 10.10.10.51:80 /swift/v1/test)
DEBUG:swiftclient:REQ: curl -i http://10.10.10.51/swift/v1/test -X PUT -H
"Content-Length
On 17.09.2013 at 14:55, Wido den Hollander wrote:
> On 09/16/2013 11:29 AM, Nico Massenberg wrote:
>> On 16.09.2013 at 11:25, Wido den Hollander wrote:
>>
>>> On 09/16/2013 11:18 AM, Nico Massenberg wrote:
Hi there,
I have successfully set up a ceph cluster with a healthy statu
On Fri, Sep 20, 2013 at 4:03 AM, Rick Stokkingreef
wrote:
> Hello,
>
> I'm trying to connect radosgw to keystone, it seems it is all working but it
> won't create the bucket:
>
>
> Error Swift Client:
> DEBUG:root:HTTP PERF: 0.05760 seconds to PUT 10.10.10.51:80 /swift/v1/test)
> DEBUG:swiftclient
[ Re-added the list — please keep emails on there so everybody can benefit! ]
On Fri, Sep 20, 2013 at 12:24 PM, Serge Slipchenko
wrote:
>
>
>
> On Fri, Sep 20, 2013 at 5:59 PM, Gregory Farnum wrote:
>>
>> On Fri, Sep 20, 2013 at 6:40 AM, Serge Slipchenko
>> wrote:
>> > Hi,
>> >
>> > I'm using C
Hi Yehuda / Mark,
Thanks for the information! We will try keystone authentication again when
the next dumpling dot release is out.
As for "ceph cache", are you referring to "rgw_cache_enabled"? If so, we
don't have that set in our ceph.conf, so we should in theory be using it
already.
Regards,
Mat
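For reference, the option in question can also be set explicitly; a minimal
sketch, assuming the gateway section is named as below (the cache is enabled
by default).

    [client.radosgw.gateway]
        rgw cache enabled = true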
On Fri, Sep 20, 2013 at 1:50 PM, Matt Thompson wrote:
>
> Hi Yehuda / Mark,
>
> Thanks for the information! We will try keystone authentication again when
> the next dumpling dot release is out.
>
> As for "ceph cache", are you referring to "rgw_cache_enabled"? If so, we
> don't have that set
Sorry, not trying to repost or bump my thread, but I think I can restate my
question here for better clarity. I am confused about the "--cluster"
argument used when "ceph-deploy mon create" invokes "ceph-mon" on the target
system. I always get a failure at this point when running "ceph-dep
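A hedged illustration of the relationship being asked about: the --cluster
name passed to ceph-deploy must match the config file the daemon will look
for; the cluster and host names below are placeholders.

    # writes {cluster}.conf locally, here mycluster.conf
    ceph-deploy --cluster mycluster new mon1
    # on the target, ceph-mon is started with --cluster mycluster and
    # therefore expects /etc/ceph/mycluster.conf to exist
    ceph-deploy --cluster mycluster mon create mon1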
Mike,
So I do have to ask, where would the extra latency be coming from if all my
OSDs are on the same machine that my test VM is running on? I have tried
every SSD tweak in the book. The primary concern I see is with
read performance of sequential I/Os in the 4-8K range. I would expect
Hi,
I am running Ceph on a 3-node cluster, and each of my server nodes is running 10
OSDs, one for each disk. I have one admin node, and all the nodes are connected
with 2 x 10G networks. One network is for the cluster and the other is
configured as the public network.
All the OSD journals are on SSDs.
I sta
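For context, a minimal sketch of how the two networks described above are
usually expressed in ceph.conf; the subnets are placeholders.

    [global]
        public network = 192.168.1.0/24
        cluster network = 192.168.2.0/24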
Hi,
A couple of things that might be worth trying:
use multiple containers in swift-bench. Newer versions should support
this. Also, if this is a test cluster, you may want to try the ceph
wip-6286 branch as we have a rather major performance improvement in it
when dealing with small object
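A rough sketch of the kind of swift-bench job file being discussed, with the
container key spelled as it appears later in the thread; the auth endpoint and
credentials are placeholders, and the exact key name may differ between
swift-bench versions.

    [bench]
    auth = http://radosgw-host/auth/v1.0
    user = test:tester
    key = testing
    concurrency = 64
    object_size = 4096
    num_objects = 10000
    num_container = 100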
Hi Mark,
Thanks for your quick response.
I tried adding 'num_container = 100' to the job file and found that the
performance actually decreases with that option. I am getting around 1K fewer
IOPS after adding it. Another observation is that in order to get back the
earlier IOPS I need to
On 09/20/2013 05:49 PM, Somnath Roy wrote:
Hi Mark,
Thanks for your quick response.
I tried adding the 'num_container = 100' in the job file and found that the
performance actually decreasing with that option. I am getting around 1K less
iops after putting this. Another observation is that in o
Hi Mark,
It's a test cluster and I will try with the new release.
As I mentioned in the mail, I think the number of RADOS client instances is the
limitation. Could you please let me know how many RADOS client instances the
radosgw daemon is instantiating? Is it configurable somehow?
Thanks & Regard
I thought I'd just throw this in there, as I've been following this
thread: dd also has an 'iflag' directive just like the 'oflag'.
I don't have a deep, offhand recollection of the caching mechanisms at play
here, but assuming you want a solid synchronous / non-cached read, you
should probably spe
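A hedged example of the kind of non-cached sequential read being suggested;
the input path and sizes are placeholders.

    # 1 GiB of sequential 4K reads with O_DIRECT on the input side
    dd if=/dev/vdb of=/dev/null bs=4k count=262144 iflag=direct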
Thanks Jamie,
I tried that too, but got similar results. The issue looks to possibly be
latency, but everything is running on one server, so logically I would
think there would be no latency; according to this, though, there may be
something causing slow results. See Co-Residency
http://cep
The iflag addition should help with at least having more accurate reads via
dd, but in terms of actually testing performance, have you tried sysbench
or bonnie++?
I'd be curious how things change with multiple io threads, as dd isn't
necessarily a good performance investigation tool (you're rather
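For reference, a sketch of a multi-threaded sysbench fileio run of the sort
suggested above; file size, thread count, and runtime are illustrative.

    sysbench --test=fileio --file-total-size=8G prepare
    sysbench --test=fileio --file-total-size=8G --file-test-mode=seqrd \
             --num-threads=8 --max-time=60 run
    sysbench --test=fileio --file-total-size=8G cleanup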
Thanks Jamie,
I have not tried bonnie++. I was trying to keep it to sequential IO for
comparison since that is all Rados bench can do. I did do a full io test
in a Windows VM using SQLIO. I have both read/write sequential/random for
4/8/64K blocks from that test. I also have access to a Dell E