I've found nothing related in the Apache logs; I believe it's something related to radosgw.
Has anyone else tested the same thing on their own radosgw?
Regards
On Mon, Oct 28, 2013 at 11:52 PM, Mark Nelson wrote:
I'm not really an apache expert, but you could try looking at the apache
and rgw logs and see if you can trace where the 2nd request is hanging
up. Also, just to be sure, both clients can download data
independently, just not together?
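Something along these lines in ceph.conf usually gives enough detail to see where the second GET stalls (a rough sketch; the client section name below is just an example, match it to however your radosgw instance is actually named):

    [client.radosgw.gateway]
        debug rgw = 20
        debug ms = 1
        log file = /var/log/ceph/radosgw.log

On the apache side, setting "LogLevel debug" in the rgw vhost and watching the error log while both downloads are running should show where the second request sits.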
Mark
On 10/28/2013 02:54 PM, hadi golestani wrote:
I'm running Ubuntu 12 on all my nodes and I've just installed every package with default configs, like what is mentioned in the Ceph quick installation guide.
Is anyone else experiencing the same issue?
Regards
On Mon, Oct 28, 2013 at 11:09 PM, Mark Nelson wrote:
Strange! I'm not sure I've actually ever seen two concurrent downloads
fail to work properly. Is there anything unusual about the setup?
Mark
Sounds like an issue with your apache config. How did you install your
apache? What distribution are you running on? Are you using it as
mpm-worker? Do you have non-default radosgw settings?
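On Ubuntu the quickest way to check which MPM apache is running is something like this (assuming the stock apache2 packages):

    apache2ctl -V | grep -i mpm
    dpkg -l 'apache2-mpm-*'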
Yehuda
On Mon, Oct 28, 2013 at 11:58 AM, hadi golestani wrote:
My test is so simple:
On a cluster with 3 MON, 4 OSD, 1 RGW I can't download a big file from two different clients concurrently; one of them will wait till the other finishes downloading it.
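Roughly, the test boils down to something like this (host, bucket and object names are placeholders, and it assumes the object is readable by both clients):

    curl -s -o /dev/null http://rgw.example.com/mybucket/bigfile &
    curl -s -o /dev/null http://rgw.example.com/mybucket/bigfile &
    wait

The second transfer doesn't start moving data until the first one has finished.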
Regards
On Mon, Oct 28, 2013 at 10:19 PM, Mark Nelson wrote:
Hi Hadi,
Can you tell me a bit about the tests you are doing and seeing poor
performance on?
Mark
On 10/28/2013 01:32 PM, hadi golestani wrote:
Any answer to this question? I'm hitting almost the same issue with radosgw,
Read performance is not fine with radosgw
Regards
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Friday, September 27, 2013 11:50 AM
To: Somnath Roy
Cc: Yehuda Sadeh; ceph-users@lists.ceph.com; Anirban Ray;
ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] Scaling radosgw module
Hi Somnath,
With SSDs, you almost certainly are going to be running into bottlenecks
To: Somnath Roy
Cc: Mark Nelson; ceph-users@lists.ceph.com; Anirban Ray; ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] Scaling radosgw module
You specify the relative performance, but what are the actual numbers that you're seeing? How many GETs per second, and how many PUTs per second do you see?
On Thu, Sep 26, 2013 at 4:00 PM, Somnath Roy wrote:
> Mark,
> One more thing, all my t
From: Somnath Roy
Sent: Thursday, September 26, 2013 3:59 PM
To: Mark Nelson
Cc: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org; Anirban Ray
Subject: RE: [ceph-users] Scaling radosgw module
Nope...With one client hitting the radosgw, the daemon cpu usage is going up till 400-450%, i.e. taking on average 4 cores. In one client scenario
Somnath
-Original Message-
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Thursday, September 26, 2013 3:50 PM
To: Somnath Roy
Cc: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org; Anirban Ray
Subject: Re: [ceph-users] Scaling radosgw module
Ah, that's very good to know!
And RGW
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Thursday, September 26, 2013 3:33 PM
To: Somnath Roy
Cc: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org; Anirban Ray
Subject: Re: [ceph-users] Scaling radosgw module
It's kind of annoying, but it may be worth setting up a 2nd RGW server and seeing if having two copies of the benchmark going at the
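For reference, a second gateway is mostly a second client section in ceph.conf plus its own key and its own apache/fastcgi frontend; a rough sketch, with example names and paths only:

    [client.radosgw.gw2]
        host = gateway2
        keyring = /etc/ceph/ceph.client.radosgw.gw2.keyring
        rgw socket path = /var/run/ceph/ceph.radosgw.gw2.fastcgi.sock
        log file = /var/log/ceph/radosgw.gw2.log

    # create the key and start the second daemon
    ceph auth get-or-create client.radosgw.gw2 osd 'allow rwx' mon 'allow rw' \
        -o /etc/ceph/ceph.client.radosgw.gw2.keyring
    radosgw -n client.radosgw.gw2

Then point the benchmark clients at both gateways and compare the aggregate numbers against a single gateway.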
~6x slower than the single rados bench output!
Any other suggestions?
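(For reference, the single-client rados bench baseline mentioned above is the kind of number you get from something like the following; pool name, runtime and thread count are placeholders:)

    rados bench -p testpool 60 write -t 32 --no-cleanup
    rados bench -p testpool 60 seq -t 32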
Thanks & Regards
Somnath
-Original Message-
From: Somnath Roy
Sent: Friday, September 20, 2013 4:08 PM
To: 'Mark Nelson'
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Scaling radosgw module
Hi Mark,
It's a test cluster and I will try with the new release.
As I mentioned in the mail, I think the number of rados client instances is the limitation. Could you please
Thanks & Regards
Somnath
-Original Message-
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Friday, September 20, 2013 4:02 PM
To: Somnath Roy
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Scaling radosgw module
On 09/20/2013 05:49 PM, Somnath Roy wrote:
> Hi Mark,
>
Thanks & Regards
Somnath
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson
Sent: Friday, September 20, 2013 3:03 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Scaling radosgw module
Hi,
A couple of things that might be worth trying:
use multiple containers in swift-bench. Newer versions should support
this. Also, if this is a test cluster, you may want to try the ceph
wip-6286 branch as we have a rather major performance improvement in it
when dealing with small object
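A minimal swift-bench run against radosgw looks roughly like this (endpoint, credentials and option values are placeholders, and the exact option name for spreading load across containers may differ between swift-bench versions):

    # swift-bench.conf
    [bench]
    auth = http://rgw.example.com/auth/v1.0
    user = benchuser:swift
    key = secret
    concurrency = 64
    object_size = 4096
    num_objects = 1000
    num_gets = 1000
    num_containers = 20

    swift-bench -c 64 swift-bench.conf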
Hi,
I am running Ceph on a 3 node cluster and each of my server nodes is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 x 10G networks. One network is for the cluster and the other one is configured as the public network.
All the OSD journals are on SSDs.
I sta
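(The network split described above usually corresponds to ceph.conf entries like these; the subnets and journal path are only examples:)

    [global]
        public network = 192.168.10.0/24     # client/radosgw traffic
        cluster network = 192.168.20.0/24    # OSD replication and backfill

    [osd]
        osd journal = /dev/disk/by-partlabel/journal-$id    # per-OSD journal partition on SSD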