Thanks Mark.
I cannot connect to my hosts; I will do the check and get back to you tomorrow.
Thanks,
Guang
On 2013-10-24, at 9:47 PM, Mark Nelson wrote:
Hi Mark, Greg and Kyle,
Sorry for the late response, and thanks for providing directions for me to look at.
We have exactly the same setup for the OSDs and pool replicas (and I even tried to
create the same number of PGs within the small cluster); however, I can still
reproduce this constantly.
This ...
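(For reference, one way to confirm that the two clusters really are configured
identically is to compare the pool settings directly; the pool name below is a
placeholder, and the PG count and replica size shown are only examples, not the
actual values from either cluster:

  # replica count and PG count of the pool being benchmarked
  ceph osd pool get <pool> size
  ceph osd pool get <pool> pg_num
  # or create a comparison pool with an explicit PG count, then set its replica size
  ceph osd pool create testpool 2048 2048
  ceph osd pool set testpool size 3
)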
Hi Kyle and Greg,
I will get back to you with more details tomorrow, thanks for the response.
Thanks,
Guang
On 2013-10-22, at 9:37 AM, Kyle Bader wrote:
Thanks Mark for the response. My comments are inline...
From: Mark Nelson
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rados bench result when increasing OSDs
Besides what Mark and Greg said, it could be due to additional hops through
network devices. What network devices are you using, what is the network
topology, and does your CRUSH map reflect the network topology?
On Oct 21, 2013 9:43 AM, "Gregory Farnum" wrote:
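(To illustrate Kyle's point, a quick way to check whether the CRUSH map mirrors
the physical topology is to inspect the hierarchy and, if needed, add rack-level
buckets; the rack and host names here are hypothetical examples:

  # print the CRUSH hierarchy as the cluster currently sees it
  ceph osd tree
  # declare a rack bucket, attach it under the default root, and move a host into it
  ceph osd crush add-bucket rack1 rack
  ceph osd crush move rack1 root=default
  ceph osd crush move host1 rack=rack1

Whether that hierarchy matches the real cabling and switch layout is exactly what
Kyle is asking about.)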
Dear ceph-users,
Recently I deployed a Ceph cluster with RadosGW, going from a small one (24 OSDs) to
a much bigger one (330 OSDs).
When using rados bench to test the small cluster (24 OSDs), it showed that the
average latency was around 3 ms (the object size is 5K), while for the larger one
(330 OSDs), the av ...
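(For anyone who wants to reproduce the comparison, a minimal sketch of the
benchmark described above; the pool name, run time, and concurrency are
assumptions, and only the 5K object size comes from the report:

  # write 5 KB objects for 60 seconds with 16 concurrent ops, keeping them for a read pass
  rados bench -p <pool> 60 write -b 5120 -t 16 --no-cleanup
  # sequential read pass against the objects written above
  rados bench -p <pool> 60 seq -t 16

rados bench prints the average, minimum, and maximum latency for the run, which is
where a figure like the ~3 ms on the small cluster would come from.)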