> >> ...directory tree in that folder might be all we need.
> >>
> >> Thanks again for all the help.
> >>
> >> Shain
> >>
> >>
> >>
> >> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
> >> smi...@npr.org | 202.513.3649
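
A minimal sketch of the "walk the directory tree" idea above, assuming a hypothetical mount point of /mnt/rbd/archive (the real path is not given in the thread); run periodically, it keeps the dentries and inodes for that folder warm on the client so later listings return quickly:

  # /etc/cron.d/warm-rbd-listing: walk and stat everything under the folder
  # every 15 minutes so an interactive 'ls -l' finds the metadata already cached.
  */15 * * * * root find /mnt/rbd/archive -ls > /dev/null 2>&1
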
Sent: ... 2015 7:57 PM
To: ceph-us...@ceph.com
Cc: Shain Miley
Subject: Re: [ceph-users] rbd directory listing performance issues
On Mon, 12 Jan 2015 13:49:28 +0000 Shain Miley wrote:
> Hi,
> I am just wondering if anyone has any thoughts on the questions
> below...I would like to order some additional ...
>> Shain
>>
>>
>>
>> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>> smi...@npr.org | 202.513.3649
>>
>> ________
>> From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of ...
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
From: Christian Balzer [ch...@gol.com]
Sent: Tuesday, January 06, 2015 7:34 PM
To: ceph-us...@ceph.com
Cc: Shain Miley
Subject: Re: [ceph-users] rbd directory listing performance issues
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Shain Miley
[smi...@npr.org]
Sent: Tuesday, January 06, 2015 8:16 PM
To: Christian Balzer; ceph-us...@ceph.com
Subject: Re: [ceph-users] rbd directory listing performance issues
Christian,
Each of the OSD server nodes ...
Hello,
On Tue, 6 Jan 2015 15:29:50 +0000 Shain Miley wrote:
> Hello,
>
> We currently have a 12 node (3 monitor+9 OSD) ceph cluster, made up of
> 107 x 4TB drives formatted with xfs. The cluster is running ceph version
> 0.80.7:
>
I assume journals on the same HDD then.
How much memory per node?
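
For anyone reading the archive, the two things being asked about here can be checked roughly as follows on an OSD node; the paths assume a default FileStore layout under /var/lib/ceph:

  # Where each OSD journal lives: a symlink or file inside the OSD data
  # directory means the journal shares the spindle with the data.
  ls -l /var/lib/ceph/osd/ceph-*/journal

  # RAM per node; the common rule of thumb for FileStore OSDs of this era
  # is on the order of 1 GB per OSD daemon, plus room for page cache.
  free -g
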
> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
> smi...@npr.org | 202.513.3649
>
> ________________
> From: Robert LeBlanc [rob...@leblancnet.us]
> Sent: Tuesday, January 06, 2015 1:57 PM
> To: Shain Miley
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] rbd directory listing performance issues
smi...@npr.org | 202.513.3649
From: Christian Balzer [ch...@gol.com]
Sent: Tuesday, January 06, 2015 7:34 PM
To: ceph-us...@ceph.com
Cc: Shain Miley
Subject: Re: [ceph-users] rbd directory listing performance issues
Hello,
On Tue, 6 Jan 2015 15:29:50 +0000 Shain Miley wrote:
>> ...away...even in cases when right after that I do an 'ls -l' and it takes a
>> while.
>>
>> Thanks,
>>
>> Shain
>>
>> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>> smi...@npr.org | 202.513.3649
>>
>
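
A rough way to confirm that the delay described above comes from the per-file stat() calls rather than from reading the directory itself (the directory path is hypothetical):

  cd /mnt/rbd/archive/some-large-dir

  # Plain ls only reads the directory entries.
  time ls > /dev/null

  # ls -l also stats every entry; each uncached inode has to be fetched
  # from the RBD image, which is where the long pauses come from.
  time ls -l > /dev/null
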
What fs are you running inside the RBD?
On Tue, Jan 6, 2015 at 8:29 AM, Shain Miley wrote:
> Hello,
>
> We currently have a 12 node (3 monitor+9 OSD) ceph cluster, made up of 107 x
> 4TB drives formatted with xfs. The cluster is running ceph version 0.80.7:
>
> Cluster health:
> cluster 504b5794-34bd-44e7-a8c3-0494cf800c23
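
For completeness, the filesystem inside a kernel-mapped RBD image can be checked with standard tools; /dev/rbd0 below is only an example device name:

  # List the images currently mapped by the kernel client.
  rbd showmapped

  # Report the filesystem type on the mapped device.
  blkid /dev/rbd0
  lsblk -f /dev/rbd0
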
smi...@npr.org | 202.513.3649
From: Robert LeBlanc [rob...@leblancnet.us]
Sent: Tuesday, January 06, 2015 1:57 PM
To: Shain Miley
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] rbd directory listing performance issues
I would think that the RBD mounter would cache the directory ...
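
The cache in question here is presumably the kernel dentry/inode cache on the client that has the RBD mapped; how aggressively it is reclaimed can be tuned with vm.vfs_cache_pressure. A hedged sketch only; the value 50 is an example, not a recommendation made in this thread:

  # Show the current setting (the kernel default is 100).
  sysctl vm.vfs_cache_pressure

  # Lower values make the kernel prefer keeping dentries and inodes
  # over page cache when memory is reclaimed.
  sysctl -w vm.vfs_cache_pressure=50
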
___
> From: Robert LeBlanc [rob...@leblancnet.us]
> Sent: Tuesday, January 06, 2015 1:27 PM
> To: Shain Miley
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] rbd directory listing performance issues
>
> What fs are you running inside the RBD?
>
> On Tue, Jan 6, 2015 at 8:29 AM, Shain Miley wrote:
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
From: Robert LeBlanc [rob...@leblancnet.us]
Sent: Tuesday, January 06, 2015 1:27 PM
To: Shain Miley
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] rbd directory listing performance issues
What fs are you running inside the RBD?
Hello,
We currently have a 12 node (3 monitor+9 OSD) ceph cluster, made up of 107 x
4TB drives formatted with xfs. The cluster is running ceph version 0.80.7:
Cluster health:
cluster 504b5794-34bd-44e7-a8c3-0494cf800c23
health HEALTH_WARN crush map has legacy tunables
monmap e1: 3 mons
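
Unrelated to the listing speed, but since it appears in every quoted health output: the "crush map has legacy tunables" HEALTH_WARN on 0.80.x can be inspected and, once all clients (including any kernel rbd clients) are new enough, cleared roughly as below. Changing tunables triggers data movement, so this is a sketch, not a recommendation:

  # Show the CRUSH tunables currently in effect.
  ceph osd crush show-tunables

  # Switch to the recommended profile for the running release; expect a
  # rebalance, and make sure older kernel clients support the new profile.
  ceph osd crush tunables optimal
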