What are you talking about when you say you have an MDS in a region? AFAIK
only radosgw supports multisite and regions.
It sounds like you have a cluster spread out over a geographical area,
and this will have a massive impact on latency.
What is the latency between all servers in the cluster?
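As a rough check, round-trip times between the nodes can be measured with plain ping; the
hostnames below are only placeholders for your MDS and OSD hosts:

    # round-trip latency from this host to each of the other cluster nodes (hostnames are examples)
    for host in mds-region-a mds-region-b osd-node-1 osd-node-2; do
        ping -c 10 -q "$host" | tail -n 1
    done

Every CephFS metadata operation is a round trip to the active MDS, so even a few milliseconds
of WAN latency adds up quickly for ls- and rsync-style workloads.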
The time got reduced when an MDS from the same region became active.
We have one MDS in each region. The OSD nodes are in one region and the active MDS is in
another region, hence the delay.
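(For reference, which MDS daemon currently holds the active rank, and on which host it runs,
can be checked with the commands below; the exact output format varies a bit between releases.)

    # filesystem ranks: which MDS is active, which are standby
    ceph fs status
    # short one-line summary of the MDS states
    ceph mds stat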
On Tue, Jul 17, 2018 at 6:23 PM, John Spray wrote:
> On Tue, Jul 17, 2018 at 8:26 AM Surya Bala
> wrote:
> >
> > Hi folk
On Tue, Jul 17, 2018 at 8:26 AM Surya Bala wrote:
>
> Hi folks,
>
> We have a production cluster with 8 nodes and each node has 60 disks of size
> 6TB each. We are using CephFS and the FUSE client with a global mount point. We are
> doing rsync from our old server to this cluster; rsync is slow compared
Hi,
could you share your MDS hardware specifications?
Regards,
Brenno Martinez
SUPCD / CDENA / CDNIA
(41) 3593-8423
----- Original Message -----
From: "Daniel Baumann"
To: "Ceph Users"
Sent: Tuesday, July 17, 2018 6:45:25
Subject: Re: [ceph-users] ls operation is too slow in cephfs
Previously we had multi-active MDS, but at that time we got slow/stuck
requests when multiple clients were accessing the cluster. So we decided to have a
single active MDS and keep all the others as standby.
When we got this issue, MDS trimming was going on. When we checked the last
ops:
{
    "ops": [
        {
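(For anyone reproducing this: an ops dump like the fragment above can be pulled from the MDS
admin socket on the host running the active MDS; "mymds" below is a placeholder for your
daemon name.)

    # operations currently in flight on this MDS
    ceph daemon mds.mymds dump_ops_in_flight
    # recently completed slow operations, if your release supports it
    ceph daemon mds.mymds dump_historic_ops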
On 07/17/2018 11:43 AM, Marc Roos wrote:
> I had a similar thing with doing the ls. Increasing the cache limit helped
> with our test cluster.
same here; additionally we also had to use more than one MDS to get good
performance (currently 3 MDS plus 2 stand-by per FS).
Regards,
Daniel
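(To make both suggestions concrete on a Luminous cluster: the filesystem name "cephfs" and the
8 GiB cache value below are only examples to adapt to your hardware.)

    # raise the MDS cache limit at runtime on all MDS daemons (value in bytes, ~8 GiB here)
    ceph tell mds.* injectargs '--mds_cache_memory_limit=8589934592'
    # to make it persistent, put the same value in the [mds] section of ceph.conf:
    #   mds cache memory limit = 8589934592

    # allow more than one active MDS (on Luminous the flag has to be enabled before raising max_mds)
    ceph fs set cephfs allow_multimds true
    ceph fs set cephfs max_mds 3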
Subject: Re: [ceph-users] ls operation is too slow in cephfs
Thanks for the reply, Anton.
CPU core count - 40
RAM - 250GB
We have a single active MDS, Ceph version Luminous 12.2.4.
The default PG number is 64 and we are not changing the PG count while creating pools.
We have 8 servers in total, each with 60 OSDs of 6TB size.
The 8 servers are split into 2 per region. Crush map i
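(In case it helps others following the thread, the numbers above can be double-checked with
commands like these; the pool and filesystem names are placeholders.)

    ceph osd pool get cephfs_data pg_num   # PG count of the data pool
    ceph fs get cephfs | grep max_mds      # how many active MDS the filesystem allows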
You need to give us more details about your OSD setup and the hardware
specification of the nodes (CPU core count, RAM amount).
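(A quick way to collect those numbers on each node, in case it is useful:)

    lscpu | grep '^CPU(s):'   # CPU core count
    free -h                   # installed RAM
    ceph osd df tree          # per-host OSD count, size and utilization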
On 2018.07.17. 10:25, Surya Bala wrote:
Hi folks,
We have a production cluster with 8 nodes and each node has 60 disks of
size 6TB each. We are using CephFS and the FUSE client wi
Hi folks,
We have a production cluster with 8 nodes and each node has 60 disks of size
6TB each. We are using CephFS and the FUSE client with a global mount point. We
are doing rsync from our old server to this cluster; rsync is slow compared
to a normal server.
When we do 'ls' inside some folder, which has