Hello Vlad,

Ceph clients always talk to the primary OSD of each PG. If you create
one CRUSH rule for building1 and one for building2, each of which picks
its first OSD from the local building, reads from that pool will always
be served within the same building (as long as the cluster is healthy),
and only write requests get replicated to the other building.
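
For example, the rule for the pool used by clients in building1 could
look roughly like this (just a sketch; the bucket names
building1/building2 and the rule name are assumptions, adjust them to
your own CRUSH tree):

rule building1_primary {
    id 1
    type replicated
    min_size 1
    max_size 10
    # primary copy on a host in the local building
    step take building1
    step chooseleaf firstn 1 type host
    step emit
    # remaining replicas on hosts in the other building
    step take building2
    step chooseleaf firstn -1 type host
    step emit
}

The rule for building2 just swaps the two "step take" buckets. You can
then assign the rule to the pool the building1 clients use, e.g.
"ceph osd pool set <pool> crush_rule building1_primary".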

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


2018-11-09 4:54 GMT+01:00 Vlad Kopylov <vladk...@gmail.com>:
> I am trying to test replicated ceph with servers in different buildings, and
> I have a read problem.
> Reads from one building go to OSDs in the other building and vice versa,
> making reads slower than writes! That makes reads as slow as the slowest
> node.
>
> Is there a way to
> - disable parallel reads (so the client reads only from the OSD node
> where the mon is);
> - or give each client a read restriction per OSD;
> - or strictly specify the read OSD on mount;
> - or have a node read-delay cap (for example, if a node's timeout is
> larger than 2 ms, do not use that node for reads while other replicas
> are available);
> - or the ability to place clients on the CRUSH map, so Ceph understands
> that an OSD in, for example, the same data center as the client has
> preference, and pulls data from it/them?
>
> Mounting with the kernel client, latest Mimic.
>
> Thank you!
>
> Vlad
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
