Hi Greg,
Thank you for your concern.
It seems the problem was caused by ceph-mds. While the rest of the Ceph
components had been upgraded to 0.61.8, ceph-mds was still at 0.56.7.
I've updated ceph-mds and the cluster stabilised within a few hours.
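For anyone hitting the same mismatch, a quick way to compare the installed
versions is to ask each binary directly (the package query at the end is
Debian/Ubuntu-specific):

    # Print the version each daemon binary was built from:
    ceph --version
    ceph-mon --version
    ceph-osd --version
    ceph-mds --version

    # On Debian/Ubuntu, compare the installed package versions:
    dpkg -l | grep ceph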
Kind regards, Serge
On 08/30/2013 08:22 PM, Gregory Farnum wrote:
According to the doc, the pg numbers should be enlarged for better read/write
balance if the osd number is increased. But it seems the pg number cannot be
changed on the fly; it's fixed when the pool is created. Am I right?
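For context, a minimal sketch of where the pg count enters the picture; the
pool name and numbers here are placeholders, not values from the thread:

    # pg_num (and pgp_num) are given as arguments when a pool is created:
    ceph osd pool create testpool 256 256

    # the current value of an existing pool can be read back with:
    ceph osd pool get testpool pg_num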
On 2013-09-02 05:19, Fuchs, Andreas (SwissTXT) wrote:
Reading through the documentation and talking to several people leads to the
conclusion that it's best practice to place the journal of an OSD instance on
a separate SSD disk to speed up writes.
But is this true? I have 3 new Dell servers available for testing with 12 x 4
TB SATA and 2 x 10
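As an illustration only (the device path below is hypothetical, not taken from
the thread), the per-OSD journal location can be pointed at an SSD partition
in ceph.conf:

    # Append a per-OSD section; /dev/sdm1 stands in for a dedicated
    # SSD partition used as the journal for osd.0:
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd.0]
        osd journal = /dev/sdm1
    EOF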
I'd like to extend this discussion a little bit.
We were told to do 1 OSD per physical disk and, to make sure that we will
never ever lose any data, to make 3 replicas.
So raw disk capacity has to be divided by 3 to get usable capacity.
Named vendors tell us that they get 80% usable capacity from
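As a rough worked example based on the numbers mentioned above (3 servers,
12 x 4 TB each, 3 replicas), ignoring filesystem and journal overhead:

    raw_tb=$((3 * 12 * 4))      # 144 TB raw across the three servers
    usable_tb=$((raw_tb / 3))   # 48 TB usable with 3 replicas
    echo "${raw_tb} TB raw -> ${usable_tb} TB usable"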
On 09/01/2013 03:35 AM, Kasper Dieter wrote:
Hi,
under
http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/
I found a good description about RBD cache parameters.
You're looking at an old branch there - the current description is a bit
clearer that this doesn't affect rbd.ko.
Hi,
under
http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/
I found a good description about RBD cache parameters.
But I am missing some information:
- by whom are these parameters evaluated, and
- when does this happen?
My assumption:
- the rbd_cache* parameters will be read by the MONs
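For what it's worth, my understanding is that these options are read on the
client side by librbd when an image is opened, not by the MONs. A minimal
sketch of a client-side ceph.conf fragment (values are examples only, and per
the reply above none of this applies to the rbd.ko kernel client):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client]
        rbd cache = true
        # example values: 32 MB cache, 24 MB dirty limit
        rbd cache size = 33554432
        rbd cache max dirty = 25165824
        rbd cache writethrough until flush = true
    EOF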