Re: [ceph-users] Upgraded Bobtail to Cuttlefish and unable to mount cephfs

2013-09-01 Thread Serge Slipchenko
Hi Greg, Thank you for your concern. It seems that the problem was caused by ceph-mds. While the rest of the Ceph modules had been upgraded to 0.61.8, ceph-mds was still 0.56.7. I've updated ceph-mds and the cluster stabilised within a few hours. Kind regards, Serge On 08/30/2013 08:22 PM, Gregory Farnum wrote:
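
For anyone hitting the same mismatch: a quick, hedged way to check and fix it is to compare what the installed binaries report and then upgrade only the MDS package; the package-manager and restart commands below assume a Debian-style install with the sysvinit ceph script.

    # report the versions of the installed binaries
    ceph --version
    ceph-mds --version

    # upgrade only the MDS package, then restart the MDS daemons on this host
    sudo apt-get install --only-upgrade ceph-mds
    sudo service ceph restart mds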

[ceph-users] Is it possible to change the pg number after adding new osds?

2013-09-01 Thread Da Chun Ng
According to the docs, the pg number should be enlarged for better read/write balance if the osd number is increased. But it seems the pg number cannot be changed on the fly; it's fixed when the pool is created. Am I right?
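
A sketch of the relevant commands (the pool name is a placeholder, and whether an existing pool's pg_num may be raised depends on the release, so verify against the version actually running):

    # create a pool with an explicit placement-group count
    ceph osd pool create mypool 128

    # on releases that allow PG splitting, raise pg_num first, then pgp_num
    ceph osd pool set mypool pg_num 256
    ceph osd pool set mypool pgp_num 256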

Re: [ceph-users] To put journals to SSD or not?

2013-09-01 Thread Martin Rudat
On 2013-09-02 05:19, Fuchs, Andreas (SwissTXT) wrote: Reading through the documentation and talking to several people leads to the conclusion that it's best practice to place the journal of an OSD instance on a separate SSD disk to speed up writes. But is this true? I have 3 new Dell servers
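
For sizing such a journal, the documentation's rule of thumb is journal size = 2 * expected throughput * filestore max sync interval; a rough worked example, where the 100 MB/s per-disk throughput and the 5 s sync interval are assumed figures rather than measurements:

    osd journal size ~= 2 * (expected throughput * filestore max sync interval)
                      = 2 * (100 MB/s * 5 s)
                      = 1000 MB, i.e. roughly 1 GB of journal per OSD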

[ceph-users] To put journals to SSD or not?

2013-09-01 Thread Fuchs, Andreas (SwissTXT)
Reading through the documentation and talking to several people leads to the conclusion that it's best practice to place the journal of an OSD instance on a separate SSD disk to speed up writes. But is this true? I have 3 new Dell servers for testing available with 12 x 4 TB SATA and 2 x 10
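
A minimal ceph.conf sketch for pointing OSD journals at SSD partitions (device names and the size are placeholders, not recommendations):

    [osd]
        # in MB; only used when the journal is a file, a raw partition is used whole
        osd journal size = 10000

    [osd.0]
        osd journal = /dev/sdm1   # SSD partition dedicated to osd.0's journal
    [osd.1]
        osd journal = /dev/sdm2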

Re: [ceph-users] some newbie questions...

2013-09-01 Thread Fuchs, Andreas (SwissTXT)
I'd like to extend this discussion a little bit. We were told to run 1 OSD per physical disk and, to make sure that we never lose any data, to keep 3 replicas. So raw disk capacity has to be divided by 3 to get usable capacity. Named vendors tell us that they get 80% usable capacity from
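
Making that arithmetic concrete for the hardware mentioned above (3 servers with 12 x 4 TB disks each) at a replica count of 3:

    raw capacity    = 3 servers * 12 disks * 4 TB = 144 TB
    usable capacity = 144 TB / 3 replicas         = 48 TB  (about 33% of raw)

An 80% figure is only comparable once you know what redundancy level it assumes.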

Re: [ceph-users] from whom and when will rbd_cache* be read

2013-09-01 Thread Josh Durgin
On 09/01/2013 03:35 AM, Kasper Dieter wrote: Hi, under http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/ I found a good description of the RBD cache parameters. You're looking at an old branch there - the current description is a bit clearer that this doesn't affect rbd.ko

[ceph-users] from whom and when will rbd_cache* be read

2013-09-01 Thread Kasper Dieter
Hi, under http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/ I found a good description of the RBD cache parameters. But I am missing information on who evaluates these parameters and when this happens. My assumption: the rbd_cache* parameters will be read by MONs
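
A hedged sketch of where these settings normally live: they are client-side librbd options, typically placed in the [client] section of the ceph.conf read by the host running librbd (e.g. a QEMU/KVM host) and picked up when an image is opened; the numeric values below are illustrative only.

    [client]
        rbd cache = true
        rbd cache size = 33554432        # bytes (32 MB)
        rbd cache max dirty = 25165824   # dirty bytes allowed before writeback
        rbd cache target dirty = 16777216
        rbd cache max dirty age = 1.0    # seconds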