On 18/07/15 12:53, Steve Thompson wrote:
> Ceph newbie (three weeks).
>
> Ceph 0.94.2, CentOS 6.6 x86_64, kernel 2.6.32. Twelve identical OSD's
> (1 TB each), three MON's, one active MDS and two standby MDS's. 10GbE
> cluster network, 1GbE public network. Using CephFS on a single client
> via the 4.1.1 kernel from elrepo; using rsync to copy data ...
Congratulations on getting your cluster up and running. Many of us have
seen the distribution issue on smaller clusters. More PGs and more OSDs
help: a 100-OSD configuration balances better than a 12-OSD system.
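
If it is useful, this is roughly how I check balance on Hammer (0.94);
the pool name cephfs_data below is just an example, substitute your own:

  ceph osd df                             # per-OSD utilization and variance (added in Hammer)
  ceph df                                 # per-pool usage
  ceph osd pool get cephfs_data pg_num    # current PG count for the pool

If pg_num turns out to be low, the usual rule of thumb is about
(OSDs x 100) / replica count, rounded up to a power of two, which for
12 OSDs with 3x replication works out to roughly 512. Note that raising
pg_num is one-way (you cannot shrink it later) and pgp_num has to follow:

  ceph osd pool set cephfs_data pg_num 512
  ceph osd pool set cephfs_data pgp_num 512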
Ceph tries to protect your data, so a single full OSD shuts off writes
for the whole cluster. Ceph CRUSH places data pseudo-randomly, so some
spread in per-OSD utilization is normal, and with only 12 OSDs (and few
PGs) that spread can be wide enough for one OSD to hit the full ratio
well before the cluster as a whole is anywhere near full.
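
To stay ahead of that, I watch the nearfull/full warnings and nudge the
heaviest OSDs down a little. The 110 threshold below is just an example
value, and the ratios shown are the stock defaults:

  ceph health detail                     # lists nearfull/full OSDs once thresholds are crossed
  ceph osd reweight-by-utilization 110   # lower weights on OSDs more than 10% over average use

  # stock defaults (set from ceph.conf at cluster creation):
  #   mon osd nearfull ratio = 0.85
  #   mon osd full ratio     = 0.95

reweight-by-utilization is a fairly blunt tool, so check "ceph osd df"
again afterwards; individual OSDs can always be adjusted back by hand
with "ceph osd reweight <osd-id> <weight>".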