Ceph newbie (three weeks).
Ceph 0.94.2, CentOS 6.6 x86_64, kernel 2.6.32. Twelve identical OSDs (1
TB each), three MONs, one active MDS and two standby MDSes. 10GbE cluster
network, 1GbE public network. Using CephFS on a single client via the
4.1.1 kernel from elrepo; using rsync to copy data
Congratulations on getting your cluster up and running. Many of us
have seen uneven data distribution on smaller clusters. More PGs and
more OSDs help: a 100-OSD configuration balances better than a 12-OSD
system.
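
As a rough illustration, the usual guideline is total PGs of roughly
(OSDs x 100) / replica count, rounded up to a power of two. A
back-of-the-envelope sketch (not an official sizing tool, just the
rule of thumb):

    import math

    def suggested_pg_count(num_osds, pool_size=3, pgs_per_osd=100):
        """Rule-of-thumb PG count: (OSDs * ~100) / replicas,
        rounded up to the next power of two."""
        raw = num_osds * pgs_per_osd / pool_size
        return 2 ** math.ceil(math.log2(raw))

    # 12 OSDs with 3x replication -> 512 PGs; 100 OSDs -> 4096 PGs.
    print(suggested_pg_count(12), suggested_pg_count(100))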
Ceph tries to protect your data, so a single full OSD shuts off
writes across the whole cluster. Ceph CRUSH
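
To put numbers on that: with the stock full ratio of around 0.95, the
first OSD to cross it blocks writes, so the more uneven the placement,
the less of your raw capacity you can actually use. A rough sketch of
the effect (0.95 is the usual default; substitute your own settings):

    def usable_capacity_tb(osd_sizes_tb, utilization_skew, full_ratio=0.95):
        """Estimate how much data fits before the *most* loaded OSD hits
        the full ratio. utilization_skew = max OSD usage / average usage."""
        total = sum(osd_sizes_tb)
        return total * full_ratio / utilization_skew

    # 12 x 1 TB OSDs: perfectly even vs. the busiest OSD 30% over average.
    print(usable_capacity_tb([1.0] * 12, 1.0))   # ~11.4 TB
    print(usable_capacity_tb([1.0] * 12, 1.3))   # ~8.8 TB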
Hi All,
I’m doing some testing on the new High/Low speed cache tiering flushing and I’m
trying to get my head around the effect that changing these two settings has on
the flushing speed. When setting osd_agent_max_ops to 1, I can get up to a
20% improvement before the osd_agent_max_high_ops v
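
For context, here is my rough mental model of how the agent picks between
the two limits. This is a simplified sketch from my reading of the docs,
not the actual OSD code, and the numbers I have plugged in for the dirty
targets and op limits are placeholders rather than authoritative defaults:

    def agent_flush_limit(dirty_ratio,
                          target_dirty=0.4,       # cache_target_dirty_ratio (placeholder)
                          target_dirty_high=0.6,  # cache_target_dirty_high_ratio (placeholder)
                          max_ops=2,              # osd_agent_max_ops, low speed (placeholder)
                          max_high_ops=8):        # osd_agent_max_high_ops, high speed (placeholder)
        """Simplified model: below the dirty target the agent idles, between
        the two targets it flushes with at most max_ops in flight, and above
        the high target it switches to max_high_ops."""
        if dirty_ratio >= target_dirty_high:
            return "high-speed", max_high_ops
        if dirty_ratio >= target_dirty:
            return "low-speed", max_ops
        return "idle", 0

    print(agent_flush_limit(0.5))   # ('low-speed', 2)
    print(agent_flush_limit(0.7))   # ('high-speed', 8)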