On 02/17/2015 11:13 AM, Mohamed Pakkeer wrote:
Hi Joao,

We followed your instruction to create the store dump:

  ceph-kvstore-tool /var/lib/ceph/mon/ceph-FOO/store.db list > store.dump

For the above store's location, let's call it $STORE:

  for m in osdmap pgmap; do
    for k in first_committed last_committed; do
      ceph-kvstore-tool $STORE get $m $k
    done
  done
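A rough way to see which prefixes dominate the resulting dump (this assumes
each line of store.dump starts with the key prefix, e.g. "osdmap / 1234"):

  # count keys per prefix, largest groups first
  awk '{print $1}' store.dump | sort | uniq -c | sort -rn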
On 02/16/2015 12:57 PM, Mohamed Pakkeer wrote:
Hi ceph-experts,

We are getting "store is getting too big" warnings on our test cluster.
The cluster is running the Giant release and is configured with an EC pool
to test CephFS.

  cluster c2a97a2f-fdc7-4eb5-82ef-70c52f2eceb1
   health HEALTH_WARN too few pgs [...]
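To see how big each monitor's store actually is on disk (assuming the default
mon data directory layout):

  du -sh /var/lib/ceph/mon/ceph-*/store.db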
On 12/10/2014 07:30 PM, Kevin Sumner wrote:
The mons have grown another 30GB each overnight (except for 003?), which is
quite worrying. I ran a little bit of testing yesterday after my post, but not
a significant amount.

I wouldn't expect compact on start to help this situation based on the name,
since we don't (shouldn't?) restart the mons [...]
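If restarting the mons is not an option, compaction can also be triggered on a
running monitor; a minimal sketch, assuming a mon id of FOO:

  ceph tell mon.FOO compact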
Maybe you can enable "mon_compact_on_start = true" when restarting the mon;
it will compact the data.
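A minimal ceph.conf sketch for that, assuming it should apply to all monitors:

  [mon]
      mon compact on start = true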
On Wed, Dec 10, 2014 at 6:50 AM, Kevin Sumner wrote:
> Hi all,
>
> We recently upgraded our cluster to Giant from [...]. Since then, we've been
> driving load tests against CephFS. However, we're getting "store is getting
> too big" warnings [...]
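The "store is getting too big" warning is driven by mon_data_size_warn
(15 GB by default); the value in effect can be checked over the admin socket,
again assuming a mon id of FOO:

  ceph daemon mon.FOO config get mon_data_size_warn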