Hi,

I have an old Ceph cluster that I recently upgraded from Luminous to
Nautilus.  After the upgrade to Nautilus I decided it was time to convert
the OSDs to BlueStore.
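
In case it's relevant, the conversion was done the usual way, one OSD at a
time, roughly as follows (the OSD id and device here are just placeholders):

# ceph osd out <id>            (then waited for data to migrate off)
# systemctl stop ceph-osd@<id>
# ceph-volume lvm zap /dev/<device> --destroy
# ceph osd destroy <id> --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/<device> --osd-id <id>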

Before the conversion the cluster was healthy, but afterwards it reports a HEALTH_WARN:

# ceph health detail
HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1
subtrees have overcommitted pool target_size_ratio
POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool
target_size_bytes
    Pools ['data', 'metadata', 'rbd', 'images', 'locks'] overcommit
available storage by 1.244x due to target_size_bytes    0  on pools []
POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool
target_size_ratio
    Pools ['data', 'metadata', 'rbd', 'images', 'locks'] overcommit
available storage by 1.244x due to target_size_ratio 0.000 on pools []

I had originally set a target_size_ratio of 0.85 on the images pool and then
reduced it to 0 in the hope of making the warning go away.  The cluster seems
to be running fine; I just can't figure out what the actual problem is or how
to clear the message.  I also restarted the monitors this morning in the hope
of fixing it.  Does anyone have any ideas?
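
For reference, these are essentially the commands I used to set and then
clear the ratio (images being the pool in question), plus the autoscaler
status check, in case that output would help:

# ceph osd pool set images target_size_ratio 0.85
# ceph osd pool set images target_size_ratio 0
# ceph osd pool autoscale-status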

Thanks in advance


-- 
Joe Ryner
Associate Director
Center for the Application of Information Technologies (CAIT) -
http://www.cait.org
Western Illinois University - http://www.wiu.edu


P: (309) 298-1804
F: (309) 298-2806