[ceph-users] Ceph cluster in error state (full) with raw usage 32% of total capacity

2017-08-09 Thread Mandar Naik
0-9-122_ruleset {"rule_id": 1, "rule_name": "ip-10-0-9-122_ruleset", "ruleset": 1, "type": 1, "min_size": 1, "max_size": 10, "steps": [{"op": "take",
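
The fragment above is cut off mid-rule. As a minimal sketch, this is how such a per-host CRUSH rule is normally dumped and roughly what a complete rule looks like; the "take" target and the chooseleaf step below are assumptions, not recovered from the truncated excerpt:

    # Dump the rule referenced in the excerpt by name.
    ceph osd crush rule dump ip-10-0-9-122_ruleset

    # Illustrative shape of a complete per-host rule (the "take" item and the
    # chooseleaf step are assumed, since the excerpt stops at "take"):
    # {
    #     "rule_id": 1,
    #     "rule_name": "ip-10-0-9-122_ruleset",
    #     "ruleset": 1,
    #     "type": 1,
    #     "min_size": 1,
    #     "max_size": 10,
    #     "steps": [
    #         { "op": "take", "item": -5, "item_name": "ip-10-0-9-122" },
    #         { "op": "chooseleaf_firstn", "num": 0, "type": "osd" },
    #         { "op": "emit" }
    #     ]
    # }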

Re: [ceph-users] Ceph cluster in error state (full) with raw usage 32% of total capacity

2017-08-10 Thread Mandar Naik
peter.malo...@brockmann-consult.de> wrote:
> I think a `ceph osd df` would be useful.
>
> And how did you set up such a cluster? I don't see a root, and you have each osd in there more than once... is that even possible?
>
> On 08/10/17 08:46, Mandar Naik wrote:
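
The request above is for standard diagnostics. A minimal sketch of how that information is usually gathered (command set only, no cluster-specific values assumed):

    # Per-OSD utilisation (SIZE, USE, AVAIL, %USE, VAR, PGS), laid out along
    # the CRUSH tree; a single full OSD stands out even at low raw usage.
    ceph osd df tree

    # CRUSH hierarchy; an OSD listed under more than one bucket, or the
    # absence of a root, shows up here.
    ceph osd tree

    # Overall and per-pool usage.
    ceph df detail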

Re: [ceph-users] Ceph cluster in error state (full) with raw usage 32% of total capacity

2017-08-16 Thread Mandar Naik
Hi, I just wanted to send a friendly reminder about this issue. I would appreciate it if someone could help me out here. Also, please let me know if any more information is required. On Thu, Aug 10, 2017 at 2:41 PM, Mandar Naik wrote:
> Hi Peter,
> Thanks a lot for the reply. Pleas

Re: [ceph-users] Ceph cluster in error state (full) with raw usage 32% of total capacity

2017-08-16 Thread Mandar Naik
present...
> What's most probably happening is that a pool (or several pools) is using those same OSDs, and the requests to those PGs are also getting blocked because the disk is full. This means that some (or all) of the remaining OSDs are waiting for that one to complete some
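
A hedged sketch of how that explanation can be verified, assuming the ID of the full OSD is known (osd.3 below is only a placeholder):

    # Which OSDs are flagged full/nearfull and which PGs are affected.
    ceph health detail

    # PGs mapped to the full OSD; the pool number prefix of each PG id
    # (e.g. 5.1a belongs to pool 5) shows which pools share that OSD.
    ceph pg ls-by-osd 3

    # Map pool numbers to names and check per-pool usage.
    ceph osd lspools
    ceph df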