This is the output from the first node; the other two nodes hang.
{ "name": "1",
"rank": 0,
"state": "probing",
"election_epoch": 19038,
"quorum": [],
"outside_quorum": [
"1"],
"extra_probe_peers": [],
"monmap": { "epoch": 1,
"fsid": "c5344dc8-b390-420a-bc1c-8b3ba4d9d5eb"
Hi,
it's a Cuttlefish bug, which should be fixed in the next point release very
soon.
Olivier
On Sunday, 2 June 2013 at 18:51 +1000, Bond, Darryl wrote:
> Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
> The cluster was upgraded to cuttlefish last week and had been running on
> bobtail for a few months.
On Sun, 2 Jun 2013, Bond, Darryl wrote:
> Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
> The cluster was upgraded to cuttlefish last week and had been running on
> bobtail for a few months.
>
> How big can I expect /var/lib/ceph/mon to get, and what influences its size?
>
On 02.06.2013 10:51, Bond, Darryl wrote:
> Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
> The cluster was upgraded to cuttlefish last week and had been running on
> bobtail for a few months.
>
> How big can I expect /var/lib/ceph/mon to get, and what influences its size?
Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
The cluster was upgraded to cuttlefish last week and had been running on
bobtail for a few months.
How big can I expect /var/lib/ceph/mon to get, and what influences its size?
It is at 11G now; I'm not sure how fast it has been growing.
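A minimal sketch of one way to track how fast the mon store is growing: sample the size of the mon data directory at intervals and print the delta. The path /var/lib/ceph/mon and the hourly interval are assumptions, not values from this thread.

import os
import time

MON_DIR = "/var/lib/ceph/mon"   # assumed default mon data path
INTERVAL = 3600                 # seconds between samples (assumption)

def dir_size(path):
    """Sum the sizes of all regular files under path, in bytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

prev = None
while True:
    size = dir_size(MON_DIR)
    if prev is None:
        print("mon store: %.1f GB" % (size / 1e9))
    else:
        print("mon store: %.1f GB (%+.0f MB since last sample)" % (
            size / 1e9, (size - prev) / 1e6))
    prev = size
    time.sleep(INTERVAL)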