Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
The cluster was upgraded to cuttlefish last week and had been running on
bobtail for a few months.
How big can I expect /var/lib/ceph/mon to get, and what influences its size?
It is at 11G now; I'm not sure how fast it has been growing.
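For tracking how fast it grows, the monitor's data directory can be sampled directly; a minimal sketch, assuming the default /var/lib/ceph/mon layout in which each mon keeps its leveldb store in a store.db subdirectory:

```shell
# Sample the size of each monitor's data directory
# (path taken from the post; per-mon subdirectories are assumed)
du -sh /var/lib/ceph/mon/*

# On cuttlefish the bulk of the directory is the leveldb store itself
du -sh /var/lib/ceph/mon/*/store.db
```

Running this from cron and diffing the numbers would show the growth rate over a day or two.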
On 02.06.2013 10:51, Bond, Darryl wrote:
> Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
> The cluster was upgraded to cuttlefish last week and had been running on
> bobtail for a few months.
>
> How big can I expect /var/lib/ceph/mon to get, and what influences its size?
On Sun, 2 Jun 2013, Bond, Darryl wrote:
> Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
> The cluster was upgraded to cuttlefish last week and had been running on
> bobtail for a few months.
>
> How big can I expect /var/lib/ceph/mon to get, and what influences its size?
>
Hi,
It's a Cuttlefish bug, which should be fixed in the next point release very
soon.
Olivier
On Sunday, 02 June 2013 at 18:51 +1000, Bond, Darryl wrote:
> Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
> The cluster was upgraded to cuttlefish last week and had been running on
> bobtail for a few months.
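Until that point release lands, the workaround that circulated for the store
growth was to compact the monitor's leveldb store. The exact option name below
is an assumption on my part; please verify it against the release notes for
your version:

```
[mon]
    ; assumed workaround: compact the leveldb store each time the mon starts
    mon compact on start = true
```

There is also an online variant, "ceph tell mon.<id> compact", which avoids
restarting the monitor; again, check that your release supports it.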
On Wed, May 29, 2013 at 04:16:14PM +0200, w sun wrote:
> Hi Wolfgang,
>
> Can you elaborate on the issue with 1.5 and libvirt? I wonder whether it will
> impact usage with Grizzly. I did a quick compile of 1.5 with RBD support
> enabled, and so far it seems to be OK for OpenStack with a few simple tests.
Hi,
I'm trying to start a Postgres cluster on VMs with a second disk mounted from
Ceph (rbd - kvm).
I started some writes (pgbench initialisation) on 8 VMs and the VMs froze.
Ceph reported slow requests on 1 OSD. I restarted this OSD to clear the
slow requests, and the VMs hung permanently.
Is this a normal situation after clust
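For symptoms like the above, the usual first step is to see which OSDs the
slow requests are pinned to before restarting anything. These are standard
ceph CLI calls, though the exact health wording varies by release:

```
# Show which requests are slow and which OSDs they involve
ceph health detail

# Stream cluster log events while reproducing the pgbench load
ceph -w
```

If the same OSD keeps showing up, its disk and network are worth checking
before blaming the guests.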