jan.zel...@id.unibe.ch writes:
> Hi,
>
> as I had the same issue in a little virtualized test environment (3 x 10 GB LVM
> volumes), I would like to understand the 'weight' thing.
> I did not find any user-friendly explanation for that kind of problem.
>
> The only explanation I found is on
> http://ceph.com/docs/master/rados/operations/crus
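For what it's worth, a CRUSH weight nominally reflects an OSD's capacity in TB, so tooling that derives the weight from disk size can assign a (near) zero weight to a 10 GB volume, and CRUSH then never selects that OSD for placement. A hedged sketch of checking and adjusting the weights (the OSD ids and the 0.01 value are illustrative, not taken from your cluster):

```shell
# List OSDs with their CRUSH weights; an OSD with WEIGHT 0
# will never be chosen for data placement
ceph osd tree

# Give each small OSD a non-zero CRUSH weight. The weight is
# nominally capacity in TB (0.01 ~= 10 GB), but any equal
# non-zero value works for a uniform test cluster.
ceph osd crush reweight osd.0 0.01
ceph osd crush reweight osd.1 0.01
ceph osd crush reweight osd.2 0.01

# Watch the PGs recover
ceph -s
```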
Hi,

On 03/20/2015 01:58 AM, houguanghua wrote:
> Dear all,
>
> Ceph 0.72.2 is deployed on three hosts, but the cluster's status is
> HEALTH_WARN. The status is as follows:
>
> # ceph -s
>     cluster e25909ed-25d9-42fd-8c97-0ed31eec6194
>      health HEALTH_WARN 768 pgs degraded; 768 pgs stuck unclean; recovery 2/3
> objects degraded (66.667%)
>      monmap e3
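A note on reading that output: "2/3 objects degraded (66.667%)" suggests each object has only one of its three replicas placed, i.e. CRUSH cannot find enough distinct hosts/OSDs for the other two copies. A hedged sketch of commands to narrow this down (the pool name 'rbd' is a placeholder; substitute your own):

```shell
# Show which PGs are stuck and their acting sets
ceph health detail

# Inspect the CRUSH tree: zero-weight OSDs or missing hosts
# explain why replicas cannot be placed
ceph osd tree

# Check the replication factor; with size 3 and host-level
# separation, every one of the three hosts must take a replica
ceph osd pool get rbd size     # 'rbd' is a placeholder pool name

# For experimentation only: drop the replica count to 2
ceph osd pool set rbd size 2
```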