You might try temporarily increasing the backfill allowance params so that
the data can move around more quickly. Given the cluster is idle, it's
definitely hitting those limits. ;)
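For example -- just a sketch, assuming the knobs I have in mind are
osd_max_backfills and osd_recovery_max_active (pick numbers that suit your
disks), injected at runtime on all OSDs:

  ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'

  # once you're back to HEALTH_OK, revert to your version's defaults,
  # which you can check on an OSD node with e.g.
  ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery_max_active'
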
-Greg

On Saturday, January 3, 2015, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:

> I just added 4 OSDs to my 2 OSD "cluster" (2 nodes, now with 3 OSDs per
> node).
>
> Given it's the weekend and the cluster is not in use, I've set them all to
> weight 1, but it looks like it's going to take a while to rebalance ... :)
>
> Is having them all at weight 1 the fastest way to get back to health, or
> is it causing contention?
>
> Current health:
>
> ceph -s
>     cluster f67ef302-5c31-425d-b0fe-cdc0738f7a62
>      health HEALTH_WARN 227 pgs backfill; 2 pgs backfilling; 97 pgs
> degraded; 29 pgs recovering; 68 pgs recovery_wait; 97 pgs stuck degraded;
> 326 pgs stuck unclean; recovery 30464/943028 objects degraded (3.230%);
> 727189/943028 objects misplaced (77.112%); mds cluster is degraded; mds 1
> is laggy
>      monmap e9: 3 mons at {0=
> 10.10.10.240:6789/0,1=10.10.10.241:6789/0,2=10.10.10.242:6789/0},
> election epoch 770, quorum 0,1,2 0,1,2
>      mdsmap e212: 1/1/1 up {0=1=up:replay(laggy or crashed)}
>      osdmap e1474: 6 osds: 6 up, 6 in
>       pgmap v828583: 512 pgs, 4 pools, 1073 GB data, 282 kobjects
>             2072 GB used, 7237 GB / 9310 GB avail
>             30464/943028 objects degraded (3.230%); 727189/943028 objects
> misplaced (77.112%)
>                  186 active+clean
>                  227 active+remapped+wait_backfill
>                   68 active+recovery_wait+degraded
>                    2 active+remapped+backfilling
>                   29 active+recovering+degraded
> recovery io 24639 kB/s, 6 objects/s
>
>
> thanks,
>
> --
> Lindsay
>