On Tue, Feb 7, 2017 at 12:15 PM, Patrick McGarry <pmcga...@redhat.com>
wrote:

> Moving this to ceph-user
>
> On Mon, Feb 6, 2017 at 3:51 PM, nigel davies <nigdav...@gmail.com> wrote:
> > Hey
> >
> > I am helping to run a small Ceph cluster with a two-node setup.
> >
> > We have recently bought a 3rd storage node, and management want to
> > increase the replication from two to three.
> >
> > As soon as I changed the pool size from 2 to 3, the cluster goes into
> > warning.
>
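The size change itself was presumably done with something like the following
(the pool name is just a placeholder):

$ceph osd pool set <pool-name> size 3

With only two storage hosts holding OSDs, a replication size of 3 usually
cannot be satisfied when the CRUSH rule separates replicas by host, which
would explain the undersized/degraded PGs. The outputs below should confirm
that.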

Can you please share the output of the following command in a pastebin:

$ceph osd dump | grep -i pool

along with the decompiled crushmap.txt:

$ceph osd getcrushmap -o /tmp/crushmap
$crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
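
As a rough guide (illustrative only, not taken from your cluster), the
interesting fields in the pool dump are size, min_size and crush_ruleset,
e.g. a line of the form:

pool 1 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 ...

and in crushmap.txt, the rule referenced by that ruleset, which by default
places each replica on a different host (your actual rule may differ):

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

If the new node's OSDs are not yet under a host bucket in the CRUSH tree, a
size-3 pool has nowhere to put its third copy, so its PGs stay
active+undersized+degraded. Your numbers look consistent with that: the 512
undersized PGs would match a single pool with pg_num 512, and
19162 - 2 x 6801 = 5560, i.e. the degraded object count equals exactly the
extra copies implied by raising one pool from size 2 to 3. It would also help
to see:

$ceph osd tree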


> >
> >      health HEALTH_WARN
> >             512 pgs degraded
> >             512 pgs stuck unclean
> >             512 pgs undersized
> >             recovery 5560/19162 objects degraded (29.016%)
> >             election epoch 50, quorum 0,1
> >      osdmap e243: 20 osds: 20 up, 20 in
> >             flags sortbitwise
> >       pgmap v79260: 2624 pgs, 3 pools, 26873 MB data, 6801 objects
> >             54518 MB used, 55808 GB / 55862 GB avail
> >             5560/19162 objects degraded (29.016%)
> >                 2112 active+clean
> >                  512 active+undersized+degraded
> >
> > The cluster is not recovering by itself; any help on this would be greatly
> > appreciated.
> >
> >
>
>
>
> --
>
> Best Regards,
>
> Patrick McGarry
> Director Ceph Community || Red Hat
> http://ceph.com  ||  http://community.redhat.com
> @scuttlemonkey || @ceph