What's the fix for people running Precise (12.04)? I believe I see the
same issue with Quantal (12.10) as well.


On Thu, Apr 18, 2013 at 1:56 PM, Gregory Farnum <g...@inktank.com> wrote:

> Seeing this go by again, it's simple enough to provide a quick
> answer/hint: setting the tunables does get you a better distribution
> of data, but the reason they're optional to begin with is that older
> clients don't support them. In this case that's the kernel client
> being run, so it returns an error when mounting.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
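
For anyone who wants to check which tunables a cluster is actually
carrying, they show up at the top of the decompiled CRUSH map. A minimal
sketch, assuming ceph and crushtool from the same release are available on
an admin host (paths are only examples):

  ceph osd getcrushmap -o /tmp/crushmap            # export the binary CRUSH map
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt  # decompile it to text
  grep '^tunable' /tmp/crushmap.txt                # non-legacy tunables, if any, are listed here

  ceph osd crush tunables legacy                   # revert so older kernel clients can mount again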
>
>
> On Thu, Apr 18, 2013 at 12:51 PM, John Wilkins <john.wilk...@inktank.com>
> wrote:
> > Bryan,
> >
> > It seems you got crickets with this question. Did you get any further?
> > I'd like to add it to my upcoming CRUSH troubleshooting section.
> >
> >
> > On Wed, Apr 3, 2013 at 9:27 AM, Bryan Stillwell <bstillw...@photobucket.com>
> > wrote:
> >>
> >> I have two test clusters running Bobtail (0.56.4) and Ubuntu Precise
> >> (12.04.2).  The problem I'm having is that I'm not able to get either
> >> of them into a state where I can both mount the filesystem and have
> >> all the PGs in the active+clean state.
> >>
> >> It seems that on both clusters I can get them into a 100% active+clean
> >> state by setting "ceph osd crush tunables bobtail", but when I try to
> >> mount the filesystem I get:
> >>
> >> mount error 5 = Input/output error
> >>
> >>
> >> However, if I set "ceph osd crush tunables legacy" I can mount both
> >> filesystems, but then some of the PGs are stuck in the
> >> "active+remapped" state:
> >>
> >> # ceph -s
> >>    health HEALTH_WARN 29 pgs stuck unclean; recovery 5/1604152 degraded
> >> (0.000%)
> >>    monmap e1: 1 mons at {a=172.16.0.50:6789/0}, election epoch 1, quorum 0 a
> >>    osdmap e10272: 20 osds: 20 up, 20 in
> >>     pgmap v1114740: 1920 pgs: 1890 active+clean, 29 active+remapped, 1
> >> active+clean+scrubbing; 3086 GB data, 6201 GB used, 3098 GB / 9300 GB
> >> avail; 232B/s wr, 0op/s; 5/1604152 degraded (0.000%)
> >>    mdsmap e420: 1/1/1 up {0=a=up:active}
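
(For reference, the sequence described above looks roughly like this; the
monitor address is the one from the ceph -s output, and the mount point and
secretfile path are only placeholders.)

  ceph osd crush tunables bobtail                  # all PGs go active+clean
  mount -t ceph 172.16.0.50:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/etc/ceph/admin.secret
  # -> mount error 5 = Input/output error (the older kernel client rejects the newer tunables)

  ceph osd crush tunables legacy                   # the mount succeeds again...
  mount -t ceph 172.16.0.50:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/etc/ceph/admin.secret
  ceph -s                                          # ...but some PGs stay stuck in active+remapped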
> >>
> >>
> >> Is anyone else seeing this?
> >>
> >> Thanks,
> >> Bryan
> >
> >
> >
> >
> > --
> > John Wilkins
> > Senior Technical Writer
> > Inktank
> > john.wilk...@inktank.com
> > (415) 425-9599
> > http://inktank.com
> >
> >
>



-- 

Bryan Stillwell
SENIOR SYSTEM ADMINISTRATOR

E: bstillw...@photobucket.com
O: 303.228.5109
M: 970.310.6085

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
