Bryan,

It seems you got crickets on this question. Did you get any further with it?
I'd like to add it to my upcoming CRUSH troubleshooting section.


On Wed, Apr 3, 2013 at 9:27 AM, Bryan Stillwell
<bstillw...@photobucket.com> wrote:

> I have two test clusters running Bobtail (0.56.4) and Ubuntu Precise
> (12.04.2).  The problem I'm having is that I'm not able to get either
> of them into a state where I can both mount the filesystem and have
> all the PGs in the active+clean state.
>
> It seems that I can get both clusters into a 100% active+clean state
> by setting "ceph osd crush tunables bobtail", but when I try to mount
> the filesystem I get:
>
> mount error 5 = Input/output error
>
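> (A rough check, assuming the kernel cephfs client is being used rather
> than ceph-fuse: the bobtail tunables need a fairly recent kernel on the
> client, so right after a failed mount it's worth noting the client's
> kernel version and looking in dmesg for a feature mismatch reported by
> libceph; the exact wording varies by kernel.)
>
> # uname -r
> # dmesg | tail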
>
> However, if I set "ceph osd crush tunables legacy" I can mount both
> filesystems, but then some of the PGs are stuck in the
> "active+remapped" state:
>
> # ceph -s
>    health HEALTH_WARN 29 pgs stuck unclean; recovery 5/1604152 degraded (0.000%)
>    monmap e1: 1 mons at {a=172.16.0.50:6789/0}, election epoch 1, quorum 0 a
>    osdmap e10272: 20 osds: 20 up, 20 in
>     pgmap v1114740: 1920 pgs: 1890 active+clean, 29 active+remapped, 1 active+clean+scrubbing; 3086 GB data, 6201 GB used, 3098 GB / 9300 GB avail; 232B/s wr, 0op/s; 5/1604152 degraded (0.000%)
>    mdsmap e420: 1/1/1 up {0=a=up:active}
>
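> (For digging into the stuck PGs, the usual starting points are below;
> <pgid> is a placeholder taken from the dump_stuck output, and decompiling
> the CRUSH map should also show which tunables are actually in effect.)
>
> # ceph pg dump_stuck unclean
> # ceph pg <pgid> query
> # ceph osd getcrushmap -o crushmap.bin
> # crushtool -d crushmap.bin -o crushmap.txt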
>
> Is anyone else seeing this?
>
> Thanks,
> Bryan



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
