Hi Philipp,

It sounds like perhaps you don't have any OSDs that are both "up" and "in"
the cluster. Can you provide the output of "ceph health detail" and "ceph
osd tree" for us?
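
For reference, this is roughly the diagnostic session I have in mind (the commands are standard Ceph CLI; the sample output in the comment is hypothetical):

```shell
# Run on a monitor node with an admin keyring available:
ceph health detail   # lists the stuck PGs and why each one is unclean
ceph osd tree        # shows every OSD, its host, CRUSH weight, and up/down state
ceph osd stat        # one-line summary, e.g. "3 osds: 0 up, 0 in"
```

If the "up"/"in" counts are zero, the OSD daemons never joined the cluster, and no PG can reach active+clean.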

As for the "howto" you mentioned, I added some notes to the top but never
really updated the body of the document... I'm not entirely sure it's
straightforward or up to date any longer :) I'd be happy to make changes as
needed but I haven't manually deployed a cluster in several months, and
Inktank now has a manual deployment guide for Ceph at
http://ceph.com/docs/master/install/manual-deployment/

-Aaron



On Fri, Jan 10, 2014 at 6:57 AM, Philipp Strobl <phil...@pilarkto.net> wrote:

> Hi,
>
> After managing to deploy Ceph manually on Gentoo (the ceph-disk tools are
> under /usr/usr/sbin...), the daemons come up properly, but "ceph health"
> shows a warning that all PGs are stuck unclean.
> This seems like strange behavior for a clean new installation.
>
> So the question is: am I doing something wrong, or can I reset the PGs to
> get the cluster running?
>
> Also, the rbd client and mount.ceph hang with no response.
>
> I used this howto:
> https://github.com/aarontc/ansible-playbooks/blob/master/roles/ceph.notes-on-deployment.rst
>
> Or see our German translation/expansion:
> http://wiki.open-laboratory.de/Intern:IT:HowTo:Ceph
>
> with auth support ... = none in the config.
>
>
> Best regards, and thank you in advance,
>
> Philipp Strobl
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Aaron Ten Clay
http://www.aarontc.com/