Thanks, running as root does give me status, but the cluster isn't clean:

r...@jr-ceph2.vm:~# ceph status
  cluster 9059dfad-924a-425c-a20b-17dc1d53111e
   health HEALTH_WARN 91 pgs degraded; 192 pgs stuck unclean; recovery
21/42 degraded (50.000%)
   monmap e1: 1 mons at {jr-ceph2=10.88.26.55:6789/0}, election epoch 2,
quorum 0 jr-ceph2
   osdmap e10: 2 osds: 2 up, 2 in
    pgmap v2715: 192 pgs: 101 active+remapped, 91 active+degraded; 9518
bytes data, 9148 MB used, 362 GB / 391 GB avail; 21/42 degraded (50.000%)
   mdsmap e4: 1/1/1 up {0=jr-ceph2.XXX=up:active}
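
That 50% degraded figure looks consistent with a replica-placement problem: with both OSDs on one host, the default CRUSH rule (chooseleaf type `host`) can't place the second copy of each object on a separate host, so PGs never go clean. A sketch of the usual single-host workaround, assuming the standard `ceph`/`crushtool` tooling:

```shell
# Sketch only: allow replicas on different OSDs of the same host.
# Option 1: before creating the cluster, add to [global] in ceph.conf:
#   osd crush chooseleaf type = 0
# Option 2: on a running cluster, edit the CRUSH map directly:
ceph osd getcrushmap -o crushmap.bin          # export compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt     # decompile to editable text
# In crushmap.txt, change:
#   step chooseleaf firstn 0 type host
# to:
#   step chooseleaf firstn 0 type osd
crushtool -c crushmap.txt -o crushmap.new     # recompile
ceph osd setcrushmap -i crushmap.new          # inject the new map
```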

I don't see anything telling in the Ceph logs. Should I wait for the new
quick start?
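
For digging into which PGs are stuck before then, something like this may help (run as root, as above; these are standard ceph CLI subcommands):

```shell
# Expand the HEALTH_WARN summary into per-PG detail
ceph health detail
# List the PGs stuck in the "unclean" state (also accepts: inactive, stale)
ceph pg dump_stuck unclean
# Query one problem PG for its acting set and recovery state, e.g.:
ceph pg 0.1 query        # "0.1" is a placeholder PG id from the list above
```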


On Mon, Sep 16, 2013 at 2:27 PM, John Wilkins <john.wilk...@inktank.com> wrote:

> We will have a new update to the quick start this week.
>
> On Mon, Sep 16, 2013 at 12:18 PM, Alfredo Deza <alfredo.d...@inktank.com>
> wrote:
> > On Mon, Sep 16, 2013 at 12:58 PM, Justin Ryan <justin.r...@kixeye.com>
> wrote:
> >> Hi,
> >>
> >> I'm brand new to Ceph, attempting to follow the Getting Started guide
> with 2
> >> VMs. I completed the Preflight without issue.  I completed Storage
> Cluster
> >> Quick Start, but have some questions:
> >>
> >> The Single Node Quick Start grey box -- does 'single node' mean if
> you're
> >> running the whole thing on a single machine, if you have only one server
> >> node like the diagram at the top of the page, or if you're only running
> one
> >> OSD process? I'm not sure if I need to make the `osd crush chooseleaf
> type`
> >> change.
> >>
> >> Are the LIST, ZAP, and ADD OSDS ON STANDALONE DISKS sections an
> alternative
> >> to the MULTIPLE OSDS ON THE OS DISK (DEMO ONLY) section? I thought I
> set up
> >> my OSDs already on /tmp/osd{0,1}.
> >>
> >> Moving on to the Block Device Quick Start -- it says "To use this
> guide, you
> >> must have executed the procedures in the Object Store Quick Start guide
> >> first" -- but the link to the Object Store Quick Start actually points
> to
> >> the Storage Cluster Quick Start -- which is it?
> >>
> >> Most importantly, it says "Ensure your Ceph Storage Cluster is in an
> active
> >> + clean state before working with the Ceph Block Device" --- how can
> tell if
> >> my cluster is active+clean?? The only ceph* command on the admin node is
> >> ceph-deploy, and running `ceph` on the server node:
> >>
> >> ceph@jr-ceph2:~$ ceph
> >> 2013-09-16 16:53:10.880267 7feb96c1b700 -1 monclient(hunting): ERROR:
> >> missing keyring, cannot use cephx for authentication
> >> 2013-09-16 16:53:10.880271 7feb96c1b700  0 librados: client.admin
> >> initialization error (2) No such file or directory
> >> Error connecting to cluster: ObjectNotFound
> >
> > There is a ticket open for this, but you basically need super-user
> > permissions here to run (any?) ceph commands.
> >>
> >> Thanks in advance for any help, and apologies if I missed anything
> obvious.
> >>
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
>
