The end goal is to recover the data. I don't need to re-implement the cluster as it was - that just appeared to be the natural way to recover the data.

What monitor data would be required to re-implement the cluster?
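
If I understand what a monitor stores, that would mean at minimum the
original fsid, the mon. keyring, and a monmap. As a sketch of seeding a
new monitor from those (the name, address, and paths below are
placeholders):

    # rebuild a monmap carrying the original cluster fsid
    monmaptool --create --fsid <original-fsid> \
        --add mon-a 192.168.1.10:6789 /tmp/monmap

    # initialize a fresh monitor store from that monmap and keyring
    ceph-mon --mkfs -i mon-a --monmap /tmp/monmap --keyring /tmp/mon.keyring

But I assume the auth keys and the osdmap history, which lived only in
the old monitor stores, are the hard part.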

--
Peter Hinman
International Bridge / ParcelPool.com

On 7/29/2015 2:55 PM, Gregory Farnum wrote:


On Wednesday, July 29, 2015, Peter Hinman <[email protected]> wrote:

    Hi Greg -

    So at the moment, I seem to be trying to resolve a permission error.
     === osd.3 ===
     Mounting xfs on stor-2:/var/lib/ceph/osd/ceph-3
     2015-07-29 13:35:08.809536 7f0a0262e700  0 librados: osd.3
    authentication error (1) Operation not permitted
     Error connecting to cluster: PermissionError
     failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf
    --name=osd.3 --keyring=/var/lib/ceph/osd/ceph-3/keyring osd crush
    create-or-move -- 3 3.64 host=stor-2 root=default'
     ceph-disk: Error: ceph osd start failed: Command
    '['/usr/sbin/service', 'ceph', '--cluster', 'ceph', 'start',
    'osd.3']' returned non-zero exit status 1
     ceph-disk: Error: One or more partitions failed to activate

    Is there a way to identify the cause of this PermissionError? I've
    copied the client.bootstrap-osd key from the output of ceph auth
    list, and pasted it into /var/lib/ceph/bootstrap-osd/ceph.keyring,
    but that has not resolved the error.
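
    Presumably the key that matters is osd.3's own rather than
    bootstrap-osd's. A sketch of the checks, assuming default paths (the
    caps are the ones the docs use when adding an OSD by hand):

        # key the OSD will present
        cat /var/lib/ceph/osd/ceph-3/keyring

        # key the (new) monitors hold for osd.3, if any
        ceph auth get osd.3

        # if the entry is missing or differs, register the OSD's
        # existing key with the new monitors
        ceph auth add osd.3 osd 'allow *' mon 'allow profile osd' \
            -i /var/lib/ceph/osd/ceph-3/keyring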

    But it sounds like you are saying that even once I get this
    resolved, I have no hope of recovering the data?


Well, I think you'd need to buy help to assemble a working cluster with these OSDs. But if you have rbd images you want to get out, you might be able to string together the tools to make that happen. I'd have to defer to David (for OSD object extraction options) or Josh/Jason (for rbd export/import) for that, though.
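
For the rbd side, assuming a cluster can be brought up far enough to
serve reads, the usual path is just rbd export; the pool and image names
here are placeholders:

    rbd ls rbd                                # list images in a pool
    rbd export rbd/vm-disk-1 /backup/vm-disk-1.img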

ceph-objectstore-tool will, I think, be part of your solution, but I'm not sure how much it can do on its own. What's your end goal?
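
As a sketch of what it can do against a stopped OSD (the data path,
journal device, and pgid are placeholders):

    # list the PGs present on the OSD's filestore
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
        --journal-path /dev/sdc1 --op list-pgs

    # export one PG to a file that can be imported into another OSD later
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
        --journal-path /dev/sdc1 --op export --pgid 1.2f \
        --file /backup/1.2f.export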


    --
    Peter Hinman

    On 7/29/2015 1:57 PM, Gregory Farnum wrote:
    This sounds like you're trying to reconstruct a cluster after
    destroying the monitors. That is...not going to work well. The
    monitors define the cluster and you can't move OSDs into
    different clusters. We have ideas for how to reconstruct monitors
    and it can be done manually with a lot of hassle, but the process
    isn't written down and there aren't really tools to help with it. :/




    On Wed, Jul 29, 2015 at 5:48 PM Peter Hinman
    <[email protected]> wrote:

        I've got a situation that seems on the surface like it should be
        recoverable, but I'm struggling to understand how to do it.

        I had a cluster of 3 monitors, 3 osd disks, and 3 journal ssds.
        After multiple hardware failures, I pulled the 3 osd disks and 3
        journal ssds and am attempting to bring them back up again on new
        hardware in a new cluster.  I see plenty of documentation on how
        to zap and initialize and add "new" osds, but I don't see
        anything on rebuilding with existing osd disks.
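
        The disks do still identify their old cluster; mounting one data
        partition read-only (the device name is a placeholder) shows:

            mount -o ro /dev/sdb1 /mnt/osd
            cat /mnt/osd/whoami      # the OSD id, e.g. 3
            cat /mnt/osd/ceph_fsid   # fsid of the original cluster
            cat /mnt/osd/fsid        # this particular OSD's own uuid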

        Could somebody provide guidance on how to do this?  I'm running
        94.2 on all machines.

        Thanks,

        --
        Peter Hinman





_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
