I'm glad you were able to recover. I'm sure you learned a lot about
Ceph through the exercise (that always seems to be the case for me with
these things). I'll look forward to your report so that we can include
it in our operations manual, just in case.
For the record, I have been able to recover. Thank you very much for
the guidance.
I hate searching the web and finding only partial information on threads
like this, so I'm going to document and post what I've learned as best I
can in hopes that it will help someone else out in the future.
If you had multiple monitors, you should recover more than 50% of them
if possible (they will need to form a quorum). If you can't, it is
messy, but you can manually remove enough monitors to start a quorum.
From /etc/ceph/ you will want the keyring.
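For reference, the "manually remove enough monitors" step is usually done
by editing the monmap held by a surviving monitor. A rough sketch, assuming
a surviving monitor id of mon1 and dead peers mon2/mon3 (those ids are
illustrative, not taken from this thread):

    # stop the surviving monitor first, then extract the current monmap
    ceph-mon -i mon1 --extract-monmap /tmp/monmap
    # drop the unrecoverable monitors from the map
    monmaptool /tmp/monmap --rm mon2 --rm mon3
    # inject the edited map back and start the monitor again
    ceph-mon -i mon1 --inject-monmap /tmp/monmap

With only mon1 left in the map it can form a quorum of one, and the
cluster becomes reachable again.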
Ok - that is encouraging. I believe I've got data from a previous
monitor. I see files in a store.db dated yesterday, with a
MANIFEST- file that is significantly greater than the
MANIFEST-07 file listed for the current monitors.
I've actually found data for two previous monitors.
Thanks Robert -
Where would that monitor data (database) be found?
--
Peter Hinman
On 7/29/2015 3:39 PM, Robert LeBlanc wrote:
> If you built new monitors, this will not work. You would have to
> recover the monitor data (database) from at least one
The default is /var/lib/ceph/mon/<cluster>-<id> (/var/lib/ceph/mon/ceph-mon1
for me). You will also need the information from /etc/ceph/ to reconstruct
the data. I *think* you should be able to just copy this to a new box
with the same name and IP address and start the monitor.
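A minimal sketch of that copy, assuming the salvaged monitor store is at
/mnt/old-mon, the salvaged /etc/ceph at /mnt/old-etc-ceph (both paths are
placeholders), and the monitor id is mon1; the new box must keep the old
monitor's hostname and IP:

    # recreate the monitor directory and copy the salvaged store.db etc.
    mkdir -p /var/lib/ceph/mon/ceph-mon1
    rsync -a /mnt/old-mon/ /var/lib/ceph/mon/ceph-mon1/
    # restore ceph.conf and the keyrings saved from the old /etc/ceph/
    rsync -a /mnt/old-etc-ceph/ /etc/ceph/
    # then start the monitor (or use the distro's init script)
    ceph-mon -i mon1

On Hammer-era clusters the daemons run as root; on later releases that run
as the ceph user, the copied files would also need chown ceph:ceph.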
If you built new monitors, this will not work. You would have to
recover the monitor data (database) from at least one monitor and
rebuild the monitor. The new monitors would not have any information
about pools, OSDs, PGs, etc. to allow an OSD to be brought back into
the cluster.
The end goal is to recover the data. I don't need to re-implement the
cluster as it was - that just appeared to be the natural way to recover
the data.
What monitor data would be required to re-implement the cluster?
--
Peter Hinman
International Bridge / ParcelPool.com
Hi Greg -
So at the moment, I seem to be trying to resolve a permission error.
=== osd.3 ===
Mounting xfs on stor-2:/var/lib/ceph/osd/ceph-3
2015-07-29 13:35:08.809536 7f0a0262e700 0 librados: osd.3
authentication error (1) Operation not permitted
Error connecting to cluster: PermissionError
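For anyone who finds this thread later: "Operation not permitted" from
librados generally means the monitors no longer hold a matching cephx key
for osd.3, which fits the rebuilt-monitor situation. A sketch of the usual
check and fix, assuming the OSD's own keyring survived on its data disk
(the caps shown are the standard OSD profile, not quoted from this thread):

    # what the cluster thinks the key is (may be missing entirely)
    ceph auth get osd.3
    # what the OSD actually has on disk
    cat /var/lib/ceph/osd/ceph-3/keyring
    # re-register the on-disk key with the monitors
    # (run 'ceph auth del osd.3' first if a stale entry with a different
    #  key already exists)
    ceph auth add osd.3 osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-3/keyring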
This sounds like you're trying to reconstruct a cluster after destroying
the monitors. That is...not going to work well. The monitors define the
cluster and you can't move OSDs into different clusters. We have ideas for
how to reconstruct monitors and it can be done manually with a lot of
hassle.
Thanks for the guidance. I'm working on building a valid ceph.conf
right now. I'm not familiar with the osd-bootstrap key. Is that the
standard filename for it? Is it the keyring that is stored on the osd?
I'll see if the logs turn up anything I can decipher after I rebuild the
ceph.conf file.
Did you use ceph-deploy or ceph-disk to create the OSDs? If so, it
should use udev to start the OSDs. In that case, a new host that has
the correct ceph.conf and osd-bootstrap key should be able to bring up
the OSDs into the cluster automatically.
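On the osd-bootstrap question above: the key ceph-deploy normally places on
each OSD host is client.bootstrap-osd, kept at
/var/lib/ceph/bootstrap-osd/ceph.keyring, which is separate from the per-OSD
keyring stored on the OSD's data partition. A sketch of restoring it from a
working monitor quorum and activating a disk by hand; the device name is
only an example:

    # export the bootstrap key from the cluster onto the new OSD host
    ceph auth get client.bootstrap-osd \
        -o /var/lib/ceph/bootstrap-osd/ceph.keyring
    # with /etc/ceph/ceph.conf in place, udev normally activates the disks
    # on hotplug; ceph-disk can also be pointed at a data partition manually
    ceph-disk activate /dev/sdb1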
I've got a situation that seems on the surface like it should be
recoverable, but I'm struggling to understand how to do it.
I had a cluster of 3 monitors, 3 OSD disks, and 3 journal SSDs. After
multiple hardware failures, I pulled the 3 OSD disks and 3 journal SSDs
and am attempting to bring them back up on new hardware.