On Mon, Oct 3, 2016 at 6:29 AM, Adam Tygart wrote:
> I put this in the #ceph-dev on Friday,
>
> (gdb) print info
> $7 = (const MDSMap::mds_info_t &) @0x5fb1da68: {
> global_id = { boost::totally_ordered2 boost::detail::empty_base > >> =
> { boost::equality_comparable1 boost::totally_ordered2
Sent before I was ready, oops.
How might I get the osdmap from a down cluster?
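Would something like ceph-monstore-tool against one of the stopped
mons' stores do it? I'm guessing at the exact syntax and assuming the
default store path here:

  # read the latest osdmap straight out of a (stopped) mon's store
  ceph-monstore-tool /var/lib/ceph/mon/ceph-<mon-id> get osdmap -- --out /tmp/osdmap
  # sanity-check what came out
  osdmaptool --print /tmp/osdmap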
--
Adam
On Mon, Oct 3, 2016 at 12:29 AM, Adam Tygart wrote:
> I put this in the #ceph-dev on Friday,
>
> (gdb) print info
> $7 = (const MDSMap::mds_info_t &) @0x5fb1da68: {
> global_id = { boost::totally_ordere
On Sat, Oct 1, 2016 at 7:19 PM, Adam Tygart wrote:
> The wip-fixup-mds-standby-init branch doesn't seem to allow the
> ceph-mons to start up correctly. I disabled all mds servers before
> starting the monitors up, so it would seem the pending mdsmap update
> is in durable storage. Now that the mds
The wip-fixup-mds-standby-init branch doesn't seem to allow the
ceph-mons to start up correctly. I disabled all mds servers before
starting the monitors up, so it would seem the pending mdsmap update
is in durable storage. Now that the mds servers are down, can we clear
the mdsmap of active and standby entries?
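For what it's worth, dumping the keys from one of the working mons'
stores should show whether the mdsmap update really is sitting in
there; this is just a sketch, assuming the default store path:

  # list the keys the (stopped) mon has committed and look for mdsmap entries
  ceph-monstore-tool /var/lib/ceph/mon/ceph-<mon-id> dump-keys | grep mdsmap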
On Fri, Sep 30, 2016 at 11:39 AM, Adam Tygart wrote:
> Hello all,
>
> Not sure if this went through before or not, as I can't check the
> mailing list archives.
>
> I've gotten myself into a bit of a bind. I was prepping to add a new
> mds node to my ceph cluster. e.g. ceph-deploy mds create mormo
Hello all,
Not sure if this went through before or not, as I can't check the
mailing list archives.
I've gotten myself into a bit of a bind. I was prepping to add a new
mds node to my ceph cluster. e.g. ceph-deploy mds create mormo
Unfortunately, it started the mds server before I was ready. My
cluster was running 10.2.1, and the newly deployed mds is 10.2.3.
This caused 3 of my 5 mons to crash.
I could, I suppose, update the monmaps in the working monitors to
remove the broken ones and then re-deploy the broken ones. The main
concern I have is that if the mdsmap update isn't pending on the
working ones, what else might be out of sync?
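I assume the monmap edit itself would be the usual
extract/edit/inject dance, something like this (mon ids below are
placeholders):

  # on a stopped, working mon, pull the monmap out of its store
  ceph-mon -i <good-id> --extract-monmap /tmp/monmap
  # drop the broken monitors from it
  monmaptool /tmp/monmap --rm <bad-id-1> --rm <bad-id-2> --rm <bad-id-3>
  # push the edited map back into each surviving mon before starting it
  ceph-mon -i <good-id> --inject-monmap /tmp/monmap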
Thoughts?
--
Adam
On Fri, Sep 30, 2016 at 11:05 AM, Adam Tygart wrote:
Hello all,
I've gotten myself into a bit of a bind. I was prepping to add a new
mds node to my ceph cluster. e.g. ceph-deploy mds create mormo
Unfortunately, it started the mds server before I was ready. My
cluster was running 10.2.1, and the newly deployed mds is 10.2.3.
This caused 3 of my 5 mons to crash.