I was going through the permissions on the various keys in the cluster, and
the admin capabilities look a little odd (below): the mds cap is plain
"allow" while the others are "allow *". Could this be causing the ceph-mgr
failures at startup?

[client.admin]
        key = [redacted]
        auid = 0
        caps mds = "allow"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
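If the mds cap is the suspect, it can be normalized from a node whose admin
keyring still authenticates. A sketch, assuming the other caps shown above are
what you want to keep (note that `ceph auth caps` replaces the full set of
caps, so every daemon's cap has to be restated):

```shell
# Show the current capabilities for client.admin
ceph auth get client.admin

# Rewrite the caps with mds brought in line with the other daemons.
# This replaces ALL caps for the entity, not just the mds one.
ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'

# Verify the change took effect
ceph auth get client.admin
```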


On Tue, Jan 8, 2019 at 10:39 AM Randall Smith <rbsm...@adams.edu> wrote:

> Thanks to everyone who has tried to help so far. I have filed a bug report
> on this issue at http://tracker.ceph.com/issues/37835. I hope we can get
> this fixed so I can finish this upgrade.
>
> On Fri, Jan 4, 2019 at 7:26 AM Randall Smith <rbsm...@adams.edu> wrote:
>
>> Greetings,
>>
>> I'm upgrading my cluster from luminous to mimic. I've upgraded my
>> monitors and am attempting to upgrade the mgrs. Unfortunately, after an
>> upgrade the mgr daemon exits immediately with error code 1.
>>
>> I've tried running ceph-mgr in debug mode to see what's happening, but
>> the output (below) is a bit cryptic to me. It looks like authentication
>> might be failing, even though it was working prior to the upgrade.
>>
>> I do have "auth supported = cephx" in the global section of ceph.conf.
>>
>> What do I need to do to fix this?
>>
>> Thanks.
>>
>> /usr/bin/ceph-mgr -f --cluster ceph --id 8 --setuser ceph --setgroup ceph
>> -d --debug_ms 5
>>
>> 2019-01-04 07:01:38.457 7f808f83f700  2 Event(0x30c42c0 nevent=5000
>> time_id=1).set_owner idx=0 owner=140190140331776
>>
>> 2019-01-04 07:01:38.457 7f808f03e700  2 Event(0x30c4500 nevent=5000
>> time_id=1).set_owner idx=1 owner=140190131939072
>>
>> 2019-01-04 07:01:38.457 7f808e83d700  2 Event(0x30c4e00 nevent=5000
>> time_id=1).set_owner idx=2 owner=140190123546368
>>
>> 2019-01-04 07:01:38.457 7f809dd5b380  1  Processor -- start
>>
>>
>> 2019-01-04 07:01:38.477 7f809dd5b380  1 -- - start start
>>
>>
>> 2019-01-04 07:01:38.481 7f809dd5b380  1 -- - --> 192.168.253.147:6789/0
>> -- auth(proto 0 26 bytes epoch 0) v1 -- 0x32a6780 con 0
>>
>> 2019-01-04 07:01:38.481 7f809dd5b380  1 -- - --> 192.168.253.148:6789/0
>> -- auth(proto 0 26 bytes epoch 0) v1 -- 0x32a6a00 con 0
>> 2019-01-04 07:01:38.481 7f808e83d700  1 -- 192.168.253.148:0/1359135487
>> learned_addr learned my addr 192.168.253.148:0/1359135487
>> 2019-01-04 07:01:38.481 7f808e83d700  2 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:6789/0 conn(0x332d500 :-1
>> s=STATE_CONNECTING_WAIT_ACK_SEQ pgs=0 cs=0 l=0)._process_connection got
>> newly_acked_seq 0 vs out_seq 0
>> 2019-01-04 07:01:38.481 7f808f03e700  2 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.147:6789/0 conn(0x332ce00 :-1
>> s=STATE_CONNECTING_WAIT_ACK_SEQ pgs=0 cs=0 l=0)._process_connection got
>> newly_acked_seq 0 vs out_seq 0
>> 2019-01-04 07:01:38.481 7f808f03e700  5 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.147:6789/0 conn(0x332ce00 :-1
>> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=74172 cs=1 l=1). rx mon.1
>> seq
>> 1 0x30c5440 mon_map magic: 0 v1
>> 2019-01-04 07:01:38.481 7f808e83d700  5 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:6789/0 conn(0x332d500 :-1
>> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=74275 cs=1 l=1). rx mon.2
>> seq
>> 1 0x30c5680 mon_map magic: 0 v1
>> 2019-01-04 07:01:38.481 7f808f03e700  5 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.147:6789/0 conn(0x332ce00 :-1
>> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=74172 cs=1 l=1). rx mon.1
>> seq
>> 2 0x32a6780 auth_reply(proto 2 0 (0) Success) v1
>> 2019-01-04 07:01:38.481 7f808e83d700  5 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:6789/0 conn(0x332d500 :-1
>> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=74275 cs=1 l=1). rx mon.2
>> seq
>> 2 0x32a6a00 auth_reply(proto 2 0 (0) Success) v1
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> <== mon.1 192.168.253.147:6789/0 1 ==== mon_map magic: 0 v1 ==== 370+0+0
>> (3034216899 0 0) 0x30c5440 con 0x332ce00
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> <== mon.2 192.168.253.148:6789/0 1 ==== mon_map magic: 0 v1 ==== 370+0+0
>> (3034216899 0 0) 0x30c5680 con 0x332d500
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> <== mon.1 192.168.253.147:6789/0 2 ==== auth_reply(proto 2 0 (0)
>> Success) v1 ==== 33+0+0 (3430158761 0 0) 0x32a6780 con 0x332ce00
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> --> 192.168.253.147:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 --
>> 0x32a6f00 con 0
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> <== mon.2 192.168.253.148:6789/0 2 ==== auth_reply(proto 2 0 (0)
>> Success) v1 ==== 33+0+0 (3242503871 0 0) 0x32a6a00 con 0x332d500
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> --> 192.168.253.148:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 --
>> 0x32a6780 con 0
>> 2019-01-04 07:01:38.481 7f808f03e700  5 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.147:6789/0 conn(0x332ce00 :-1
>> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=74172 cs=1 l=1). rx mon.1
>> seq
>> 3 0x32a6f00 auth_reply(proto 2 -22 (22) Invalid argument) v1
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> <== mon.1 192.168.253.147:6789/0 3 ==== auth_reply(proto 2 -22 (22)
>> Invalid argument) v1 ==== 24+0+0 (882932531 0 0) 0x32a6f00 con 0x332ce00
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.147:6789/0 conn(0x332ce00 :-1 s=STATE_OPEN pgs=74172 cs=1
>> l=1).mark_down
>> 2019-01-04 07:01:38.481 7f808e03c700  2 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.147:6789/0 conn(0x332ce00 :-1 s=STATE_OPEN pgs=74172 cs=1
>> l=1)._stop
>> 2019-01-04 07:01:38.481 7f808e83d700  5 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:6789/0 conn(0x332d500 :-1
>> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=74275 cs=1 l=1). rx mon.2
>> seq
>> 3 0x32a6780 auth_reply(proto 2 -22 (22) Invalid argument) v1
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> <== mon.2 192.168.253.148:6789/0 3 ==== auth_reply(proto 2 -22 (22)
>> Invalid argument) v1 ==== 24+0+0 (1359424806 0 0) 0x32a6780 con 0x332d500
>> 2019-01-04 07:01:38.481 7f808e03c700  1 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:6789/0 conn(0x332d500 :-1 s=STATE_OPEN pgs=74275 cs=1
>> l=1).mark_down
>> 2019-01-04 07:01:38.481 7f808e03c700  2 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:6789/0 conn(0x332d500 :-1 s=STATE_OPEN pgs=74275 cs=1
>> l=1)._stop
>>
>> 2019-01-04 07:01:38.481 7f809dd5b380  1 -- 192.168.253.148:0/1359135487
>> shutdown_connections
>> 2019-01-04 07:01:38.481 7f809dd5b380  5 -- 192.168.253.148:0/1359135487
>> shutdown_connections mark down 192.168.253.148:6789/0 0x332d500
>> 2019-01-04 07:01:38.481 7f809dd5b380  5 -- 192.168.253.148:0/1359135487
>> shutdown_connections mark down 192.168.253.147:6789/0 0x332ce00
>> 2019-01-04 07:01:38.481 7f809dd5b380  5 -- 192.168.253.148:0/1359135487
>> shutdown_connections delete 0x332ce00
>> 2019-01-04 07:01:38.481 7f809dd5b380  5 -- 192.168.253.148:0/1359135487
>> shutdown_connections delete 0x332d500
>> 2019-01-04 07:01:38.485 7f809dd5b380  1 -- 192.168.253.148:0/1359135487
>> shutdown_connections
>> 2019-01-04 07:01:38.485 7f809dd5b380  1 -- 192.168.253.148:0/1359135487
>> wait complete.
>> 2019-01-04 07:01:38.485 7f809dd5b380  1 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:0/1359135487 conn(0x332c000 :-1 s=STATE_NONE pgs=0
>> cs=0 l=0).mark_down
>> 2019-01-04 07:01:38.485 7f809dd5b380  2 -- 192.168.253.148:0/1359135487
>> >> 192.168.253.148:0/1359135487 conn(0x332c000 :-1 s=STATE_NONE pgs=0
>> cs=0 l=0)._stop
>> failed to fetch mon config (--no-mon-config to skip)
>>
>> --
>> Randall Smith
>> Computing Services
>> Adams State University
>> http://www.adams.edu/
>> 719-587-7741
>>
>
>


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
