We'd appreciate any advice - assuming this also doesn't get stuck in
moderation queues.
--
Sam Skipsey (he/him, they/them)
> ...where they are
> currently residing.
>
> But there is some clarification needed before you go ahead with that.
> Could you share `ceph status`, `ceph health detail`?
>
> Cheers, Dan
>
>
> On Mon, Mar 22, 2021 at 12:05 PM Sam Skipsey wrote:
> >
> > Hi everyone:
> >
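For context, the state dumps Dan asks for above can be gathered as follows (a
minimal sketch; run from any node with an admin keyring):

    ceph status          # overall cluster state, quorum, PG summary
    ceph health detail   # the per-check detail behind the HEALTH_* summary
    ceph versions        # which daemons run which release (useful mid-upgrade)

`ceph versions` isn't requested above, but it's a natural companion when a
cluster is caught between 14.2.17 and 14.2.18.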
...of this) definitely shows the problem occurring on the 12th [when 14.2.17
dropped], but things didn't "break" until we tried upgrading OSDs to 14.2.18...
Sam
On Mon, 22 Mar 2021 at 12:20, Sam Skipsey wrote:
> Hi Dan:
>
> Thanks for the reply - at present, our mons and mgrs...
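One way to pin down the timeline Sam describes (trouble starting when 14.2.17
landed, breakage on the 14.2.18 OSD upgrade) is to cross-check package history
against the OSD logs - a sketch for a CentOS 7 / yum host, default paths
assumed:

    rpm -q ceph-osd                  # version currently installed
    grep -i ceph /var/log/yum.log    # when the ceph packages were updated
    grep -i 'error\|fail' /var/log/ceph/ceph-osd.*.log   # first failures, with timestamps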
> ...degraded objects), then start bringing up the osds.
> As soon as you have some osd logs reporting some failures, then share
> those...
>
> - Dan
>
> On Mon, Mar 22, 2021 at 3:49 PM Sam Skipsey wrote:
> >
> > So, we started the mons and mgr up again, and here's the relevant...
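Dan's staged bring-up reads, in command form, roughly like this (a sketch,
assuming systemd-managed Nautilus OSDs; the exact flags are inferred from the
later discussion of nodown):

    ceph osd set noout          # don't mark stopped osds "out" while working
    ceph osd set norebalance    # avoid shuffling data while osds flap
    systemctl start ceph-osd@<id>              # bring up one osd and watch it
    tail -f /var/log/ceph/ceph-osd.<id>.log    # share any failures reported here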
> ...show those same loopback addresses for each OSD?
>
> This sounds familiar... I'm trying to find the recent ticket.
>
> .. dan
>
>
> On Mon, Mar 22, 2021, 6:07 PM Sam Skipsey wrote:
>
>> hi Dan:
>>
>> So, unsetting nodown results in... almost all of the osds...
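For readers following along: nodown stops the monitors from marking
unresponsive OSDs down, which masks flapping rather than curing it - unsetting
it, as Sam does here, lets the real failure pattern surface. The flag is
toggled like this:

    ceph osd set nodown     # suppress "down" markings (hides flapping)
    ceph osd unset nodown   # let the mons mark failing osds down again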
> ...concerns at the moment.
> (With osds flapping the osdmaps churn and that inflates the mon store)
>
> .. Dan
>
> On Mon, Mar 22, 2021, 6:28 PM Sam Skipsey wrote:
>
>> Hm, yes it does [and I was wondering why loopbacks were showing up
>> suddenly in the logs]. This wasn't...
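Dan's aside about the mon store is worth acting on during an incident like
this: every osdmap epoch generated by flapping OSDs is retained until the PGs
are clean again. A minimal check, default paths assumed:

    du -sh /var/lib/ceph/mon/<cluster>-<host>/store.db   # watch this grow
    ceph tell mon.<id> compact    # reclaim space once the flapping stops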
...the issue that the osd can't restart after setting a
> virtual local loopback IP.
> In find_ipv4_in_subnet() and find_ipv6_in_subnet(), I use
> boost::starts_with(addrs->ifa_name, "lo") to skip the interfaces
> starting with "lo".
>
>
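The quoted fix concerns Ceph's interface scan; from the operator side, one can
check whether OSDs have actually registered loopback addresses in the osdmap,
and what the host's interface names look like (a sketch):

    ceph osd dump | grep -E '127\.0\.0\.1|::1'   # osds registered on loopback
    ip -o addr show                              # any unexpected "lo*" interfaces?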
h before" and restarted the node. It does come up with the correct ip
> (ms_bind_ipv4=false, ms_bind_ipv6=true). So it does not seem to be as
> simple as this, or the ms_bind option migth matter here as well, dunno.
>
> FYI,
>
> Gr. Stefan
>
--
Sam Skipsey (he/him, they/them)
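Stefan's ms_bind settings control which address families the messenger binds
at all; on an IPv6-only cluster they would be set roughly like this (a sketch;
scope them to specific daemons instead of global as appropriate):

    ceph config set global ms_bind_ipv4 false   # don't bind IPv4 sockets
    ceph config set global ms_bind_ipv6 true    # do bind IPv6 sockets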
> > .../8
> >
> > does that work?
>
> And if that doesn't, you can tell each daemon to which IP it should bind
> like so:
>
> ceph config set osd.$id public_addr 10.1.50.x
>
> Gr. Stefan
>
--
Sam Skipsey (he/him, they/them)
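If the per-daemon pin above works, the setting can be verified, and the wider
network steered, like so (a sketch; the 10.1.50.0/24 subnet is an illustrative
guess extrapolated from Stefan's example address):

    ceph config get osd.<id> public_addr                 # confirm what was set
    ceph config set global public_network 10.1.50.0/24   # steer all daemons to one subnet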
Hello all,
We've had a Nautilus [latest releases] cluster for some years now, and are
planning the upgrade process - both moving off Centos7 [ideally to a RHEL9
compatible spin like Alma 9 or Rocky 9] and also moving to a newer Ceph release
[ideally Pacific or higher to avoid too many later upgrades]...
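For a jump like that, the usual pre-flight is to confirm every daemon is on
one release before touching anything, then upgrade in the standard order
(mons, mgrs, osds, then mds/rgw) - a sketch of the first checks:

    ceph versions        # every daemon should report the same nautilus build
    ceph osd set noout   # before restarting osds onto the new packages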