On Sat, Sep 16, 2017 at 8:34 AM, David Turner wrote:
> I don't understand a single use case where I want updating my packages using
> yum, apt, etc to restart a ceph daemon. ESPECIALLY when there are so many
> clusters out there with multiple types of daemons running on the same
> server.
>
> My home setup is 3 nodes each running 3 OSDs, a MON, and an MDS server.
Well OK now.
Before we go setting off the fire alarms all over town, let's work out what is
happening, and why. I spent some time reproducing this, and it is indeed tied
to selinux being (at least) permissive. It does not happen when selinux is
disabled.
If we look at the journalctl output [...]
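For anyone following along, something like this is enough to see it (a sketch;
the unit globs assume a collocated node like the one in question):

    getenforce                       # "Permissive" here; "Disabled" doesn't trigger it
    journalctl -u 'ceph-mon@*' -u 'ceph-osd@*' --since "-1h"
                                     # the stop/start around the update lands here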
On Fri, Sep 15, 2017 at 3:49 PM, Gregory Farnum wrote:
> On Fri, Sep 15, 2017 at 3:34 PM David Turner wrote:
>>
>> I don't understand a single use case where I want updating my packages
>> using yum, apt, etc to restart a ceph daemon. ESPECIALLY when there are so
>> many clusters out there with multiple types of daemons running on the same
>> server.
I'm sorry for getting a little hot there. You're definitely right that you
can't please everyone with a forced choice. It's unfortunate that it can
so drastically impact an upgrade like it did here. Is there a way to
configure yum or apt to make sure that it won't restart these (or guarantee
that they won't)?
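The closest things I can think of, neither of them pretty (a sketch, untested
against this upgrade):

    # Debian/Ubuntu: invoke-rc.d consults /usr/sbin/policy-rc.d, and an
    # exit code of 101 forbids maintainer scripts from touching services
    printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d
    chmod +x /usr/sbin/policy-rc.d
    # ...run the upgrade, then remove the file again...
    rm /usr/sbin/policy-rc.d

    # yum/rpm has no such knob that I know of; the restart lives in a
    # package scriptlet, which you can at least read before saying yes
    rpm -q --scripts ceph-selinux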
I don't understand a single use case where I want updating my packages
using yum, apt, etc to restart a ceph daemon. ESPECIALLY when there are so
many clusters out there with multiple types of daemons running on the same
server.
My home setup is 3 nodes each running 3 OSDs, a MON, and an MDS server.
I'm glad that worked for you to finish the upgrade.
He has multiple MONs, but all of them are on nodes with OSDs as well. When
he updated the packages on the first node, it restarted the MON and all of
the OSDs. This is strictly not supported in the Luminous upgrade, as the
OSDs can't be running Luminous until every MON has been upgraded.
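A rough way to keep an eye on the order mid-upgrade (a sketch; run the first
command on each mon node, and the exact 12.2.x output is an assumption):

    ceph daemon mon.$(hostname -s) version   # every mon on 12.2.x before any OSD moves
    ceph tell osd.* version                  # no OSD should be ahead of the mons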
Happy to report I got everything up to Luminous, used your tip to keep the
OSDs running, David, thanks again for that.
I'd say this is a potential gotcha for people collocating MONs. It appears
that if you're running selinux, even in permissive mode, upgrading the
ceph-selinux packages forces a restart of the daemons on the node.
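So a quick pre-flight check before touching a collocated node (my reading of
the behaviour above, not official guidance):

    getenforce   # Enforcing/Permissive -> expect the relabel + restart
                 # Disabled             -> the restart didn't happen for us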
Hi David
I like your thinking! Thanks for the suggestion. I've got a maintenance
window later to finish the update so will give it a try.
On Thu, Sep 14, 2017 at 6:24 PM, David Turner wrote:
> This isn't a great solution, but something you could try. [...]
This isn't a great solution, but something you could try. If you stop all
of the daemons via systemd and start each of them manually in the foreground
in its own screen session... I don't think that yum updating the packages
can stop or start the daemons. You could copy and paste the command each
unit runs (its ExecStart line) into each screen.
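For one OSD the idea looks something like this (a sketch; the flags mirror
the stock unit file, and osd.3 is just an example id):

    systemctl stop ceph-osd@3
    screen -S osd3
    # inside the screen, run the unit's ExecStart line by hand
    /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
    # detach with ctrl-a d; same idea for the mon and mds with their units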
Hi All
I did a Jewel -> Luminous upgrade on my dev cluster and it went very
smoothly.
I've attempted the upgrade on a small production cluster, but I've hit a
snag.
After installing the ceph 12.2.0 packages with "yum install ceph" on the
first node and accepting all the dependencies, I found that the MON and all
of the OSDs on that node had been restarted.
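For anyone retracing this, a couple of quick checks after the transaction
(a sketch, nothing authoritative):

    rpm -q ceph-selinux                       # did the selinux package come in with the deps?
    journalctl -u 'ceph-osd@*' --since today  # the stop/start around the update shows up here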