[ceph-users] Nautilus, Ceph-Ansible, existing OSDs, and ceph.conf updates

2021-04-10 Thread Dave Hall
Hello,

A while back I asked about the trouble I was having with Ceph-Ansible when
I kept existing OSDs in the inventory file while managing my Nautilus cluster.

At the time it was suggested that once the OSDs have been configured they
should be excluded from the inventory file.

However, when processing certain configuration changes, Ceph-Ansible updates
ceph.conf on all cluster nodes and clients listed in the inventory file - so if
the OSD nodes are dropped from the inventory, they no longer receive those
ceph.conf updates.

Is there a way to keep the OSD nodes in the inventory file without listing
them as OSD nodes, so that they still receive updates such as ceph.conf, but
Ceph-Ansible doesn't attempt any of the ceph-volume steps that seem to fail
once the OSDs are already configured?
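
For illustration, the sort of inventory layout I have in mind is roughly this
(hostnames are placeholders, and I may have the group names slightly wrong):

    [mons]
    ceph-mon1
    ceph-mon2
    ceph-mon3

    [mgrs]
    ceph-mon1
    ceph-mon2
    ceph-mon3

    # OSD hosts listed only as clients so they still receive ceph.conf
    # updates, but with no [osds] entries so the ceph-volume tasks are
    # skipped on them
    [clients]
    ceph-osd1
    ceph-osd2
    ceph-osd3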

Or maybe I just have something odd in my inventory file.  I'd be glad to
share - either in this list or off line.

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdh...@binghamton.edu
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm upgrade to pacific

2021-04-10 Thread Peter Childs
Possibly,

Given where it stopped, it matches; however, the output of 'ceph log last
cephadm' is rather empty after I stop and restart the upgrade.

I think I might have attempted to troubleshoot too much; let me try a
few more ideas.
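
For the record, this is roughly the sequence I plan to run again (commands
from memory, so the exact flags may be slightly off):

    # check where the upgrade thinks it is
    ceph orch upgrade status

    # turn cephadm debug logging on in the cluster log and watch it
    ceph config set mgr mgr/cephadm/log_to_cluster_level debug
    ceph -W cephadm --watch-debug

    # stop and restart the upgrade
    ceph orch upgrade stop
    ceph orch upgrade start --ceph-version 16.2.0

    # then see what the cephadm channel reports this time
    ceph log last 100 debug cephadm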


Peter.




On Wed, 7 Apr 2021, 14:02 Sage Weil,  wrote:

> Can you share the output of 'ceph log last cephadm'?  I'm wondering if
> you are hitting https://tracker.ceph.com/issues/50114
>
> Thanks!
> s
>
> On Mon, Apr 5, 2021 at 4:00 AM Peter Childs  wrote:
> >
> > I am attempting to upgrade a Ceph cluster that was deployed with
> > Octopus 15.2.8 and upgraded to 15.2.10 successfully. I'm now attempting
> > to upgrade to 16.2.0 Pacific, and it is not going very well.
> >
> > I am using cephadm. It looks to have upgraded the managers and stopped,
> > and not moved on to the monitors or anything else. I've attempted stopping
> > the upgrade and restarting it with debug on, and I'm not seeing anything to
> > say why it is not progressing any further.
> >
> > I've also tried rebooting machines and failing the managers over, with
> > no success. I'm currently thinking it's stuck attempting to upgrade a
> > manager that does not exist.
> >
> > It's a test cluster of 16 nodes, a bit of a proof of concept, so if I've got
> > something terribly wrong I'm happy to look at redeploying. (It's running on
> > top of CentOS 7, but I'm fast heading toward using something else; apart
> > from anything else, it's not really a production-ready system yet.)
> >
> > I'm just not sure where the cephadm upgrade to 16.2.0 has crashed.
> >
> > Thanks in advance
> >
> > Peter
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io