[ceph-users] Container deployment - Ceph-volume activation

2021-03-11 Thread Cloud Guy
Hello, TL;DR Looking for guidance on ceph-volume lvm activate --all as it would apply to a containerized ceph deployment (Nautilus or Octopus). Detail: I’m planning to upgrade my Nautilus non-container cluster to Octopus (eventually containerized). There’s an expanded procedure that was t
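For context, the activation step this thread asks about looks like the sketch below on a plain (non-container) host; the cephadm-wrapped form at the end is an assumption about how it might map onto a containerized deployment, which is exactly the question the thread poses, not a confirmed procedure from it:

    # Scan LVM metadata and start every OSD found on this host
    ceph-volume lvm activate --all

    # Or activate a single OSD by id and fsid (placeholders, not values from the thread)
    ceph-volume lvm activate <OSD_ID> <OSD_FSID>

    # Under cephadm-managed containers, ceph-volume can be invoked through the
    # cephadm wrapper (assumption: cephadm is installed on the host)
    cephadm ceph-volume -- lvm activate --all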

[ceph-users] Re: Container deployment - Ceph-volume activation

2021-03-12 Thread Cloud Guy
To: Sebastian Wagner , 胡 玮文 , Cloud Guy Cc: "ceph-users@ceph.io" Subject: Re: [ceph-users] Re: Container deployment - Ceph-volume activation Hi, The osd activate will probably be nice in the future, but for now I'm doing it like this: ceph-volume activate --all for id in
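The loop in that reply is cut off by the archive preview; a hypothetical reconstruction of the pattern being described, activate everything with ceph-volume and then adopt each legacy OSD into cephadm, might look like the sketch below. The directory-based id discovery and the adopt step are assumptions, not the poster's actual commands:

    # Start every LVM-based OSD present on the host
    ceph-volume lvm activate --all

    # Then adopt each legacy OSD into cephadm, one id at a time
    # (id discovery via the legacy /var/lib/ceph/osd/ceph-<id> directories)
    for id in $(ls /var/lib/ceph/osd | sed 's/^ceph-//'); do
        cephadm adopt --style legacy --name "osd.${id}"
    done

A manual loop like this is the workaround the reply describes while the dedicated osd activate orchestrator command it mentions was still in the future.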

[ceph-users] OSD Crash, high RAM usage

2020-08-23 Thread Cloud Guy
Hello TL;DR We have a Nautilus cluster which has been operating without issue for quite some time. Recently one OSD experienced a relatively slow and painful death. The OSD was purged (via dashboard), replaced and added as a new OSD (same ID). Upon rebuild, we notice the node hosting the r
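The replacement described there (purge via the dashboard, then rebuild re-using the same id) roughly corresponds to the CLI sequence below; the id and device names are placeholders, and this is a sketch of the general procedure rather than what the poster actually ran:

    # Remove the dead OSD but keep its id free for reuse
    ceph osd purge <OSD_ID> --yes-i-really-mean-it

    # Wipe the replacement device
    ceph-volume lvm zap /dev/<NEW_DEVICE> --destroy

    # Recreate the OSD on the new device, re-using the old id
    ceph-volume lvm create --data /dev/<NEW_DEVICE> --osd-id <OSD_ID>

    # For the RAM symptom, the per-OSD memory ceiling is worth checking
    ceph config get osd.<OSD_ID> osd_memory_target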

[ceph-users] Fwd: Upgrade Path Advice Nautilus (CentOS 7) -> Octopus (new OS)

2020-08-26 Thread Cloud Guy
Hello, Looking for a bit of guidance / approach to upgrading from Nautilus to Octopus considering CentOS and Ceph-Ansible. We're presently running a Nautilus cluster (all nodes / daemons 14.2.11 as of this post). - There are 4 monitor-hosts with mon, mgr, and dashboard functions consolidated; - 4
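A sketch of how such an upgrade is typically driven with ceph-ansible; the inventory name is a placeholder, and checking out the Octopus-era branch (stable-5.0) first is an assumption based on ceph-ansible's branch-per-release scheme rather than advice from this thread:

    # Check out the ceph-ansible branch matching the target release
    # (assumption: stable-5.0 for Octopus)
    cd ceph-ansible && git checkout stable-5.0

    # Rolling upgrade of the whole cluster, daemon type by daemon type
    ansible-playbook -vv -i <INVENTORY> infrastructure-playbooks/rolling_update.yml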

[ceph-users] Re: Fwd: Upgrade Path Advice Nautilus (CentOS 7) -> Octopus (new OS)

2020-08-27 Thread Cloud Guy
On Thu, 27 Aug 2020 at 13:21, Anthony D'Atri wrote: > > > > > > Looking for a bit of guidance / approach to upgrading from Nautilus to > > Octopus considering CentOS and Ceph-Ansible. > > > > We're presently running a Nautilus cluster (all nodes / daemons 14.2.11 > as > > of this post). > > - The
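Whichever tooling drives it, the rolling upgrade being discussed follows the usual mon, then mgr, then OSD order; a minimal sketch of the surrounding safety steps (generic upgrade hygiene, not instructions from this reply):

    # Prevent rebalancing while OSDs restart during the upgrade
    ceph osd set noout

    # ... upgrade and restart mons, then mgrs, then OSDs, host by host ...

    # Confirm every daemon reports the target release before finishing
    ceph versions

    # Once all OSDs run Octopus, pin the minimum OSD release and re-enable recovery
    ceph osd require-osd-release octopus
    ceph osd unset noout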