Thanks for the reply, Sebastian.
Sadly I haven't had any luck so far restoring the OSD drive's superblocks.
Any other advice from this group would be welcome before I erase and start
again.
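For reference, a rough sketch of the kind of superblock check involved, assuming
ceph-volume BlueStore OSDs (the device path is a placeholder for whatever
"lvm list" reports):

    # Does ceph-volume still see LVM metadata for the OSDs on this host?
    cephadm ceph-volume lvm list

    # Read-only look at the BlueStore superblock/label on one of the data LVs
    ceph-bluestore-tool show-label --dev /dev/<vg>/<osd-block-lv>

If show-label can no longer read a label, I'm assuming that OSD is a write-off.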
On Mon, 27 Sept 2021 at 01:38, Sebastian Wagner wrote:
> Hi Phil,
>
>
> On 27.09.21
Hey folks,
A recovery scenario I'm looking at right now is this:
1: In a clean 3-node Ceph cluster (Pacific, deployed with cephadm), the OS
disk is lost from all nodes
2: Trying to be helpful, a self-healing deployment system reinstalls the OS
on each node and rebuilds the Ceph services
3: Somew
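To make the question concrete, the recovery step I'd hope is possible after the
rebuild is roughly this (a sketch only; the hostname is a placeholder, and I'm
assuming the OSD data devices were untouched and the rebuilt cluster kept the
original fsid and keyrings):

    # On a rebuilt node, check whether the old OSD LVs are still visible
    cephadm ceph-volume lvm list

    # If so, ask the orchestrator to adopt the existing OSDs on that host
    ceph cephadm osd activate <host>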
Hey folks,
I'm working through some basic ops drills and noticed what I think is an
inconsistency in the cephadm docs. Some Googling suggests this is a known
issue, but I haven't found a clear direction on a solution yet.
On a cluster with 5 mons, 2 were abruptly removed when th
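To make that concrete, the kind of cleanup I'd expect looks roughly like this
(a sketch only; mon names are placeholders and the count in the last command is
just an example):

    # What does the cluster still think the mon membership should be?
    ceph mon dump
    ceph orch ps

    # Drop a dead mon from the monmap, then from cephadm's inventory
    ceph mon remove <mon-name>
    ceph orch daemon rm mon.<mon-name> --force

    # Adjust the mon placement so cephadm doesn't keep trying to place five
    ceph orch apply mon 3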
t a message about the store.db files being off, it's easiest to
> stop the working node, copy them over, set the user id/group to ceph, and
> start things up.
>
> Rob
>
> -----Original Message-----
> From: Phil Merricks
> Sent: Tuesday, June 8, 2021 3:18 PM
> To: ceph-us
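For the archives: on a cephadm deployment, the copy Rob describes would look
roughly like this (a sketch only; the fsid and hostnames are placeholders, and
I'd back up the broken mon's old store.db before overwriting it):

    # Quiesce both mons so store.db is consistent while copying
    systemctl stop ceph-<fsid>@mon.<goodhost>.service   # on the healthy node
    systemctl stop ceph-<fsid>@mon.<badhost>.service    # on the broken node

    # Copy store.db from the healthy mon's data directory to the broken one
    rsync -a <goodhost>:/var/lib/ceph/<fsid>/mon.<goodhost>/store.db/ \
          /var/lib/ceph/<fsid>/mon.<badhost>/store.db/

    # Ownership has to match what the mon runs as (ceph, or the numeric
    # uid/gid the healthy mon's files use)
    chown -R ceph:ceph /var/lib/ceph/<fsid>/mon.<badhost>/store.db

    # Start things back up
    systemctl start ceph-<fsid>@mon.<goodhost>.service
    systemctl start ceph-<fsid>@mon.<badhost>.service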
Hey folks,
I have deployed a 3-node dev cluster using cephadm. Deployment went
smoothly and all seems well.
However, if I try to mount a CephFS from a client node, 2 of the 3 mons crash.
I've begun picking through the logs to see what I can see, but so far
other than seeing the crash in the log itself,
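In case anyone wants to dig along with me, the crash details I mean can be
pulled with something like the following (the crash id and mon name are
placeholders):

    # Crash reports the cluster has collected
    ceph crash ls
    ceph crash info <crash-id>

    # Full journal for a specific mon daemon, run on the host that carries it
    cephadm logs --name mon.<hostname>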
Thanks for all the replies, folks. I think it's a testament to the
versatility of Ceph that there are some differences of opinion and
experience here.
As for the purpose of this cluster, it provides distributed storage for
stateful container workloads. The data produced is somewhat
l, and possibly the EC crush rule setup?
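For concreteness, by "EC crush rule setup" I mean something along these lines
(a sketch only; the profile and pool names are made up, and the k/m values are
just an example, not a recommendation):

    # An EC profile whose CRUSH failure domain is the host
    ceph osd erasure-code-profile set ec-example k=2 m=1 crush-failure-domain=host

    # An EC pool using that profile; RBD/CephFS data on EC need overwrites
    ceph osd pool create ec-data erasure ec-example
    ceph osd pool set ec-data allow_ec_overwrites true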
Best regards
Phil Merricks
On Wed., Nov. 11, 2020, 1:30 a.m. Robert Sander, <
r.san...@heinlein-support.de> wrote:
> On 07.11.20 at 01:14, seffyr...@gmail.com wrote:
> > I've inherited a Ceph Octopus cluster that seems like it nee