lection code will fail. This can happen when a device has multiple LVs
> where some of them are used by Ceph and at least one LV isn't used by Ceph." so
> maybe you can start there in terms of finding a potential workaround for
> now.
>
> On Wed, Aug 16, 2023 at 12:05 PM Adam H
I've been having fun today trying to introduce a new disk that replaced a
failing one into a cluster.
One of my attempts to apply an OSD spec was clearly wrong, because I now
have this error:
Module 'cephadm' has failed: 'osdspec_affinity'
and this was the traceback in the mgr logs:
Traceback (mo
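The `'osdspec_affinity'` failure reads like cephadm indexing OSD metadata with a plain dict lookup. A minimal sketch of that failure mode, assuming (not confirmed by the truncated traceback) that one OSD's metadata simply lacks the key, for instance an OSD created or adopted outside any service spec:

```python
# Sketch only: 'osds' mimics the shape of `ceph osd metadata` output;
# the key name 'osdspec_affinity' is the one from the mgr error above.
osds = [
    {"id": 0, "osdspec_affinity": "osd_spec_default"},  # created by a spec
    {"id": 1},                                          # adopted/manual OSD
]

def affinity(meta):
    # meta["osdspec_affinity"] on osds[1] would raise
    # KeyError('osdspec_affinity'), mirroring the mgr crash;
    # a defensive lookup sidesteps it.
    return meta.get("osdspec_affinity", "")

print([affinity(m) for m in osds])  # → ['osd_spec_default', '']
```

Checking each OSD with `ceph osd metadata <id>` for a missing `osdspec_affinity` field would be one way to spot the offending OSD.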
request, the new MONs were
created.
On Tue, 11 Jul 2023 at 08:57, Adam Huffman
wrote:
Forgot to say we're on Pacific 16.2.13.
On Tue, 11 Jul 2023 at 08:55, Adam Huffman
wrote:
Hello
I'm trying to add MONs in advance of a planned downtime.
This has actually ended up removing an existing MON, which isn't helpful.
The error I'm seeing is:
Invalid argument: /var/lib/ceph/mon/ceph-/store.db: does not
exist (create_if_missing is false)
error opening mon data directory at '
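One plausible cause for the disappearing MON (an assumption, not confirmed by the error above): `ceph orch apply mon` is declarative, so a placement that lists only the new hosts will drain MONs on any host that was left out. A sketch of a MON service spec that keeps the existing hosts while adding new ones; all hostnames are placeholders:

```yaml
service_type: mon
placement:
  hosts:
    - existing-mon-1   # keep every current MON host in the list
    - existing-mon-2
    - existing-mon-3
    - new-mon-1        # placeholder names for the MONs being added
    - new-mon-2
```

Applied with `ceph orch apply -i mon.yaml`; `ceph orch ls mon --export` shows the spec currently in effect.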
ion, there were just a few GBs in use immediately
after creation.
> Quoting Adam Huffman:
>
Hello
We have a new Pacific cluster configured via Cephadm.
For the OSDs, the spec is like this, with the intention for DB and WAL to
be on NVMe:
spec:
  data_devices:
    rotational: true
  db_devices:
    model: SSDPE2KE032T8L
  filter_logic: AND
  objectstore: bluestore
  wal_devices:
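For comparison, a complete drivegroup sketch with the same intent (service name and placement are placeholders; the model string is the one quoted above). Per the Ceph drive-group documentation, when `db_devices` are given and `wal_devices` are omitted, the WAL is placed on the DB device automatically, so a separate `wal_devices` filter is only needed when the WAL should go on different media than the DB:

```yaml
service_type: osd
service_id: osd_spec_nvme_db      # placeholder service name
placement:
  host_pattern: '*'               # placeholder: all hosts
spec:
  data_devices:
    rotational: true              # HDDs hold the data
  db_devices:
    model: SSDPE2KE032T8L         # NVMe for RocksDB; WAL co-locates here
  filter_logic: AND
  objectstore: bluestore
```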