On Sun, 9 Sep 2018 11:20:01 +0200
Alex Lupsa wrote:
> Hi,
> Any ideas about the below ?
Don't use consumer-grade SSDs for Ceph cache, block.db, or bcache.
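The usual community check before trusting a drive with a journal or block.db is a single-threaded, O_DIRECT, synchronous 4k write run under fio; consumer SSDs that look fast on paper often collapse to a few hundred IOPS under this workload. A hedged sketch (the target path is a placeholder, and the command is only printed so nothing is written by accident):

```shell
# Build the fio command used to gauge sync-write IOPS, the pattern a Ceph
# journal/block.db imposes. TARGET is a placeholder: point it at a scratch
# file or a disposable device before actually running the echoed command.
TARGET=${TARGET:-/tmp/fio-scratch}
CMD="fio --name=sync-write-test --filename=$TARGET --size=128M \
--direct=1 --sync=1 --rw=write --bs=4k \
--numjobs=1 --iodepth=1 --runtime=30 --time_based"
# Echo instead of executing, so the command can be reviewed first.
echo "$CMD"
```

Drives with power-loss protection typically sustain thousands of IOPS here; drives without it are the ones to avoid for block.db.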
> Thanks,
> Alex
>
> --
> Hi,
> I have a really small homelab 3-node Ceph cluster on consumer hardware -
> thanks to Proxmox for making it easy.
On Thu, 22 Nov 2018 12:05:12 +0100
Marco Gaiarin wrote:
> Hello, Paweł Sadowsk!
> In that message you wrote...
>
> > We have made similar changes many times and they always behaved as
> > expected.
>
> Ok. Good.
>
> > Can you show your crushmap / `ceph osd tree`?
>
> Sure!
>
> root@blackpanther:~# c
On Mon, 03 Dec 2018 16:41:36 +0100
si...@turka.nl wrote:
> Hi,
>
> Currently I am decommissioning an old cluster.
>
> For example, I want to remove Server X with all its OSDs.
>
> I am following these steps for each OSD on Server X:
> - ceph osd out
> - Wait for rebalance (active+clean)
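After `ceph osd out` and the rebalance back to active+clean, the remaining per-OSD steps are usually: stop the daemon, remove the OSD from the CRUSH map, delete its auth key, and remove the OSD id. A hedged dry-run sketch (the OSD id is a placeholder, and the commands are only printed so the order can be reviewed, not executed):

```shell
# Dry-run sketch of the classic per-OSD removal sequence. OSD_ID is a
# placeholder; nothing is executed, the plan is only printed for review.
OSD_ID=${OSD_ID:-12}
PLAN=$(printf '%s\n' \
    "ceph osd out $OSD_ID" \
    "systemctl stop ceph-osd@$OSD_ID" \
    "ceph osd crush remove osd.$OSD_ID" \
    "ceph auth del osd.$OSD_ID" \
    "ceph osd rm $OSD_ID")
# Print the plan, one command per line, in the order it should run.
echo "$PLAN"
```

On Luminous and newer, `ceph osd purge $OSD_ID --yes-i-really-mean-it` collapses the last three commands into one.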
Is it safe to run two clusters on one layer-2 network in production?
The goal is rbd-mirror between them.
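Generally two clusters can share a layer-2 segment as long as their fsids, monitor addresses, and config files are distinct; rbd-mirror itself only needs IP connectivity between the sites. A hedged sketch of enabling pool-mode mirroring, with `site-a`/`site-b` as placeholder cluster names and `rbd` as a placeholder pool, again printed as a dry run (the peer is added on whichever side runs the rbd-mirror daemon):

```shell
# Dry-run sketch: enable pool-mode mirroring on both clusters, then add a
# peer. Cluster names and the pool are placeholders; commands are only
# printed here, never executed.
POOL=${POOL:-rbd}
PLAN=$(printf '%s\n' \
    "rbd --cluster site-a mirror pool enable $POOL pool" \
    "rbd --cluster site-b mirror pool enable $POOL pool" \
    "rbd --cluster site-a mirror pool peer add $POOL client.site-b@site-b")
echo "$PLAN"
```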
--
Regards
Jarosław Mociak - Nettelekom GK Sp. z o.o.
___
ceph-users mailing list
ceph-u