Re: [ceph-users] Slow Ceph: Any plans on torrent-like transfers from OSDs ?

2018-09-09 Thread Jarek
On Sun, 9 Sep 2018 11:20:01 +0200 Alex Lupsa wrote:
> Hi,
> Any ideas about the below ?

Don't use consumer-grade SSDs for Ceph cache/block.db/bcache.

> Thanks,
> Alex
>
> --
> Hi,
> I have a really small homelab 3-node ceph cluster on consumer hw -
> thanks to Proxmox for making it ea
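The advice above can be verified empirically: Ceph's WAL/journal issues single-threaded 4k synchronous writes, and many consumer SSDs (lacking power-loss protection) collapse under that pattern. A minimal fio probe, assuming a spare device — `/dev/sdX` is a placeholder, and this writes directly to the disk, so only run it on an empty drive:

```shell
# WARNING: destructive -- writes raw 4k sync I/O to the named device.
# /dev/sdX is a placeholder; point it at a spare, empty SSD.
fio --name=ceph-journal-probe \
    --filename=/dev/sdX \
    --direct=1 --sync=1 \
    --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based
```

Enterprise SSDs with power-loss protection typically sustain thousands of sync write IOPS in this test; consumer drives often drop to a few hundred, which is why they make poor block.db/journal devices.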

Re: [ceph-users] New OSD with weight 0, rebalance still happen...

2018-11-22 Thread Jarek
On Thu, 22 Nov 2018 12:05:12 +0100 Marco Gaiarin wrote:
> Mandi! Paweł Sadowsk
> In chel di` si favelave...
>
> > We did similar changes many times and it always behaved as
> > expected.
>
> Ok. Good.
>
> > Can you show your crushmap/ceph osd tree?
>
> Sure!
>
> root@blackpanther:~# c
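For anyone following along, the inspection the thread asks for looks roughly like this (file names are arbitrary examples). Note that on older clusters using the legacy `straw` bucket algorithm, even adding an OSD at CRUSH weight 0 could shuffle some mappings; `straw2` was introduced to avoid that unnecessary movement:

```shell
# Show the CRUSH hierarchy with weights and reweight values.
ceph osd tree

# Export the binary CRUSH map and decompile it for reading
# (crushmap.bin / crushmap.txt are example file names).
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
```

Comparing the decompiled map before and after the change shows exactly which bucket weights moved.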

Re: [ceph-users] Decommissioning cluster - rebalance questions

2018-12-04 Thread Jarek
On Mon, 03 Dec 2018 16:41:36 +0100 si...@turka.nl wrote:
> Hi,
>
> Currently I am decommissioning an old cluster.
>
> For example, I want to remove OSD Server X with all its OSDs.
>
> I am following these steps for all OSDs of Server X:
> - ceph osd out
> - Wait for rebalance (active+clean)
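The steps quoted above can be sketched end-to-end as follows. The OSD id (`osd.7`) is a placeholder, and the `safe-to-destroy` check assumes a Luminous (12.2.x) or newer cluster:

```shell
# Repeat per OSD on the server being decommissioned.
ceph osd out osd.7                  # stop taking data; triggers rebalance
ceph -s                             # wait until all PGs are active+clean

# Luminous+ only: confirm no PG still depends on this OSD.
ceph osd safe-to-destroy osd.7

systemctl stop ceph-osd@7           # run on the OSD host itself
ceph osd purge osd.7 --yes-i-really-mean-it   # removes CRUSH entry, auth key, and osdmap entry
```

Doing the OSDs one at a time (or one host at a time) keeps the amount of data in flight during each rebalance bounded.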

[ceph-users] Two clusters in one network

2019-07-04 Thread Jarek
Are two clusters in one layer-2 network safe for production use? The goal is to run rbd-mirror between them.

--
Regards,
Jarosław Mociak - Nettelekom GK Sp. z o.o.
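Generally, two clusters can share an L2 segment as long as each has its own fsid and non-overlapping monitor addresses; rbd-mirror itself only needs IP connectivity between them. A hedged sketch of the peering setup, assuming Octopus or newer and using the default `rbd` pool with example site names:

```shell
# On cluster A: enable per-image mirroring and create a bootstrap token.
rbd mirror pool enable rbd image
rbd mirror pool peer bootstrap create --site-name site-a rbd > /tmp/bootstrap_token

# Copy the token to cluster B, then on cluster B:
rbd mirror pool enable rbd image
rbd mirror pool peer bootstrap import --site-name site-b rbd /tmp/bootstrap_token
```

A `rbd-mirror` daemon must also run on the receiving cluster; on pre-Octopus releases the peer is added manually with `rbd mirror pool peer add` instead of the bootstrap token.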