Good points, thank you for the insight.

Given that I’m hosting the journals (wal/block.dbs) on SSDs, would I need to do 
all the OSDs hosted on each journal SSD at the same time? I’m fairly sure this 
would be the case.


Senior Systems Administrator
Research Computing Services Team
University of Victoria
O: 250.472.4997

From: Janne Johansson <icepic...@gmail.com>
Date: Friday, November 15, 2019 at 11:46 AM
To: Cave Mike <mc...@uvic.ca>
Cc: Paul Emmerich <paul.emmer...@croit.io>, ceph-users 
<ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Migrating from block to lvm

On Fri, 15 Nov 2019 at 19:40, Mike Cave <mc...@uvic.ca> wrote:
So would you recommend doing an entire node at the same time, or per-OSD?

You should be able to do it per-OSD (or per-disk, in case you run more than one 
OSD per disk) to minimize data movement over the network, letting the other OSDs 
on the same host take a bit of the load while you re-make the disks one by one. 
You can use "ceph osd reweight <number> 0.0" to make that particular OSD release 
its data while the host still claims its full crush weight, meaning the other 
disks on the same host will have to absorb its data, more or less.
Moving data between disks on the same host usually goes a lot faster than over 
the network to other hosts.
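Once drained, re-making the disk as an lvm OSD could look roughly like this 
(again a sketch, not tested here; /dev/sdX, the block.db partition and the id 12 
are placeholders for your actual devices):

  systemctl stop ceph-osd@12
  # keep the id so the replacement slots back into the same CRUSH position
  ceph osd destroy 12 --yes-i-really-mean-it
  # wipe the old ceph-disk partitions
  ceph-volume lvm zap /dev/sdX --destroy
  # re-create as an lvm-backed bluestore OSD, reusing the id and the SSD db
  ceph-volume lvm create --osd-id 12 --data /dev/sdX --block.db /dev/nvme0n1p1
  # let it take data again
  ceph osd reweight 12 1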

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
