You could try flushing the FileStore journals off the SSD and creating new ones elsewhere (e.g., colocated on the data disks). This will obviously have a substantial performance impact, but perhaps that's acceptable during your upgrade window?
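Roughly along these lines per OSD — a rough, untested sketch assuming the default /var/lib/ceph layout and systemd units; <id> and the new journal path are placeholders you'd substitute for your setup:

    # stop the OSD so its journal can be flushed cleanly
    systemctl stop ceph-osd@<id>

    # flush the FileStore journal that currently lives on the SSD
    ceph-osd -i <id> --flush-journal

    # repoint the journal symlink in the OSD data dir at a colocated
    # file/partition (path here is just a placeholder)
    rm /var/lib/ceph/osd/ceph-<id>/journal
    ln -s /path/to/new/journal /var/lib/ceph/osd/ceph-<id>/journal

    # create the new journal and bring the OSD back up
    ceph-osd -i <id> --mkjournal
    systemctl start ceph-osd@<id>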
On Mon, Aug 6, 2018 at 12:32 PM Robert Stanford <rstanford8...@gmail.com> wrote:
>
> Eugen: I've tried similar approaches in the past and it seems like it
> won't work like that. I have to zap the entire journal disk. Also I plan
> to use the configuration tunable for making the bluestore partition (wal,
> db) larger than the default
>
> On Mon, Aug 6, 2018 at 2:30 PM, Eugen Block <ebl...@nde.ag> wrote:
>
>> Hi,
>>
>>> How then can one upgrade journals to BlueStore when there is more than
>>> one journal on the same disk?
>>
>> if you're using one SSD for multiple OSDs the disk probably has several
>> partitions. So you could just zap one partition at a time and replace the
>> OSD. Or am I misunderstanding the question?
>>
>> Regards,
>> Eugen
>>
>> Zitat von Bastiaan Visser <bvis...@flexyz.com>:
>>
>>> As long as your fault domain is host (or even rack) you're good, just
>>> take out the entire host and recreate all osd's on it.
>>>
>>> ----- Original Message -----
>>> From: "Robert Stanford" <rstanford8...@gmail.com>
>>> To: "ceph-users" <ceph-users@lists.ceph.com>
>>> Sent: Monday, August 6, 2018 8:39:07 PM
>>> Subject: [ceph-users] Upgrading journals to BlueStore: a conundrum
>>>
>>> According to the instructions to upgrade a journal to BlueStore
>>> (http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/),
>>> the OSD that uses the journal is destroyed and recreated.
>>>
>>> I am using SSD journals, and want to use them with BlueStore. Reusing
>>> the SSD requires zapping the disk (ceph-disk zap). But this would take
>>> down all OSDs that use this journal, not just the one-at-a-time that I
>>> destroy and recreate when following the upgrade instructions.
>>>
>>> How then can one upgrade journals to BlueStore when there is more than
>>> one journal on the same disk?
>>>
>>> R
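As an aside on the sizing tunable Robert mentions above: if the new bluestore OSDs are prepared with ceph-disk, the options involved should be bluestore_block_db_size and bluestore_block_wal_size. A hedged sketch — the sizes below are examples only, in bytes, and you'd want to tune them for your workload:

    # append to ceph.conf before preparing the new bluestore OSDs
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
    bluestore_block_db_size = 32212254720   # ~30 GiB block.db partition
    bluestore_block_wal_size = 2147483648   # 2 GiB block.wal partition
    EOF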