On 19/01/2019 02.24, Brian Topping wrote:
> 
> 
>> On Jan 18, 2019, at 4:29 AM, Hector Martin <hec...@marcansoft.com> wrote:
>>
>> On 12/01/2019 15:07, Brian Topping wrote:
>>> I’m a little nervous that BlueStore assumes it owns the partition table and 
>>> will not be happy that a couple of primary partitions have been used. Will 
>>> this be a problem?
>>
>> You should look into using ceph-volume in LVM mode. This will allow you to 
>> create an OSD out of any arbitrary LVM logical volume, and it doesn't care 
>> about other volumes on the same PV/VG. I'm running BlueStore OSDs sharing 
>> PVs with some non-Ceph stuff without any issues. It's the easiest way for 
>> OSDs to coexist with other stuff right now.
> 
> Very interesting, thanks!
> 
> On the subject, I just rediscovered the technique of putting boot and root 
> volumes on mdadm-backed stores. The last time I felt the need for this, it 
> was a lot of careful planning and commands. 
> 
> Now, at least with RHEL/CentOS, it’s available in Anaconda. As it’s set 
> up before mkfs, there’s no manual hackery to reduce the size of a volume 
> to make room for the metadata. Even better, one isn’t stuck using 0.90 
> metadata just because the /boot volume needs its header at the end (GRUB 
> now understands mdadm 1.2 headers). Just be sure /boot is RAID 1; it 
> doesn’t seem to matter what one does with the rest of the volumes. Kernel 
> upgrades process correctly as well (another major hassle in the old days, 
> since mkinitrd had to be carefully managed).
> 
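
To make the ceph-volume suggestion a bit more concrete, the flow is
roughly this (the device, VG and LV names are just placeholders; size
the LV however you like):

    pvcreate /dev/sdx
    vgcreate vg_data /dev/sdx            # or reuse an existing VG
    lvcreate -L 1T -n osd-data vg_data
    ceph-volume lvm create --bluestore --data vg_data/osd-data

ceph-volume only claims the LV you hand it, so the rest of the VG stays
available for non-Ceph volumes.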

Just to add a related experience: you still need 1.0 metadata (that's
the 1.x variant with the superblock at the end of the partition, like
0.90) for an mdadm-backed EFI system partition if you boot using UEFI.
This generally works well, except on some Dell servers where the
firmware inexplicably *writes* to the ESP, messing up the RAID
mirroring. There is a hacky workaround, though: the firmware creates a
directory ("Dell" IIRC) to put its junk in. If you create a *file* with
that name ahead of time, the firmware's mkdir fails. That doesn't seem
to cause any issues, and since the firmware doesn't touch the disk in
that case, the RAID stays in sync.
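
For reference, a rough sketch of that setup and the workaround (device
names are placeholders, and the directory name is as I recall it):

    # ESP as RAID 1 with the metadata at the end, so the firmware just
    # sees a plain FAT filesystem on each member
    mdadm --create /dev/md/esp --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/sda1 /dev/sdb1
    mkfs.vfat -F 32 /dev/md/esp
    mount /dev/md/esp /boot/efi

    # Pre-create a file where the firmware wants its directory, so its
    # mkdir fails and it leaves the disk alone
    touch /boot/efi/Dell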


-- 
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub
