On Mon, 15 Nov 2021 at 10:18, MERZOUKI, HAMID <hamid.merzo...@atos.net> wrote:
> Thanks for your answers, Janne Johansson.
> "I'm sure you can if you do it even more manually with ceph-volume, but there
> should seldom be a need to"
> Why do you think "there should seldom be a need to"?

I meant this as a response to how to handle something like:
"I have one drive with only pvcreate run on it, one with pvcreate
and vgcreate, and lastly one with pvcreate, vgcreate and lvcreate run
on it plus a named LV to use for OSD data, and I want the auto-setup
tools to handle this and take the required steps to put the WAL on
the first, the DB on the second and the data on the third."

If, for any reason, you have such a setup and these kinds of demands,
you are probably doing it wrong, or you get to do it all 100% manually
for that kind of unusual setup.
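
For the record, if you really did want that, the fully manual path is
a sketch roughly like this (the device, VG and LV names below are made
up for illustration, not taken from any real setup):

  # prepare each drive by hand
  pvcreate /dev/sdb /dev/sdc /dev/sdd
  vgcreate wal_vg /dev/sdb
  vgcreate db_vg /dev/sdc
  vgcreate data_vg /dev/sdd
  lvcreate -l 100%FREE -n wal_lv wal_vg
  lvcreate -l 100%FREE -n db_lv db_vg
  lvcreate -l 100%FREE -n data_lv data_vg
  # then hand the finished LVs to ceph-volume instead of
  # letting it carve anything up itself
  ceph-volume lvm create --bluestore \
    --data data_vg/data_lv \
    --block.db db_vg/db_lv \
    --block.wal wal_vg/wal_lv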

As soon as you grow into something resembling real-world Ceph cluster
usage, with some 5-15 OSD hosts each carrying a number of drives, you
will probably find that drives actually are empty when you get them,
and that the defaults and auto-setup scripts work out fine without you
having to think much about these things.
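
In that common case, ceph-volume's batch mode does all the LVM work
for you; roughly something like this (device names are placeholders):

  # dry-run first to see what it would create
  ceph-volume lvm batch --report --bluestore /dev/sdb /dev/sdc /dev/sdd
  # then let it make the PVs, VGs, LVs and OSDs itself
  ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc /dev/sdd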

> "Yes, upgrades do not contain LVM management, as far as I have ever seen."
> But there will be problems if later one existent OSD must be totally 
> recreated, won't it ?

'Totally recreated' just means you get to run "sgdisk -Z" or
"ceph-volume lvm zap" once before remaking a drive into an OSD again;
adding this step to your setup routine is very simple in those cases.
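
Roughly (with /dev/sdx as a placeholder for the drive being recycled):

  # wipe the partition table...
  sgdisk -Z /dev/sdx
  # ...or have ceph-volume tear down the LVM metadata and wipe the device
  ceph-volume lvm zap --destroy /dev/sdx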

-- 
May the most significant bit of your life be positive.