This question is focused on TripleO overcloud nodes meant to handle block
storage or object storage, rather than the regular control and compute nodes.

Basically, I want to get people's thoughts on how much manipulation of the
underlying storage devices we should expect to do if we want standalone
overcloud nodes to provide block and object storage via their local disks,
i.e. not via NFS/Gluster/Ceph etc.

Consider an overcloud node which will be used to provide object storage
(Swift) from its local disks.
IIUC, Swift really just cares that, for each disk you want to use for storage
(roughly the steps sketched below):
a) the disk has a partition
b) the partition has a filesystem on it
c) the partition is mounted under /srv/node
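For concreteness, a minimal hand-run version of a)-c) for a single disk might
look like the following; the device name (/dev/sdb) and the use of XFS are
illustrative assumptions on my part, not something TripleO does today:

    # a) give the disk a single partition spanning the whole device
    parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
    # b) put a filesystem on the partition (XFS is the usual choice for Swift)
    mkfs.xfs -L d1 /dev/sdb1
    # c) mount it under /srv/node and persist the mount across reboots
    mkdir -p /srv/node/d1
    echo 'LABEL=d1 /srv/node/d1 xfs noatime 0 0' >> /etc/fstab
    mount /srv/node/d1
    chown swift:swift /srv/node/d1   # assumes the swift user already exists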

Given that TripleO is taking ownership of installing the operating system on
these nodes, how much responsibility should TripleO take for getting the above
steps done? If the machine has just come in off the truck, all of those steps
would need to be done before the system can be a usable part of the overcloud.
If we don't want TripleO dealing with this right now, e.g. because it's
eventually going to be handled by Ironic
(https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk), then what is
the best process today? Is it that someone does a bunch of work on these
machines before we start the TripleO deployment process? Presumably we would
at least need to be able to feed Heat a list of partitions, which we then
mount under /srv/node, updating fstab accordingly so the changes stick? (A
rough sketch of what I have in mind is below.)
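To make that concrete, the sort of thing I'm imagining is a small
os-refresh-config-style script along these lines; the SWIFT_STORAGE_PARTITIONS
variable, standing in for a partition list supplied via the Heat template, is
purely hypothetical:

    #!/bin/bash
    set -eu
    # Hypothetical input: a whitespace-separated list of partitions that would
    # ultimately come from a Heat template parameter, e.g. "/dev/sdb1 /dev/sdc1"
    SWIFT_STORAGE_PARTITIONS=${SWIFT_STORAGE_PARTITIONS:-}
    i=1
    for part in $SWIFT_STORAGE_PARTITIONS; do
        mnt=/srv/node/d$i
        mkdir -p $mnt
        # record the mount in fstab so it sticks, then mount it now
        grep -q " $mnt " /etc/fstab || echo "$part $mnt xfs noatime 0 0" >> /etc/fstab
        mountpoint -q $mnt || mount $mnt
        chown swift:swift $mnt
        i=$((i + 1))
    done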

[Right now we skip all of a)-c) above and just have Swift using a directory,
/srv/node/d1 (doesn't this want to be under /mnt/state?), to store all its
content.]


Now consider an overcloud node which will be used to provide block storage
(Cinder) from its local disks.
IIUC, the Cinder LVM driver is the preferred option for using local storage.
In this case Cinder really just cares that each disk you want to use for
storage is added to a specific volume group [assuming we're not going to allow
people to create disk partitions and then select particular ones]. We would
then presumably need to include the appropriate filter options in lvm.conf so
the selected devices get correctly scanned by LVM at startup? (Again, a sketch
of the steps is below.)
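For the sake of discussion, the per-node work I'm picturing is roughly the
following; the device names are examples and "cinder-volumes" is just the LVM
driver's conventional default volume group name:

    # turn the chosen disks into physical volumes (example devices)
    pvcreate /dev/sdb /dev/sdc
    # create the volume group the cinder LVM driver is configured to use
    vgcreate cinder-volumes /dev/sdb /dev/sdc
    # vgextend cinder-volumes /dev/sdd   # to add a further disk later

with an lvm.conf filter restricting scanning to those devices, e.g.:

    devices {
        # accept only the devices backing cinder-volumes, reject everything else
        # (devices backing other VGs, e.g. a root VG, would also need accepting)
        filter = [ "a|^/dev/sdb$|", "a|^/dev/sdc$|", "r|.*|" ]
    }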

[Right now we do all this for a dummy loopback device which gets created under 
/mnt/state/var/lib/cinder/ and whose size you can set via the Heat template: 
https://github.com/openstack/tripleo-image-elements/blob/master/elements/cinder-volume/os-refresh-config/post-configure.d/72-cinder-resize-volume-groups ]
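For comparison, a loopback-backed volume group along those lines can be built
roughly as below; the backing file path, size and VG name are illustrative
rather than a transcription of what that element actually does:

    # create a sparse backing file (path and size are examples)
    truncate -s 10G /mnt/state/var/lib/cinder/volumes-backing
    # attach it to a loop device and build the VG on top of it
    LOOPDEV=$(losetup --show -f /mnt/state/var/lib/cinder/volumes-backing)
    pvcreate $LOOPDEV
    vgcreate cinder-volumes $LOOPDEV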

Thanks
Charles
