--- Begin Message ---
>>
>>However I have some questions:
>>
>>1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the
>>extend request (writing the nodename to the extend queue) be handled
>>directly in C, or would it be preferable to do it via an API call such
>>as PUT /nodes/{node}/qemu/{vmid}/extend_request, passing the nodename
>>as a parameter?

I think qmeventd should be as fast as possible. For my first version, I
was doing an exec of "qm extend ....", but I think it would be even
better if qmeventd simply writes the vmid/diskid to a text file
somewhere in /etc/pve (then the other daemon can manage the queue).
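
As a rough sketch of that idea (the queue file path and the
"vmid:diskid" line format below are just assumptions for illustration,
nothing settled), the C side could stay this small:

#include <stdio.h>

/* append one extend request per BLOCK_WRITE_THRESHOLD event, so that
 * qmeventd returns to its event loop immediately */
int queue_extend_request(const char *vmid, const char *diskid)
{
    FILE *f = fopen("/etc/pve/extend-queue", "a"); /* assumed path */
    if (!f)
        return -1; /* e.g. /etc/pve currently read-only */
    fprintf(f, "%s:%s\n", vmid, diskid);
    fclose(f);
    return 0;
}

The consuming daemon can then read and truncate the file under a lock
and work through the entries at its own pace.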


>>2. If we use a local daemon for each node, how is it decided which
>>node will perform the extend operation?
>>Another option is to use a centralized daemon (maybe on the quorum
>>master) that performs every extend.
The local daemon on the node where the VM is running should resize the
LVM volume, because if the resize is done on another node, the LV needs
to be rescanned/refreshed before the new size is visible. AFAIK, that
is not done automatically.
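
To put that in concrete terms (the helper is hypothetical; the command
lines are the standard LVM ones), the extend done locally is a single
lvextend, e.g. extend_local_lv("pve", "vm-100-disk-0", "1G"):

#include <stdio.h>
#include <stdlib.h>

/* run on the node where the VM lives: grow the LV in place */
int extend_local_lv(const char *vg, const char *lv, const char *step)
{
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "lvextend -L +%s %s/%s", step, vg, lv);
    return system(cmd);
}

/* any *other* node with the LV active would first need something like
 * "lvchange --refresh pve/vm-100-disk-0" before it sees the new size,
 * which is exactly why the VM's own node should do the extend */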



>>3. Is there any specific reason for the event to only be triggered
>>at 50% of the last chunk in your implementation? I was thinking of
>>implementing it with 10% of the current provisioned space, to be
>>safe. Any opinions on this?

I used the same implementation as Red Hat, but it could be a tunable
value; it really depends on how fast your storage is. As far as I
remember, it was 50% of the chunk size (1GB), so when you have 500MB
free, it adds another 1GB.

Of course, if you have a fast NVMe and you write at 2GB/s, that margin
will be too short.


If you go with 10% of the provisioned size: with a 2TB qcow2, it will
grow when 200GB is free (and by how much do we want to grow it? Another
10%?).

But with a 2GB qcow2, it will grow when only 200MB is free. So with a
fast NVMe it might not work for small disks, while being fine for big
ones. I think that's why Red Hat uses a percentage of a fixed chunk
size (which you can configure depending on your storage speed).
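
To put numbers on that, here is a throwaway comparison of the two
policies (chunk size and percentages as discussed above; none of this
is from an actual patch):

#include <stdio.h>
#include <stdint.h>

#define MiB (1024ULL * 1024)
#define GiB (1024ULL * MiB)

int main(void)
{
    uint64_t chunk = 1 * GiB; /* fixed chunk, tunable per storage speed */
    uint64_t disks[] = { 2 * GiB, 2048 * GiB };

    for (int i = 0; i < 2; i++) {
        /* fixed-chunk policy: fire when 50% of the last chunk is left */
        uint64_t fixed = chunk / 2;
        /* percent policy: the headroom scales with provisioned size */
        uint64_t pct = disks[i] / 10;
        printf("%4llu GiB disk: fixed policy fires at %llu MiB free, "
               "10%% policy at %llu MiB free\n",
               (unsigned long long)(disks[i] / GiB),
               (unsigned long long)(fixed / MiB),
               (unsigned long long)(pct / MiB));
    }
    return 0;
}

At 2GB/s the ~200MB margin on the small disk lasts a tenth of a second,
while the 200GB of headroom on the big disk is mostly wasted; the
fixed, configurable chunk avoids both extremes.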

>>In terms of locking, I'm planning to use cfs_lock_file to write to
>>the extend queue and cfs_lock_storage to perform the extend on the
>>target disk.

yes, that's what I had in mind too.


One thing we also need to handle: the server losing quorum for a few
seconds (which makes /etc/pve read-only). I think we need to keep the
info in qmeventd's memory and retry writing the queue file once quorum
is good again.
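
A hedged sketch of that retry idea (path, sizes and calling pattern all
made up for illustration); it relies on the fact that opening a file on
/etc/pve for writing simply fails while the node has no quorum:

#include <stdio.h>

#define MAX_PENDING 64

static char pending[MAX_PENDING][64];
static int npending;

/* called from the BLOCK_WRITE_THRESHOLD handler */
void remember_extend(const char *vmid, const char *diskid)
{
    if (npending < MAX_PENDING)
        snprintf(pending[npending++], sizeof(pending[0]), "%s:%s",
                 vmid, diskid);
}

/* called periodically from the qmeventd main loop */
void flush_pending(void)
{
    FILE *f = fopen("/etc/pve/extend-queue", "a"); /* assumed path */
    if (!f)
        return; /* still read-only: keep the entries, retry later */
    for (int i = 0; i < npending; i++)
        fprintf(f, "%s\n", pending[i]);
    npending = 0;
    fclose(f);
}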


--- End Message ---