On 4/15/25 16:10, Timo Veith wrote:
> Hello Mira,
> 
> thank you very much for your reply.
> 
>> On 15.04.2025 at 11:09, Mira Limbeck <m.limb...@proxmox.com> wrote:
>>
>> Hi Timo,
>>
>> At the moment I'm working on storage mapping support for iSCSI.
>> This would allow configuring different portals on each of the hosts
>> that all refer to the same logical storage.
>>
>> If you have tried setting up an iSCSI storage where each host can only
>> access a subset of the announced portals, you probably noticed
>> higher pvestatd update times.
>> The storage mapping implementation will alleviate those issues.
>>
>> Other than that I'm not aware of anyone working on iSCSI improvements at
>> the moment.
>> We do have some open enhancement requests in our bug tracker [0], one of
>> which is yours [1].
> 
> From the list [0] you mentioned, iSCSI CHAP credentials in the GUI are 
> something we are interested in, too. 
This is probably a bit more work to implement with the current way the
plugin works.
Since the discoverydb is recreated constantly, you would have to set the
credentials before each login. Alternatively, they could be passed to
iscsiadm as options, in which case we need to make sure that no sensitive
information is logged on error.
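Just to illustrate, with plain iscsiadm that would mean updating the node
record before each login, roughly like this (target IQN, portal and
credentials below are placeholders):

    # set CHAP as auth method and store the credentials in the node record
    iscsiadm -m node -T iqn.2025-04.com.example:target0 -p 192.0.2.10:3260 \
        --op update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T iqn.2025-04.com.example:target0 -p 192.0.2.10:3260 \
        --op update -n node.session.auth.username -v myuser
    iscsiadm -m node -T iqn.2025-04.com.example:target0 -p 192.0.2.10:3260 \
        --op update -n node.session.auth.password -v mysecret
    # only then log in to the target
    iscsiadm -m node -T iqn.2025-04.com.example:target0 -p 192.0.2.10:3260 --login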

> 
>>
>> Regarding multipath handling via the GUI there hasn't been much of a
>> discussion on how we could tackle that yet. It is quite easy to set up
>> [2] the usual way.
> 
> I know that it is easy, because otherwise I wouldn’t have been able to 
> configure it ;)
> 
> 
>>
>>
>> Sorry, I might have missed your bug report previously, so I'll go into a
>> bit more detail here. (I'll add that information to the enhancement
>> request as well)
>>
>>> When adding iSCSI storage to the data center, there could be the
>>> possibility to do an iSCSI discovery multiple times against different
>>> portal IPs and thus get multiple paths to an iSCSI SAN.
>>
>> That's already the default. For each target we run the discovery on at
>> least one portal since it should announce all other portals. We haven't
>> encountered a setup where that is not the case.
> 
> I am dealing only with setups that do not announce their portals. I have to 
> do an iSCSI discovery for every portal IP address. Those are mostly Infortrend 
> iSCSI SAN systems, but also some from Huawei. But I think I know what you mean. 
> Some storage devices give you all portals when you do a discovery against one 
> of their IP addresses.
> However, it would be great to have the possibility to enter multiple portal IP 
> addresses in the web UI, together with CHAP credentials.  
I tried just allowing multiple portals, and it didn't scale well.
For setups where each host has access to the same portals and targets,
the current implementation already works nicely.
But for asymmetric setups where each host can only connect to different
portals, and maybe different targets altogether, it doesn't bring any
benefit.

That's the reason I'm currently working on a `storage mapping` solution
where you can specify host-specific portals and targets that all map to
the same `logical` storage.

Do your SANs provide the same target on all portals, or is it always a
different target for each portal?
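
For comparison: on SANs that announce all their portals, a single
sendtargets discovery against one of them already lists every portal/target
combination, e.g. (addresses and IQN are made up):

    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
    # example output:
    # 192.0.2.10:3260,1 iqn.2025-04.com.example:target0
    # 192.0.2.11:3260,2 iqn.2025-04.com.example:target0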

> 
>>
>>> multipathd should be updated with the paths to the LUNs. The user
>>> would only need to add vendor-specific device configs
>>> like ALUA or multibus settings.
>>
>> For now that has to be done manually. There exists a multipath.conf
>> setting that automatically creates a multipath mapping for devices that
>> have at least 2 paths available: `find_multipaths yes` [3].
> 
> I will test `find_multipaths yes`. If I understand you correctly, then the 
> command `multipath -a <wwid>`, as described in the multipath wiki article [2], 
> will not be necessary.
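For reference, a minimal multipath.conf defaults section with that option
set could look like this (merge it with your existing vendor-specific
settings):

    defaults {
        # create a multipath map automatically once a device has >= 2 paths
        find_multipaths yes
    }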
> 
>>
>>> Then when adding a certain disk to a VM, it would be good if its WWN
>>> were displayed instead of e.g. "CH 00 ID0 LUN0". That would make it
>>> easier to identify the right one.
>>
>> That would be a nice addition. It shouldn't be too hard to extract that
>> information in the ISCSIPlugin and provide it as additional information
>> via the API.
>> That information could also be listed in the `VM Disks` page of iSCSI
>> storages.
>> Would you like to tackle that?
> 
> Are you asking me to provide the code for that? 
Since you mentioned `If there are any, what are they, what is their
status and can they be supplemented or contributed to?` I assumed you
were willing to contribute code as well. That's why I asked if you
wanted to tackle that improvement.
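
To give you a starting point: the WWN itself is easy to read on the host,
for example (the device name below is just an example):

    # print the WWN of an iSCSI-backed block device
    lsblk -o NAME,SIZE,WWN /dev/sdb
    # or query the SCSI WWID via udev's scsi_id helper
    /lib/udev/scsi_id --whitelisted --device=/dev/sdb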

> 
>>
>>> Also, when a LUN has been grown on the storage side, it would be handy
>>> to have a button in the PVE web GUI to "refresh" the disk in the VM.
>>> The new size should be reflected in the hardware details of the VM.
>>> And the QEMU process should be informed of the new disk size so the VM
>>> would not have to be shut down and restarted.
>>
>> Based on experience, I doubt it would be that easy. Refreshing of the
>> LUN sizes involves the SAN, the client, multipath and QEMU. There's
>> always at least one place where it doesn't update even with
>> `rescan-scsi-bus.sh`, `multipath -r`, etc.
>> If you have a reliable way to make all sides agree on the new size,
>> please let us know.
> 
> Don’t get me wrong, I didn’t mean that it should be possible to resize an 
> iSCSI disk right from the PVE web GUI. I meant that if one has changed the 
> size of a LUN on the SAN side with whatever steps are necessary there 
> (e.g. with Infortrend you need to log in to the management software, 
> find the LUN and then resize it), then refreshing that new size could be 
> triggered by a button in the PVE web GUI. When pressing the button, an 
> iSCSI rescan of the corresponding iSCSI session would have to be done, then 
> a multipath map rescan like you wrote, and finally a QEMU block device 
> refresh. (And/or the equivalent for LXC containers.) 
> 
> Even if I do all that manually, the size of the LUN in the hardware 
> details of the VM is not being updated. 
> 
> I personally do not know how, but at least I know that it is possible in 
> oVirt/RHV. 
We've seen some setups in our enterprise support where none of the
above-mentioned commands helped after a resize. The host still saw the old
size. Only a reboot helped.
So that's going to be difficult to do for all combinations of hardware
and software.

Do you have a reliable set of commands that work in all your cases of a
resize, so that the host sees the correct size, and multipath resizes
reliably?
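
For reference, the rough sequence we try (device, map and drive names below
are placeholders) looks like this; it just doesn't work reliably on every
combination:

    # rescan the iSCSI sessions so the kernel sees the new LUN size
    iscsiadm -m session --rescan
    # or rescan a single SCSI device directly
    echo 1 > /sys/block/sdb/device/rescan
    # let multipath resize the map on top of the resized paths
    multipathd resize map mpatha    # or: multipath -r
    # finally tell QEMU about the new size via the VM monitor
    qm monitor <vmid>    # then: block_resize drive-scsi0 <new size>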

> 
> Regards,
> Timo
> 
>>
>>
>>
>> [0]
>> https://bugzilla.proxmox.com/buglist.cgi?bug_severity=enhancement&list_id=50969&resolution=---&short_desc=iscsi&short_desc_type=allwordssubstr
>> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6133
>> [2] https://pve.proxmox.com/wiki/Multipath
>> [3]
>> https://manpages.debian.org/bookworm/multipath-tools/multipath.conf.5.en.html
>>
> 



