On 30/10/2024 09:41, Thomas Lamprecht wrote:
> On 25/10/2024 at 13:13, Friedrich Weber wrote:
>> When KRBD is enabled for an RBD storage, the storage plugin calls out
>> to `rbd map` to map an RBD image as a block device on the host.
>> Sometimes it might be necessary to pass custom options to `rbd map`.
>> For instance, in some setups with Windows VMs, KRBD logs `bad
>> crc/signature` and VM performance is degraded unless the `rxbounce`
>> option is enabled, as reported in the forum [1].
>>
>> To allow users to specify custom options for KRBD, introduce a
>> corresponding `krbd-map-options` property to the RBD plugin. The
>> property is designed to accept only a supported set of map options.
>> For now, this is only the `rxbounce` map option, but the supported set
>> can be extended in the future.
>>
>> The reasoning for constraining the supported set of map options
>> instead of allowing users to pass a free-form option string is as
>> follows: if `rxbounce` turns out to be a sensible default, accepting a
>> free-form option string now will make it hard to switch the default
>> over to `rxbounce` while still allowing users to disable `rxbounce`
>> if needed. This would require scanning the free-form string for
>> `norxbounce` or similar, which is cumbersome.
>
> Reading the Ceph KRBD option docs [0], it seems a bit like it might
> be valid to always enable this for OS type Windows? That could save
> us an option here and avoid doing this storage-wide.

I don't think the 'bad crc/signature' errors necessarily occur for each
and every Windows VM on KRBD. But then again, I just set up a Windows
Server 2022 VM on KRBD and got ~10 of those quite quickly, with
innocuous actions (opening the browser and the like). Also, some users
recently reported [1] needing rxbounce. So yes, enabling rxbounce for
all Windows VM disks might be a good alternative, but as Fabian points
out, technically this isn't really possible at the moment, because
activate_volume doesn't know about the corresponding VM disk's ostype.

>> If users need to set a map option that `krbd-map-options` does not
>> support (yet), they can alternatively set the RBD config option
>> `rbd_default_map_options` [2].
>
> But that would work now already? So this is basically just to expose it
> directly in the PVE (UI) stack?

In my tests, setting `rbd_default_map_options` works for enabling
rxbounce. A forum user reported problems with that approach and I asked
for more details [2], but I haven't heard back yet.
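For reference, a rough sketch of how that workaround looks (the pool
name `mypool` is just an example):

    # per pool, via the RBD CLI:
    rbd config pool set mypool rbd_default_map_options rxbounce

    # or cluster-wide, in the [client] section of ceph.conf:
    rbd_default_map_options = rxbounce

With this patch applied, the per-storage equivalent should be something
like:

    pvesm set <storage> --krbd-map-options rxbounce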
> One reason I'm not totally happy with such stuff is that storage-wide is
> quite a big scope; users might then tend to configure the same Ceph pool as
> multiple PVE storages, something that can have bad side effects.
> We basically had this issue when the krbd flag was first added: back then
> it was an "always use krbd or never use krbd" flag, now it's rather an
> "always use krbd or else use what works (librbd for VMs and krbd for CTs)"
> flag, and a big reason was that otherwise one would need two pools or,
> worse, expose the same pool twice to PVE. This patch feels a bit like
> going slightly back in that direction, albeit it's not 1:1 the same and
> it might be fine, but I'd also like to have the alternatives evaluated a
> bit more closely before going this route.

Yeah, I see the point. Of course, another alternative is enabling
`rxbounce` unconditionally, as initially requested in [1]. I'm a bit
hesitant to do that because, from reading its description, I'd expect it
could have a performance impact -- probably small, if any, but this
should be checked before changing the default.

[1] https://forum.proxmox.com/threads/155741/
[2] https://forum.proxmox.com/threads/155741/post-715664