On 19.03.2025 13:05, Mykyta Poturai wrote:
> On 18.03.25 16:26, Jan Beulich wrote:
>> On 18.03.2025 14:31, Mykyta Poturai wrote:
>>> On 18.03.25 12:11, Jan Beulich wrote:
>>>> On 18.03.2025 10:10, Mykyta Poturai wrote:
>>>>> On 15.01.24 11:35, Jan Beulich wrote:
>>>>>> On 14.01.2024 11:01, Mykyta Poturai wrote:
>>>>>>> --- a/xen/include/public/hvm/dm_op.h
>>>>>>> +++ b/xen/include/public/hvm/dm_op.h
>>>>>>> @@ -444,6 +444,17 @@ struct xen_dm_op_nr_vcpus {
>>>>>>>     };
>>>>>>>     typedef struct xen_dm_op_nr_vcpus xen_dm_op_nr_vcpus_t;
>>>>>>>
>>>>>>> +#define XEN_DMOP_inject_msi2 21
>>>>>>> +#define XEN_DMOP_MSI_SOURCE_ID_VALID (1u << 0)
>>>>>>> +
>>>>>>> +struct xen_dm_op_inject_msi2 {
>>>>>>> +    uint64_aligned_t addr;
>>>>>>> +    uint32_t data;
>>>>>>> +    uint32_t source_id; /* PCI SBDF */
>>>>>>
>>>>>> Since the comment says SBDF (not BDF), how are multiple segments handled
>>>>>> here? At least on x86 (VT-d and AMD-Vi) source IDs are only 16 bits iirc,
>>>>>> and are local to the respective segment. It would feel wrong to use a
>>>>>> 32-bit quantity there; IOW I'd rather see this as being two 16-bit fields
>>>>>> (segment and source_id).
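>>>>>>
>>>>>> Purely for illustration, that alternative could look like the below
>>>>>> (a sketch only; the 16-bit field names are assumptions):
>>>>>>
>>>>>> struct xen_dm_op_inject_msi2 {
>>>>>>     uint64_aligned_t addr;
>>>>>>     uint32_t data;
>>>>>>     uint16_t segment;   /* PCI segment */
>>>>>>     uint16_t source_id; /* source ID, local to the segment */
>>>>>> };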
>>>>>
>>>>> I'm planning on resuming this series in the near future and want to
>>>>> clarify the DM op interface.
>>>>>
>>>>> Wouldn't it be better to keep things consistent and use
>>>>> XEN_DMOP_PCI_SBDF as it's done in xen_dm_op_ioreq_server_range? Also
>>>>> with this, the value can be easily cast to pci_sbdf_t later and split
>>>>> to segment and BDF if needed.
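>>>>>
>>>>> For illustration only (assuming the usual encoding with the segment in
>>>>> the upper 16 bits), the split would be along the lines of:
>>>>>
>>>>>     uint16_t seg = source_id >> 16;     /* PCI segment */
>>>>>     uint16_t bdf = source_id & 0xffff;  /* bus/device/function */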
>>>>
>>>> The essence of my earlier comment is: Naming, contents, and comments need
>>>> to be in sync.
>>>>
>>>> I question though that "casting to pci_sbdf_t" is technically possible.
>>>> Nor am I convinced that it would be desirable to do so if it was possible
>>>> (or if "casting" was intended to mean something else than what this is in
>>>> C). See my recent comments on some of Andrii's patches [1][2].
>>>>
>>>> Jan
>>>>
>>>> [1] https://lists.xen.org/archives/html/xen-devel/2025-03/msg01294.html
>>>> [2] https://lists.xen.org/archives/html/xen-devel/2025-03/msg01301.html
>>>
>>> Would something like this be okay then?
>>>
>>> struct xen_dm_op_inject_msi2 {
>>>       /* IN - MSI data (lower 32 bits) */
>>>       uint32_t data;
>>>       /* IN - ITS devid of the device triggering the interrupt */
>>>       uint32_t source_id;
>>>       uint32_t flags;
>>>       uint32_t pad;
>>>       /* IN - MSI address */
>>>       uint64_aligned_t addr;
>>> };
>>>
>>> Added padding and explained source_id's purpose better.
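>>>
>>> For illustration, a caller might fill this in roughly as below; the
>>> numeric values are placeholders rather than taken from a real platform:
>>>
>>>     struct xen_dm_op_inject_msi2 msi = {
>>>         .data      = 0x20,                         /* MSI data word */
>>>         .source_id = 0x100,                        /* ITS devid */
>>>         .flags     = XEN_DMOP_MSI_SOURCE_ID_VALID, /* source_id is set */
>>>         .addr      = 0x08020040,                   /* MSI doorbell */
>>>     };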
>>
>> I fear the comment is far from making clear what layout source_id is to
>> have, hence also leaving open whether a segment number is included there.
>> Of course the issue here may be that I have no clue what "ITS devid"
>> means or implies. What is clear is that "ITS devid" is meaningless on
>> x86, for example.
> 
> The ITS devid is implementation-defined. Its size is also
> implementation-defined, but it is at most 32 bits.
> 
> Quotes from Arm Base System Architecture[1]:
>  > The system designer assigns a requester a unique StreamID to device
> traffic input to the SMMU.
>  > If a requester is a bridge from a different interconnect with an
> originator ID, like a PCIe RequesterID, and devices on that interconnect
> might need to send MSIs, the originator ID is used to generate a
> DeviceID. The function to generate the DeviceID should be an identity or
> a simple offset.
> 
> On Xen's side it is used as an offset into the ITS translation tables
> and is sourced from msi-map nodes in the device tree.
> 
> In practice, for PCI the requester ID is usually the SBDF: the segment
> is used to find the host bridge node that contains the msi-map node with
> the defined offsets, but it is also included as part of the ID. That's
> why it was originally called SBDF in the comment. I don't know of a
> better way to describe it concisely in the comments.
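> 
> To illustrate the "identity or a simple offset" part: an msi-map entry of
> the form <rid-base msi-controller msi-base length> essentially describes
> a translation along these lines (a sketch, not actual Xen code):
> 
>     static uint32_t rid_to_devid(uint32_t rid, uint32_t rid_base,
>                                  uint32_t msi_base)
>     {
>         /* The caller has checked rid lies within the entry's RID range. */
>         return msi_base + (rid - rid_base);
>     }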

If this is to be usable for other architectures too, it may need to be
a union that goes there, with appropriate comments for each member.
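
For illustration, such a union might look something like the below; the
member names are assumptions, not a worked-out interface:

    struct xen_dm_op_inject_msi2 {
        uint32_t data;
        union {
            uint32_t its_devid;     /* Arm: ITS DeviceID, e.g. from msi-map */
            struct {
                uint16_t segment;   /* x86: PCI segment */
                uint16_t source_id; /* x86: requester ID within the segment */
            } pci;
        } source;
        uint32_t flags;
        uint32_t pad;
        uint64_aligned_t addr;
    };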

Jan
