On 7/17/23 19:07, Peter Maydell wrote:
> On Mon, 17 Jul 2023 at 17:56, Eric Auger <eric.au...@redhat.com> wrote:
>>
>> Hi Peter,
>> On 7/17/23 12:50, Peter Maydell wrote:
>>> On Tue, 11 Jul 2023 at 00:04, Michael S. Tsirkin <m...@redhat.com> wrote:
>>>> From: Eric Auger <eric.au...@redhat.com>
>>>>
>>>> When running on a 64kB page size host and protecting a VFIO device
>>>> with the virtio-iommu, qemu crashes with this kind of message:
>>>>
>>>> qemu-kvm: virtio-iommu page mask 0xfffffffffffff000 is incompatible
>>>> with mask 0x20010000
>>>> qemu: hardware error: vfio: DMA mapping failed, unable to continue
>>>>
>>>> This is due to the fact that the IOMMU MR corresponding to the VFIO
>>>> device is enabled very late, on domain attach, after machine init.
>>>> The device reports a minimal 64kB page size, but by then it is too
>>>> late to apply it. virtio_iommu_set_page_size_mask() fails and this
>>>> causes vfio_listener_region_add() to end up calling hw_error().
>>>>
>>>> To work around this issue, we transiently enable the IOMMU MR on
>>>> machine init to collect the page size requirements and then restore
>>>> the bypass state.
>>>>
>>>> Fixes: 90519b9053 ("virtio-iommu: Add bypass mode support to assigned device")
>>>> Signed-off-by: Eric Auger <eric.au...@redhat.com>
>>> Hi; Coverity complains about this change (CID 1517772):
>> thank you for reporting the issue
>>>> +static void virtio_iommu_freeze_granule(Notifier *notifier, void *data)
>>>> +{
>>>> +    VirtIOIOMMU *s = container_of(notifier, VirtIOIOMMU, machine_done);
>>>> +    int granule;
>>>> +
>>>> +    if (likely(s->config.bypass)) {
>>>> +        /*
>>>> +         * Transient IOMMU MR enable to collect page_size_mask requirements
>>>> +         * through memory_region_iommu_set_page_size_mask() called by
>>>> +         * VFIO region_add() callback
>>>> +         */
>>>> +         s->config.bypass = false;
>>>> +         virtio_iommu_switch_address_space_all(s);
>>>> +         /* restore default */
>>>> +         s->config.bypass = true;
>>>> +         virtio_iommu_switch_address_space_all(s);
>>>> +    }
>>>> +    s->granule_frozen = true;
>>>> +    granule = ctz64(s->config.page_size_mask);
>>>> +    trace_virtio_iommu_freeze_granule(BIT(granule));
>>> Specifically, in this code, it thinks that ctz64() can
>>> return 64, in which case BIT(granule) is shifting off
>>> the end of the value, which is undefined behaviour.
>>> This can happen if s->config.page_size_mask is 0 -- are
>>> there assertions/checks that that can't happen elsewhere?
>> To me this cannot happen. The page_size_mask is initialized with
>> qemu_target_page_mask(), then only further constrained by
>> virtio_iommu_set_page_size_mask(), which calls error_setg() if the new
>> mask is 0 or if (cur_mask & new_mask) == 0.
>>
>> What can I do to give Coverity a hint that page_size_mask cannot be 0?
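
(For reference, that check is conceptually along these lines; this is only
a paraphrase, not the exact code of virtio_iommu_set_page_size_mask():

    if (new_mask == 0 || (cur_mask & new_mask) == 0) {
        /* this is where the "page mask ... is incompatible" error comes from */
        error_setg(errp, "virtio-iommu page mask 0x%"PRIx64
                   " is incompatible with mask 0x%"PRIx64,
                   cur_mask, new_mask);
        return -1;    /* reject; page_size_mask is left unchanged */
    }

so page_size_mask can only ever shrink to a non-empty intersection.)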
> You can assert() it if you believe it to be true and you
> think an assert() would help a human reader,
> or we could just say "this is a false positive" and
> mark it that way in the Coverity UI. We don't need to
> change things purely to make Coverity happy.
OK
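
Something like this then, right before the ctz64() call in
virtio_iommu_freeze_granule() (a minimal sketch, assuming the invariant
described above holds):

    /*
     * page_size_mask starts from qemu_target_page_mask() and can only be
     * narrowed by virtio_iommu_set_page_size_mask(), which rejects an
     * empty mask, so it cannot be 0 here.
     */
    assert(s->config.page_size_mask);
    granule = ctz64(s->config.page_size_mask);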
>
>>> Secondly, BIT() only works for values below 32, since
>>> it works on type unsigned long, which might be a 32-bit
>>> type on some hosts. Since you used ctz64(),
>>> you probably want BIT_ULL(), which uses the ULL type,
>>> which is guaranteed to be at least 64 bits wide.
>> agreed.
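
i.e. a one-line change along these lines (sketch only):

    -    trace_virtio_iommu_freeze_granule(BIT(granule));
    +    trace_virtio_iommu_freeze_granule(BIT_ULL(granule));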
> Looking more closely at this, the file is not entirely
> consistent about how it handles the page_size_mask:
>  * in virtio_iommu_translate() we call ctz32()
>    and then use an open-coded 1 << n on that
>  * in virtio_iommu_set_page_size_mask() we call
>    ctz64() and use BIT()
>  * in virtio_iommu_freeze_granule() we call ctz64()
>    and then use BIT()
>
> So I suspect it's actually true (or at least assumed)
> that the granule value can never be 32 or higher, as
> well as it being non-zero.
Yes, I don't expect the granule to be >= 32, but I agree the current
implementation is inconsistent here and would deserve some alignment.

I will work on a patch ...
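
Roughly, I would align everything on the 64-bit helpers, e.g. (an
illustrative sketch of the common pattern only, not the actual patch; the
local variable names are just for illustration):

    /*
     * Derive the granule and page size from the 64-bit mask with matching
     * 64-bit helpers everywhere, instead of mixing ctz32()/open-coded
     * shifts with ctz64()/BIT().
     */
    uint64_t mask = s->config.page_size_mask;
    int granule = ctz64(mask);
    uint64_t page_size = BIT_ULL(granule);   /* rather than BIT() or 1 << n */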

Thanks

Eric
>
> thanks
> -- PMM
>

