https://lore.kernel.org/all/20120731103457.20182.88454.stgit@zurg/
https://lore.kernel.org/all/20120731103503.20182.94365.stgit@zurg/
But I think that change broke hugetlb at least once, so there's an
explicit hugetlb check to restore that behavior:
https
/*
 * The interrupt handler. This is called on every event.
 * Just call the poller directly to log any events.
 * This could in theory increase the threshold under high load,
 * but doesn't for now.
 */
static void intel_threshold_interrupt(void)
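{
	/*
	 * Hedged sketch of the body (only the comment and signature are
	 * quoted above): mainline defers to the generic MCE poller, gated
	 * by CMCI storm detection; exact contents vary by kernel version.
	 */
	if (cmci_storm_detect())
		return;

	/* No threshold adjustment here: just log whatever the owned banks hold. */
	machine_check_poll(MCP_TIMESTAMP, this_cpu_ptr(&mce_banks_owned));
}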
I think that matches what I was thinking. I mean, for 2) I'm not sure
whether it can be seen a
I'm equally curious how unpoisoning could help here. The reasoning
behind it would be great material to mention in the next cover letter.
Shouldn't we consider migrating serious workloads off the host already
when there's a sign of more severe hardware
On Thu, Jan 27, 2022 at 10:24:27AM +0100, Eugenio Perez Martin wrote:
> On Thu, Jan 27, 2022 at 9:06 AM Peter Xu wrote:
> >
> > On Tue, Jan 25, 2022 at 10:40:01AM +0100, Eugenio Perez Martin wrote:
> > > So I think that the first step to remove complexity from the ol
rgs->hole_left->iova > args->iova_last) {
> >
> > IMHO this check is redundant and can be dropped, as it's already done in
> > iova_tree_alloc_map_in_hole().
> >
>
> Assuming we add "iova_found" to the IOVATreeAllocArgs used by
> iova_tree_alloc_map_in_hole():
>         if (!args->iova_found) {
>             return IOVA_ERR_NOMEM;
>         }
>     }
>     map->iova = args->iova_result;
>     ...
Thanks,
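For readers following along, here is a hedged sketch of the IOVATreeAllocArgs
layout this exchange assumes. Only hole_left, iova_begin/iova_last, iova_found
and iova_result are names taken from the thread; hwaddr and DMAMap are QEMU's
existing types from util/iova-tree.h, and the remaining fields and grouping
are assumptions for illustration only:

typedef struct IOVATreeAllocArgs {
    /* Inputs */
    hwaddr iova_begin;        /* lowest usable iova */
    hwaddr iova_last;         /* highest usable iova */
    hwaddr new_size;          /* size (inclusive) of the mapping to place */

    /* State updated while walking the allocated maps */
    const DMAMap *hole_left;  /* allocated map on the left of the current hole */
    const DMAMap *hole_right; /* allocated map on the right, NULL past the end */

    /* Outputs, as proposed above */
    bool iova_found;          /* set once a large-enough hole is found */
    hwaddr iova_result;       /* start of that hole, clamped to iova_begin */
} IOVATreeAllocArgs;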
> +        /*
> +         * 2nd try: Last iteration left args->right as the last DMAMap. But
> +         * (right, end) hole needs to be checked too
> +         */
> +        iova_tree_alloc_args_iterate(&args, NULL);
> +        if (!iova_tree_alloc_map_in_hole(&args)) {
> +            return IOVA_ERR_NOMEM;
> +        }
> +    }
> +
> +    map->iova = MAX(iova_begin,
> +                    args.hole_left ?
> +                    args.hole_left->iova + args.hole_left->size + 1 : 0);
> +    return iova_tree_insert(tree, map);
> +}
> +
> void iova_tree_destroy(IOVATree *tree)
> {
>     g_tree_destroy(tree->tree);
> --
> 2.27.0
>
--
Peter Xu
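As a usage illustration (not part of the series), here is a minimal caller
sketch, assuming the function quoted above ends up taking a
(tree, map, iova_begin, iova_last) argument list and relying on the existing
DMAMap/IOVA_OK definitions from QEMU's util/iova-tree.h; map_one_buffer() is
a made-up helper name:

#include "qemu/osdep.h"
#include "qemu/iova-tree.h"

/* Allocate an IOVA for a 4 KiB buffer somewhere inside [iova_begin, iova_last]. */
static int map_one_buffer(IOVATree *tree, hwaddr buf_addr,
                          hwaddr iova_begin, hwaddr iova_last)
{
    DMAMap map = {
        .translated_addr = buf_addr,
        .size = 4096 - 1,            /* DMAMap sizes are inclusive */
        .perm = IOMMU_RW,
    };
    int ret = iova_tree_alloc(tree, &map, iova_begin, iova_last);

    if (ret != IOVA_OK) {
        return ret;                  /* e.g. IOVA_ERR_NOMEM: no hole big enough */
    }
    /* On success the mapping is already inserted and map.iova holds the result. */
    return 0;
}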
to know your thoughts too (including Jason). I'll comment further in
that thread soon.
Thanks,
--
Peter Xu
On Mon, Jan 24, 2022 at 10:20:55AM +0100, Eugenio Perez Martin wrote:
> On Mon, Jan 24, 2022 at 5:33 AM Peter Xu wrote:
> >
> > On Fri, Jan 21, 2022 at 09:27:23PM +0100, Eugenio Pérez wrote:
> > > +int iova_tree_alloc(IOVATree *tree, DMAMap *map, hwaddr iova_b
https://lore.kernel.org/qemu-devel/cacgkmetzapd9xqtp_r4w296n_qz7vuv1flnb544fevoyo0o...@mail.gmail.com/
That solution still sounds very sensible to me even without the newly
introduced list in the previous two patches.
IMHO we could move "DMAMap *previous, *this" into the IOVATreeAllocArgs
> +    }
> +
> +    map->iova = MAX(iova_begin,
> +                    args.hole_left ?
> +                    args.hole_left->iova + args.hole_left->size + 1 : 0);
> +    return iova_tree_insert(tree, map);
> +}
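To make the hole bookkeeping above concrete, here is a hedged sketch (not the
patch's code, and the callback name is made up) of a GTree traversal callback
that records the first hole large enough for the new mapping, reusing the
IOVATreeAllocArgs layout sketched earlier; the iova_last bound and the
trailing (last map, end) hole handled by the "2nd try" path are left out for
brevity:

static gboolean iova_tree_find_hole(gpointer key, gpointer value, gpointer data)
{
    const DMAMap *this = value;        /* g_tree_foreach() visits in iova order */
    IOVATreeAllocArgs *args = data;
    /* First free iova after the map on the left (or iova_begin if none yet) */
    hwaddr hole_start = args->hole_left ?
        MAX(args->iova_begin,
            args->hole_left->iova + args->hole_left->size + 1) :
        args->iova_begin;

    if (this->iova > hole_start && this->iova - hole_start > args->new_size) {
        /* [hole_start, this->iova - 1] can hold the new (inclusive) size */
        args->iova_found = true;
        args->iova_result = hole_start;
        return true;                   /* stop the traversal */
    }

    args->hole_left = this;            /* this map bounds the next hole */
    return false;                      /* keep walking */
}

The caller would then fill map->iova from args.iova_result and call
iova_tree_insert(), much like the hunk above.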
Re the algorithm - I totally agree Jason's version is much better
issues.
> >
> > Do they ring any bells?
> >
> > $ ./qemu -m 4G -smp 4 -M q35,accel=kvm,kernel-irqchip=split \
> > -drive file=fedora.qcow2,format=qcow2,if=virtio \
> > -device intel-iommu,intremap=on,device-i
The variable is never used.
CC: Michael S. Tsirkin
CC: Jason Wang
CC: virtualization@lists.linux-foundation.org
CC: netdev@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Peter Xu
---
drivers/net/virtio_net.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
k and wake up the virtqueue in this case.
>
> Fixes: 6b1e6cc7855b ("vhost: new device IOTLB API")
> Reported-by: Peter Xu
> Signed-off-by: Jason Wang
Without this patch, this command will trigger the IO hang nearly every
time from host to guest:
netperf -H 1.2.3.4 -l 5 -t