e can
> > implement separate helpers for different virtqueue layout/features
> > and then the in-order support is implemented on top.
> >
> > Tests show a 2%-19% improvement in packed virtqueue PPS with KVM guest
> > vhost-net/testpmd on the host.
> >
> > Chang
On Mon, Jul 28, 2025 at 6:17 PM Michael S. Tsirkin wrote:
>
> On Mon, Jul 28, 2025 at 02:41:29PM +0800, Jason Wang wrote:
> > This patch implements in order support for both split virtqueue and
> > packed virtqueue. Performance could be gained for the device where the
> >
On Mon, Jul 14, 2025 at 10:48 AM Jason Wang wrote:
>
> This patch adds basic in order support for vhost. Two optimizations
> are implemented in this patch:
>
> 1) Since the driver uses descriptors in order, vhost can deduce the next
> avail ring head by counting the number of de
On Thu, Jul 24, 2025 at 8:40 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER support in
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then the in-o
On Mon, Jul 28, 2025 at 02:41:29PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for devices where
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
> Ben
This patch implements in order support for both split virtqueue and
packed virtqueue. Performance could be gained for devices where
memory access could be expensive (e.g. vhost-net or a real PCI device):
Benchmark with KVM guest:
Vhost-net on the host: (pktgen + XDP_DROP
On Sat, Jul 26, 2025 at 4:57 AM Thorsten Blum wrote:
>
> Hi Jason,
>
> On 23. Jul 2025, at 23:40, Jason Wang wrote:
> >
> > This patch implements in order support for both split virtqueue and
> > packed virtqueue. Performance could be gained for the device where
Hi Jason,
On 23. Jul 2025, at 23:40, Jason Wang wrote:
>
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for devices where
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
>
On Thu, Jul 24, 2025 at 02:40:17PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for devices where
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
> Ben
This patch implements in order support for both split virtqueue and
packed virtqueue. Performance could be gained for devices where
memory access could be expensive (e.g. vhost-net or a real PCI device):
Benchmark with KVM guest:
Vhost-net on the host: (pktgen + XDP_DROP
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER support in
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
and then the in-order support is implemented on top.
Tests show a 2%-19% improvement with packed virtqueue
On 7/18/25 11:29 AM, Michael S. Tsirkin wrote:
> Paolo I'm likely confused. That series is in net-next, right?
> So now it would be work to drop it from there, and invalidate
> all the testing it got there, for little benefit -
> the merge conflict is easy to resolve.
Yes, that series is in net-ne
On Fri, Jul 18, 2025 at 11:19:26AM +0200, Paolo Abeni wrote:
> On 7/18/25 4:04 AM, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni wrote:
> >> On 7/17/25 8:01 AM, Jason Wang wrote:
> >>> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin
> >>> wrote:
> On Thu, Jul 17, 2025
On 7/18/25 4:04 AM, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni wrote:
>> On 7/17/25 8:01 AM, Jason Wang wrote:
>>> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 8:04 AM
On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni wrote:
>
> On 7/17/25 8:01 AM, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
> >> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> >>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
>
> On
On 7/17/25 8:01 AM, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
>> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
>>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> This seri
On Thu, Jul 17, 2025 at 02:01:06PM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
> >
> > On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> > > On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
> > > >
> > > > On Mon, 14 Jul 2025 16:47:52 +080
On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
>
> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
> > >
> > > On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > > > This series implements VIRTIO_F_IN_ORDER sup
On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
> >
> > On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > > feature is designed to improve the perfo
On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
>
> On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > feature is designed to improve the performance of the virtio ring by
> > optimizing descriptor processing.
> >
On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> feature is designed to improve the performance of the virtio ring by
> optimizing descriptor processing.
>
> Benchmarks show a notable improvement. Please see patch 3 for d
of vhost_add_used_ooo()
> - consistent nheads for vhost_add_used_in_order()
> - typo fixes and other tweaks
>
> Thanks
>
> Jason Wang (3):
> vhost: fail early when __vhost_add_used() fails
> vhost: basic in order support
> vhost_net: basic in_order support
>
> driv
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:
1) Since the driver uses descriptors in order, vhost can deduce the next
avail ring head by counting the number of descriptors that have been
used in next_avail_head. This eliminates the need to
early when vhost_add_used() fails
- drop unused parameters of vhost_add_used_ooo()
- consistent nheads for vhost_add_used_in_order()
- typo fixes and other tweaks
Thanks
Jason Wang (3):
vhost: fail early when __vhost_add_used() fails
vhost: basic in order support
vhost_net: basic in_order
On Thu, Jul 10, 2025 at 5:05 PM Eugenio Perez Martin
wrote:
>
> On Tue, Jul 8, 2025 at 8:48 AM Jason Wang wrote:
> >
> > This patch adds basic in order support for vhost. Two optimizations
> > are implemented in this patch:
> >
> > 1) Since driver uses descr
On Tue, Jul 8, 2025 at 8:48 AM Jason Wang wrote:
>
> This patch adds basic in order support for vhost. Two optimizations
> are implemented in this patch:
>
> 1) Since the driver uses descriptors in order, vhost can deduce the next
> avail ring head by counting the number of de
On 7/8/25 2:48 AM, Jason Wang wrote:
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:
1) Since the driver uses descriptors in order, vhost can deduce the next
avail ring head by counting the number of descriptors that have been
used in
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:
1) Since the driver uses descriptors in order, vhost can deduce the next
avail ring head by counting the number of descriptors that have been
used in next_avail_head. This eliminates the need to
order support
vhost_net: basic in_order support
drivers/vhost/net.c | 88 +-
drivers/vhost/vhost.c | 121 +++---
drivers/vhost/vhost.h | 8 ++-
3 files changed, 170 insertions(+), 47 deletions(-)
--
2.31.1
Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
> >>>>
> >>>> On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> >>>>> This patch implements in order support for both split virtqueue and
> >>>>> packed virtqueue.
> > >
implements in order support for both split virtqueue and
packed virtqueue.
I'd like to see more motivation for this work, documented.
It's not really performance, not as it stands, see below:
Benchmark with KVM guest + testpmd on the host shows:
For split virtqueue: no obvious differ
On Wed, Jul 2, 2025 at 6:57 PM Michael S. Tsirkin wrote:
>
> On Wed, Jul 02, 2025 at 05:29:18PM +0800, Jason Wang wrote:
> > On Tue, Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
> > >
> > > On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> >
On Wed, Jul 02, 2025 at 05:29:18PM +0800, Jason Wang wrote:
> On Tue, Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
> >
> > On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> > > This patch implements in order support for both split virtqueue and
> > &
On Tue, Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
>
> On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> > This patch implements in order support for both split virtqueue and
> > packed virtqueue.
>
> I'd like to see more motivation for this work,
When writing symtypes information, we iterate through the entire hash
table containing type expansions. The key order varies unpredictably
as new entries are added, making it harder to compare symtypes between
builds.
Resolve this by sorting the type expansions by name before output.
Signed-off
On Mon, Jun 16, 2025 at 04:24:58PM +0800, Jason Wang wrote:
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER support in
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then
On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue.
I'd like to see more motivation for this work, documented.
It's not really performance, not as it stands, see below:
>
> Be
18:51, Masahiro Yamada
> > > wrote:
> > > >
> > > > On Wed, Jun 25, 2025 at 6:52 PM Giuliano Procida
> > > > wrote:
> > > > >
> > > > > When writing symtypes information, we iterate through the entire hash
> > > &
cida
> > > wrote:
> > > >
> > > > When writing symtypes information, we iterate through the entire hash
> > > > table containing type expansions. The key order varies unpredictably
> > > > as new entries are added, making it harder to compare sy
e through the entire hash
> > > table containing type expansions. The key order varies unpredictably
> > > as new entries are added, making it harder to compare symtypes between
> > > builds.
> > >
> > > Resolve this by sorting the type expansions b
Hi.
On Sun, 29 Jun 2025 at 18:51, Masahiro Yamada wrote:
>
> On Wed, Jun 25, 2025 at 6:52 PM Giuliano Procida wrote:
> >
> > When writing symtypes information, we iterate through the entire hash
> > table containing type expansions. The key order varies unpredictabl
On Wed, Jun 25, 2025 at 6:52 PM Giuliano Procida wrote:
>
> When writing symtypes information, we iterate through the entire hash
> table containing type expansions. The key order varies unpredictably
> as new entries are added, making it harder to compare symtypes between
> buil
When writing symtypes information, we iterate through the entire hash
table containing type expansions. The key order varies unpredictably
as new entries are added, making it harder to compare symtypes between
builds.
Resolve this by sorting the type expansions by name before output.
Signed-off
On Mon, Jun 16, 2025 at 10:25 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER support in
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> t
This patch implements in order support for both split virtqueue and
packed virtqueue.
Benchmark with KVM guest + testpmd on the host shows:
For split virtqueue: no obvious differences were noticed
For packed virtqueue:
1) RX gets 3.1% PPS improvements from 6.3 Mpps to 6.5 Mpps
2) TX gets 4.6
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER support in
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
and then the in-order support is implemented on top.
Tests show a 3%-5% improvement with packed virtqueue PPS
Previously, the order for acquiring the locks required for the migration
function move_enc_context_from() was: 1) memslot lock 2) vCPU lock. This
can trigger a deadlock warning because a vCPU IOCTL modifying memslots
will acquire the locks in reverse order: 1) vCPU lock 2) memslot lock.
This
we can
> > implement separate helpers for different virtqueue layout/features
> > then the in-order were implemented on top.
> >
> > Tests shows 3%-5% imporvment with packed virtqueue PPS with KVM guest
> > testpmd on the host.
>
> ok this looks quite clean. We are i
On Wed, May 28, 2025 at 02:42:15PM +0800, Jason Wang wrote:
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER support in
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then
On Wed, May 28, 2025 at 8:42 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER support in
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then the in-o
This patch implements in order support for both split virtqueue and
packed virtqueue.
Benchmark with KVM guest + testpmd on the host shows:
For split virtqueue: no obvious differences were noticed
For packed virtqueue:
1) RX gets 3.1% PPS improvements from 6.3 Mpps to 6.5 Mpps
2) TX gets 4.6
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER support in
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
and then the in-order support is implemented on top.
Tests show a 3%-5% improvement with packed virtqueue PPS
>
> > > > Tested-by: Lei Yang
> > > >
> > > > On Mon, Mar 24, 2025 at 1:45 PM Jason Wang wrote:
> > > > >
> > > > > Hello all:
> > > > >
> > > > > This series tries to implement the VIRTIO_F_IN_OR
Thanks
Lei
>
>
> > > Tested-by: Lei Yang
> > >
> > > On Mon, Mar 24, 2025 at 1:45 PM Jason Wang wrote:
> > > >
> > > > Hello all:
> > > >
> > > > This series tries to implement VIRTIO_F_IN_ORDER support in
> > > &
to implement the VIRTIO_F_IN_ORDER to
> > > virtio_ring. This is done by introducing virtqueue ops so we can
> > > implement separate helpers for different virtqueue layout/features
> > > and then the in-order support is implemented on top.
> > >
> > > Tests sh
On Sat, May 17, 2025 at 07:27:51PM +0200, Konrad Dybcio wrote:
> From: Konrad Dybcio
>
> Certain /soc@0 subnodes are very out of order. Reshuffle them.
>
> Signed-off-by: Konrad Dybcio
> ---
> arch/arm64/boot/dts/qcom/sc8280xp.dtsi | 574
> ---
From: Konrad Dybcio
Certain /soc@0 subnodes are very out of order. Reshuffle them.
Signed-off-by: Konrad Dybcio
---
arch/arm64/boot/dts/qcom/sc8280xp.dtsi | 574 -
1 file changed, 287 insertions(+), 287 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom
The TI-SCI processor control handle, 'tsp', will be refactored from
k3_r5_core struct into k3_r5_rproc struct in a future commit. So, the
'tsp' pointer will be initialized inside k3_r5_cluster_rproc_init() now.
Move the k3_r5_release_tsp() function, which releases the tsp handle,
above k3_r5_clust
invoked by the latter.
While at it, also re-order the k3_r5_core_of_get_sram_memories() to keep
all the internal memory initialization functions at one place.
Signed-off-by: Beleswar Padhi
Tested-by: Judith Mendez
Reviewed-by: Andrew Davis
---
v12: Changelog:
1. Carried R/B tag.
Link to
Kdamond.update_schemes_tried_regions() reads and stores tried regions
information out of address order. It makes debugging a test failure
difficult. Change the behavior to do the reading and writing in the
address order.
Signed-off-by: SeongJae Park
---
tools/testing/selftests/damon
From: Konrad Dybcio
Certain /soc@0 subnodes are very out of order. Reshuffle them.
Signed-off-by: Konrad Dybcio
---
arch/arm64/boot/dts/qcom/sc8280xp.dtsi | 574 -
1 file changed, 287 insertions(+), 287 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom
The TI-SCI processor control handle, 'tsp', will be refactored from
k3_r5_core struct into k3_r5_rproc struct in a future commit. So, the
'tsp' pointer will be initialized inside k3_r5_cluster_rproc_init() now.
Move the k3_r5_release_tsp() function, which releases the tsp handle,
above k3_r5_clust
invoked by the latter.
While at it, also re-order the k3_r5_core_of_get_sram_memories() to keep
all the internal memory initialization functions at one place.
Signed-off-by: Beleswar Padhi
Tested-by: Judith Mendez
---
v11: Changelog:
1. Carried T/B tag.
Link to v10:
https://lore.kern
he legacy interface, the device formatting these as
> > little endian when the guest is big endian would surprise me more
> > than
> > it using guest native byte order (which would make it compatible with
> > the current implementation). Nevertheless somebody trying to
>
On Thu, 17 Apr 2025 11:01:54 -0700 Dan Williams
wrote:
> Darrick J. Wong wrote:
> > On Thu, Apr 10, 2025 at 12:12:33PM -0700, Alison Schofield wrote:
> > > On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> > > > Alison reports an issue with fsdax when large extents end up usin
invoked by the latter.
While at it, also re-order the k3_r5_core_of_get_sram_memories() to keep
all the internal memory initialization functions at one place.
Signed-off-by: Beleswar Padhi
---
v10: Changelog:
1. Re-ordered both core_of_get_{internal/sram}_memories() together.
2. Moved releas
The TI-SCI processor control handle, 'tsp', will be refactored from
k3_r5_core struct into k3_r5_rproc struct in a future commit. So, the
'tsp' pointer will be initialized inside k3_r5_cluster_rproc_init() now.
Move the k3_r5_release_tsp() function, which releases the tsp handle,
above k3_r5_clust
Darrick J. Wong wrote:
> On Thu, Apr 10, 2025 at 12:12:33PM -0700, Alison Schofield wrote:
> > On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> > > Alison reports an issue with fsdax when large extents end up using
> > > large ZONE_DEVICE folios:
> > >
> >
> > Passes the ndctl/
On Thu, Apr 10, 2025 at 12:12:33PM -0700, Alison Schofield wrote:
> On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> > Alison reports an issue with fsdax when large extents end up using
> > large ZONE_DEVICE folios:
> >
>
> Passes the ndctl/dax unit tests.
>
> Tested-by: Aliso
> by the letter of the spec virtio_le_to_cpu() would have been
> sufficient.
> But when the legacy interface is not used, it boils down to the same.
>
> And when using the legacy interface, the device formatting these as
> little endian when the guest is big endian would surprise me more
>
On Sun, 13 Apr 2025 17:52:12 -0500
Ira Weiny wrote:
> Device partitions have an implied order which is made more complex by
> the addition of a dynamic partition.
>
> Remove the ram special case information calls in favor of generic calls
> with a check ahead of time to ensure t
/fs/dax.c
> > > +++ b/fs/dax.c
> > > @@ -396,6 +396,7 @@ static inline unsigned long dax_folio_put(struct
> > > folio *folio)
> > > order = folio_order(folio);
> > > if (!order)
> > > return 0;
> > > + folio_rese
Device partitions have an implied order which is made more complex by
the addition of a dynamic partition.
Remove the ram special case information calls in favor of generic calls
with a check ahead of time to ensure the preservation of the implied
partition order.
Signed-off-by: Ira Weiny
(adding CC list again, because I assume it was dropped by accident)
diff --git a/fs/dax.c b/fs/dax.c
index af5045b0f476e..676303419e9e8 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -396,6 +396,7 @@ static inline unsigned long dax_folio_put(struct folio
*folio)
order = folio_order(folio
order)
folio_set_order(folio, new_order);
else
- ClearPageCompound(&folio->page);
+ folio_reset_order(folio);
}
I think that's wrong. We're splitting this folio into order-0 folios,
but folio_reset_order() is going to modify folio->
__split_folio_to_order(struct folio *folio,
> int old_order,
> if (new_order)
> folio_set_order(folio, new_order);
> else
> - ClearPageCompound(&folio->page);
> + folio_reset_order(folio);
> }
I think that's wrong. We
Matthew Wilcox wrote:
> On Thu, Apr 10, 2025 at 01:15:07PM -0700, Dan Williams wrote:
> > For consistency and clarity what about this incremental change, to make
> > the __split_folio_to_order() path reuse folio_reset_order(), and use
> > typical bitfield helpers for manipulating _flags_1?
>
> I d
On Thu, Apr 10, 2025 at 01:15:07PM -0700, Dan Williams wrote:
> For consistency and clarity what about this incremental change, to make
> the __split_folio_to_order() path reuse folio_reset_order(), and use
> typical bitfield helpers for manipulating _flags_1?
I dislike this intensely. It obfusca
31/0x180
> [ 417.817859] __handle_mm_fault+0xee1/0x1a60
> [ 417.818325] ? debug_smp_processor_id+0x17/0x20
> [ 417.818844] handle_mm_fault+0xe1/0x2b0
> [...]
>
> The issue is that when we split a large ZONE_DEVICE folio to order-0
> ones, we don't reset the order/_nr_p
On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> Alison reports an issue with fsdax when large extents end up using
> large ZONE_DEVICE folios:
>
Passes the ndctl/dax unit tests.
Tested-by: Alison Schofield
snip
[ 417.817424] __do_fault+0x31/0x180
[ 417.817859] __handle_mm_fault+0xee1/0x1a60
[ 417.818325] ? debug_smp_processor_id+0x17/0x20
[ 417.818844] handle_mm_fault+0xe1/0x2b0
[...]
The issue is that when we split a large ZONE_DEVICE folio to order-0
ones, we don't reset the order/_nr_pages. As
On 07/04/25 18:59, Andrew Davis wrote:
On 3/17/25 7:05 AM, Beleswar Padhi wrote:
The core's internal memory data structure will be refactored to be part
of the k3_r5_rproc structure in a future commit. As a result, internal
memory initialization will need to be performed inside
k3_r5_cluster_r
On 3/17/25 7:05 AM, Beleswar Padhi wrote:
The core's internal memory data structure will be refactored to be part
of the k3_r5_rproc structure in a future commit. As a result, internal
memory initialization will need to be performed inside
k3_r5_cluster_rproc_init() after rproc_alloc().
Therefor
erent virtqueue layout/features
> and then the in-order support is implemented on top.
>
> Tests show a 5% improvement in RX PPS with KVM guest + testpmd on the
> host.
>
> Please review.
>
> Thanks
>
> Jason Wang (19):
> virtio_ring: rename virtqueue_reinit_xxx to virt
> > Fixes: 8345adbf96fc1 ("virtio: console: Accept console size along with
> > resize control message")
> > Signed-off-by: Halil Pasic
> > Cc: sta...@vger.kernel.org # v2.6.35+
> > ---
> >
> > @Michael: I think it would be nice to add a clarification on t
> Signed-off-by: Halil Pasic
> Cc: sta...@vger.kernel.org # v2.6.35+
> ---
>
> @Michael: I think it would be nice to add a clarification on the byte
> order to be used for cols and rows when the legacy interface is used to
> the spec, regardless of what we decide the right byte or
on, Mar 24, 2025 at 1:45 PM Jason Wang wrote:
> >
> > Hello all:
> >
> > This series tries to implement VIRTIO_F_IN_ORDER support in
> > virtio_ring. This is done by introducing virtqueue ops so we can
> > implement separate helpers for different virtqueue layo
> > - __virtio16 rows;
> > __virtio16 cols;
> > + __virtio16 rows;
> > } size;
>
> The order of the fields after the patch matches the spec, so from that
> perspective, looks fine:
> Reviewed-by: Daniel V
e
> *vdev,
> break;
> case VIRTIO_CONSOLE_RESIZE: {
> struct {
> - __virtio16 rows;
> __virtio16 cols;
> + __virtio16 rows;
> } size;
The order of the fields after th
> by the letter of the spec virtio_le_to_cpu() would have been
> sufficient.
> But when the legacy interface is not used, it boils down to the same.
>
> And when using the legacy interface, the device formatting these as
> little endian when the guest is big endian would surprise me more
>
According to section 5.3.6.2 (Multiport Device Operation) of the virtio
spec (version 1.2), a control buffer with the event VIRTIO_CONSOLE_RESIZE
is followed by a virtio_console_resize struct containing cols then rows.
The kernel implements this the wrong way around (rows then cols) resulting
in the
irtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> and then the in-order support is implemented on top.
>
> Tests show a 5% improvement in RX PPS with KVM guest + testpmd on the
> host.
>
> Please review.
>
> Thanks
>
> Jason Wang (19):
This patch implements in order support for both split virtqueue and
packed virtqueue. Dedicated virtqueue ops are introduced for the packed
virtqueue. Most of the ops are reused, except the ones that have major
differences.
KVM guest + testpmd on the host shows 5% improvement in packed
virtqueue TX
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER support in
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
and then the in-order support is implemented on top.
Tests show a 5% improvement in RX PPS with KVM guest
used, it boils down to the same.
And when using the legacy interface, the device formatting these as
little endian when the guest is big endian would surprise me more than
it using guest native byte order (which would make it compatible with
the current implementation). Nevertheless somebody trying to
The core's internal memory data structure will be refactored to be part
of the k3_r5_rproc structure in a future commit. As a result, internal
memory initialization will need to be performed inside
k3_r5_cluster_rproc_init() after rproc_alloc().
Therefore, move the internal memory initialization f
On 3/14/25 10:17 AM, Luca Weiss wrote:
> During upstreaming the order of clocks was adjusted to match the
> upstream sort order, but mistakenly freq-table-hz wasn't re-ordered
> with the new order.
>
> Fix that by moving the entry for the ICE clk to the last place.
>
During upstreaming the order of clocks was adjusted to match the
upstream sort order, but mistakenly freq-table-hz wasn't re-ordered
with the new order.
Fix that by moving the entry for the ICE clk to the last place.
Fixes: 5a814af5fc22 ("arm64: dts: qcom: sm6350: Add UFS nodes")
The execution order of constructors is undefined and depends on the
toolchain. While recent toolchains seem to have a stable order, it
doesn't work for older ones and may also change at any time.
Stop validating the order and instead only validate that all
constructors are executed.
Rep
On Thu, Mar 06, 2025 at 10:52:39PM +0100, Thomas Weißschuh wrote:
> The execution order of constructors is undefined and depends on the
> toolchain. While recent toolchains seem to have a stable order, it
> doesn't work for older ones and may also change at any time.
>
>