> From: Nicolin Chen
> Sent: Friday, February 21, 2025 5:16 AM
>
> On Wed, Feb 19, 2025 at 06:58:16AM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Tuesday, February 18, 2025 11:36 PM
> > >
> > > On Fri, Jan 24, 2025 at 04:30:36PM -0800, Nicolin Chen wrote:
> > > > +int iommu
Hello:
This series was applied to bpf/bpf-next.git (net)
by Martin KaFai Lau:
On Sun, 16 Feb 2025 17:34:25 +0800 you wrote:
> This series expands the XDP TX metadata framework to allow user
> applications to pass per packet 64-bit launch time directly to the kernel
> driver, requesting launch ti
On Thu, Feb 20, 2025 at 4:45 PM Michael S. Tsirkin wrote:
>
> On Thu, Feb 20, 2025 at 08:58:38AM +0100, Paolo Abeni wrote:
> > Hi,
> >
> > On 2/15/25 7:04 AM, Akihiko Odaki wrote:
> > > tun simply advances iov_iter when it needs to pad virtio header,
> > > which leaves the garbage in the buffer as
On Thu, Feb 20, 2025 at 12:45:46PM -0800, Nicolin Chen wrote:
>
> diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
> index fd2f13a63f27..be9746ecdc65 100644
> --- a/include/uapi/linux/iommufd.h
> +++ b/include/uapi
Hello:
This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski:
On Sat, 15 Feb 2025 15:04:50 +0900 you wrote:
> tun simply advances iov_iter when it needs to pad virtio header,
> which leaves the garbage in the buffer as is. This will become
> especially problematic when tun start
On Sun, 16 Feb 2025 17:34:26 +0800 Song Yoong Siang wrote:
> Extend the XDP Tx metadata framework so that the user can request launch time
> hardware offload, where the Ethernet device will schedule the packet for
> transmission at a pre-determined time called launch time. The value of
> launch time i
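[Editor's note: a minimal userspace sketch of how an AF_XDP application might request launch-time offload per frame, assuming the series adds an XDP_TXMD_FLAGS_LAUNCH_TIME flag and a 64-bit launch_time field to the request union of struct xsk_tx_metadata; these names follow the series' description and are not verified against the final UAPI.]

#include <linux/if_xdp.h>
#include <stdint.h>

/* The TX metadata sits in the frame headroom, immediately before the
 * packet data, as with the existing checksum/timestamp requests.
 */
static void request_launch_time(void *pkt_data, uint64_t txtime_ns)
{
	struct xsk_tx_metadata *meta =
		(struct xsk_tx_metadata *)((char *)pkt_data - sizeof(*meta));

	meta->flags = XDP_TXMD_FLAGS_LAUNCH_TIME;	/* assumed flag name */
	meta->request.launch_time = txtime_ns;		/* absolute time, ns */
}

The descriptor submitting this frame would still need XDP_TX_METADATA set in its options field, as with the existing TX metadata requests.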
On Wed, Feb 19, 2025 at 06:58:16AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Tuesday, February 18, 2025 11:36 PM
> >
> > On Fri, Jan 24, 2025 at 04:30:36PM -0800, Nicolin Chen wrote:
> > > +int iommufd_viommu_report_event(struct iommufd_viommu *viommu,
> > > +
On 02/20, Mina Almasry wrote:
> We should not enable netmem TX for drivers that don't declare support.
>
> Check for driver netmem TX support during devmem TX binding and fail if
> the driver does not have the functionality.
>
> Check for driver support in validate_xmit_skb as well.
>
> Signed-o
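[Editor's note: a minimal sketch of the two checks described above, assuming the capability is exposed as a per-netdev flag, called netmem_tx here; the flag name is illustrative, not taken from the patch.]

/* At TX dmabuf binding time: refuse devices that never opted in. */
static int net_devmem_check_tx_support(struct net_device *dev)
{
	if (!dev->netmem_tx)
		return -EOPNOTSUPP;
	return 0;
}

/* In validate_xmit_skb(): an skb carrying unreadable (netmem) frags may
 * only leave through a device that declared netmem TX support, e.g. it
 * could have been redirected to a different device after binding.
 */
static bool skb_netmem_tx_ok(const struct sk_buff *skb,
			     const struct net_device *dev)
{
	return skb_frags_readable(skb) || dev->netmem_tx;
}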
On 02/20, Mina Almasry wrote:
> Add documentation outlining the usage and details of the devmem TCP TX
> API.
>
> Signed-off-by: Mina Almasry
With a few nits below:
Acked-by: Stanislav Fomichev
>
> ---
>
> v4:
> - Mention SO_BINDTODEVICE is recommended (me/Pavel).
>
> v2:
> - Update docume
On 02/20, Mina Almasry wrote:
> The TX path may release the dmabuf in a context where we cannot wait.
> This happens when the user unbinds a TX dmabuf while there are still
> references to its netmems in the TX path. In that case, the netmems will
> be put_netmem'd from a context where we can't unm
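[Editor's note: conceptually, the usual answer to "the last reference may drop in a context where we cannot wait" is to defer the heavy teardown to process context; a rough sketch under that assumption follows. Field and function names are illustrative, not the patch's.]

struct net_devmem_dmabuf_binding {
	refcount_t ref;
	struct work_struct unbind_w;	/* INIT_WORK()ed at binding creation */
	/* ... dma-buf attachment, sg table, tx_vec ... */
};

static void net_devmem_unbind_work(struct work_struct *work)
{
	struct net_devmem_dmabuf_binding *binding =
		container_of(work, struct net_devmem_dmabuf_binding, unbind_w);

	/* Process context: safe to unmap and detach the dma-buf here. */
	kfree(binding);
}

static void net_devmem_binding_put(struct net_devmem_dmabuf_binding *binding)
{
	/* May be called from the TX completion path (atomic context). */
	if (refcount_dec_and_test(&binding->ref))
		schedule_work(&binding->unbind_w);
}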
On 02/20, Mina Almasry wrote:
> Augment dmabuf binding to be able to handle TX. Additional to all the RX
> binding, we also create tx_vec needed for the TX path.
>
> Provide API for sendmsg to be able to send dmabufs bound to this device:
>
> - Provide a new dmabuf_tx_cmsg which includes the dmab
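[Editor's note: a hedged userspace sketch of what sending from a TX-bound dmabuf could look like, based only on the cover text above and the documentation patch elsewhere in this thread (MSG_ZEROCOPY required); the SOL_SOCKET/SCM_DEVMEM_DMABUF cmsg type and the exact dmabuf_tx_cmsg layout are assumptions.]

#include <linux/types.h>
#include <sys/socket.h>
#include <string.h>

struct dmabuf_tx_cmsg {
	__u32 dmabuf_id;		/* id returned by the netlink TX binding */
};

static ssize_t devmem_sendmsg(int fd, void *off_in_dmabuf, size_t len,
			      __u32 dmabuf_id)
{
	char ctrl[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))] = {};
	/* iov_base is interpreted as an offset into the bound dmabuf,
	 * not a host pointer.
	 */
	struct iovec iov = { .iov_base = off_in_dmabuf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
	struct dmabuf_tx_cmsg tx = { .dmabuf_id = dmabuf_id };

	cm->cmsg_level = SOL_SOCKET;
	cm->cmsg_type = SCM_DEVMEM_DMABUF;	/* assumed cmsg type */
	cm->cmsg_len = CMSG_LEN(sizeof(tx));
	memcpy(CMSG_DATA(cm), &tx, sizeof(tx));

	return sendmsg(fd, &msg, MSG_ZEROCOPY);
}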
On Tue, Feb 18, 2025 at 02:50:46PM -0400, Jason Gunthorpe wrote:
> On Tue, Feb 18, 2025 at 10:28:04AM -0800, Nicolin Chen wrote:
> > On Tue, Feb 18, 2025 at 01:18:21PM -0400, Jason Gunthorpe wrote:
> > > On Fri, Jan 24, 2025 at 04:30:42PM -0800, Nicolin Chen wrote:
> > >
> > > > @@ -1831,31 +1831,
On 02/20, Mina Almasry wrote:
> Add support for devmem TX in ncdevmem.
>
> This is a combination of the ncdevmem from the devmem TCP series RFCv1
> which included the TX path, and work by Stan to include the netlink API
> and refactored on top of his generic memory_provider support.
>
> Signed-of
On 02/20, Mina Almasry wrote:
> Currently net_iovs support only pp ref counts, and do not support a
> page ref equivalent.
>
> This is fine for the RX path as net_iovs are used exclusively with the
> pp and only pp refcounting is needed there. The TX path however does not
> use pp ref counts, thus
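[Editor's note: a rough sketch of the gap being described: pages already have get_page()/put_page(), while net_iovs so far only carry page-pool refs, so the TX path wants a netmem-generic pair along the lines of the put_netmem mentioned above. The bodies below are illustrative, not the patch's.]

static inline void get_netmem(netmem_ref netmem)
{
	if (netmem_is_net_iov(netmem))
		net_devmem_get_net_iov(netmem_to_net_iov(netmem)); /* e.g. pin the binding */
	else
		get_page(netmem_to_page(netmem));
}

static inline void put_netmem(netmem_ref netmem)
{
	if (netmem_is_net_iov(netmem))
		net_devmem_put_net_iov(netmem_to_net_iov(netmem));
	else
		put_page(netmem_to_page(netmem));
}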
On 02/20, Mina Almasry wrote:
> Drivers need to make sure not to pass netmem dma-addrs to the
> dma-mapping API in order to support netmem TX.
>
> Add netmem_dma_*() helpers that enable special handling of
> netmem dma-addrs that drivers can use.
>
> Document in netmem.rst what drive
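[Editor's note: a sketch of the kind of netmem_dma_*() wrapper described above, assuming the idea is "skip the dma-mapping call when the address belongs to device memory the driver never mapped itself"; the predicate name below is hypothetical.]

static inline void netmem_dma_unmap_page_attrs(struct device *dev,
					       dma_addr_t dma, size_t size,
					       enum dma_data_direction dir,
					       unsigned long attrs)
{
	/* Devmem TX addrs come pre-mapped from the dmabuf binding, so the
	 * driver must not hand them back to the dma-mapping API.
	 */
	if (netmem_dma_is_unmappable(dma))	/* hypothetical predicate */
		return;

	dma_unmap_page_attrs(dev, dma, size, dir, attrs);
}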
break;
>
For reference, this is the call trace I see when I hit the warning:
[ 283.567945] ------------[ cut here ]------------
[ 283.567947] WARNING: CPU: 12 PID: 878 at mm/memremap.c:436 free_zone_device_folio+0x6e/0x140
[ 283.567959] Modules linked in:
[ 283.567963] CPU: 12 UID:
On Wed, Feb 19, 2025 at 09:17:18PM -0800, Nicolin Chen wrote:
> On Tue, Feb 18, 2025 at 11:31:54AM -0400, Jason Gunthorpe wrote:
> > On Fri, Jan 24, 2025 at 04:30:35PM -0800, Nicolin Chen wrote:
> > > This is a reverse search vs. iommufd_viommu_find_dev, as drivers may want
> > > to convert a stru
On Tue, Feb 18, 2025 at 10:53:55AM -0800, Nicolin Chen wrote:
> > > Is MEV available only in nested mode? Otherwise it perhaps makes
> > > sense to turn it on in all configurations in IOMMUFD paths...
> >
> > I think the arm-smmu-v3's iommufd implementation only supports nested
> > which could be
def _get_qemu_ops(config_path: str,
---
base-commit: 2014c95afecee3e76ca4a56956a936e23283f05b
change-id: 20250220-kunit-list-552a8cdc011e
Best regards,
--
Thomas Weißschuh
> As a general rule, we have tried to keep the data structure definitions
> accurately mirroring the hardware table design, for easier understanding
> and debuggability of the code.
Could you point me at the datasheet which describes the table?
Andrew
On Thu, Feb 20, 2025 at 10:38:03PM +0800, Jie Luo wrote:
>
>
> On 2/11/2025 9:22 PM, Andrew Lunn wrote:
> > > + /* Configure BM flow control related threshold. */
> > > + PPE_BM_PORT_FC_SET_WEIGHT(bm_fc_val, port_cfg.weight);
> > > + PPE_BM_PORT_FC_SET_RESUME_OFFSET(bm_fc_val, port_cfg.resume_off
On 2/11/2025 9:32 PM, Andrew Lunn wrote:
+/* Scheduler configuration for the assigning and releasing buffers for the
+ * packet passing through PPE, which is different per SoC.
+ */
+static const struct ppe_scheduler_bm_config ipq9574_ppe_sch_bm_config[] = {
+ {1, 0, 0, 0, 0},
+ {1
On 2/11/2025 9:22 PM, Andrew Lunn wrote:
+ /* Configure BM flow control related threshold. */
+ PPE_BM_PORT_FC_SET_WEIGHT(bm_fc_val, port_cfg.weight);
+ PPE_BM_PORT_FC_SET_RESUME_OFFSET(bm_fc_val, port_cfg.resume_offset);
+ PPE_BM_PORT_FC_SET_RESUME_THRESHOLD(bm_fc_val,
On 2/20/25 01:46, Mina Almasry wrote:
On Wed, Feb 19, 2025 at 2:40 PM Pavel Begunkov wrote:
On 2/17/25 23:26, Mina Almasry wrote:
On Thu, Feb 13, 2025 at 5:17 AM Pavel Begunkov wrote:
...
It's asserting that sizeof(ubuf_info_msgzc) <= sizeof(skb->cb), and
I'm guessing increasing skb->cb si
-static inline unsigned long dax_folio_share_put(struct folio *folio)
+static inline unsigned long dax_folio_put(struct folio *folio)
{
- return --folio->page.share;
+ unsigned long ref;
+ int order, i;
+
+ if (!dax_folio_is_shared(folio))
+ ref = 0;
+
On Fri, Jan 24, 2025 at 04:30:43PM -0800, Nicolin Chen wrote:
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
> b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
> index ceeed907a714..20a0e39d7caa 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
> +++ b/dri
On Thu, Feb 20, 2025 at 08:58:38AM +0100, Paolo Abeni wrote:
> Hi,
>
> On 2/15/25 7:04 AM, Akihiko Odaki wrote:
> > tun simply advances iov_iter when it needs to pad virtio header,
> > which leaves the garbage in the buffer as is. This will become
> > especially problematic when tun starts to allo
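[Editor's note: a minimal sketch of the fix under discussion: zero the virtio-net header padding with iov_iter_zero() instead of merely advancing the iterator past it. The helper below is illustrative only; the actual patch reworks the header handling more broadly.]

static int tun_put_vnet_hdr(struct iov_iter *iter, size_t vnet_hdr_sz,
			    const struct virtio_net_hdr *hdr)
{
	size_t pad = vnet_hdr_sz - sizeof(*hdr);

	if (copy_to_iter(hdr, sizeof(*hdr), iter) != sizeof(*hdr))
		return -EFAULT;

	/* Previously: iov_iter_advance(iter, pad); which leaves whatever
	 * happened to be in the user buffer in place. Zero it instead.
	 */
	if (iov_iter_zero(pad, iter) != pad)
		return -EFAULT;

	return 0;
}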
On Thu, Feb 20, 2025 at 02:09:10AM +, Mina Almasry wrote:
> +The user application must use MSG_ZEROCOPY flag when sending devmem TCP. Devmem
> +cannot be copied by the kernel, so the semantics of the devmem TX are similar
> +to the semantics of MSG_ZEROCOPY.
> +
> + setsockopt(socket_fd,