The most important results should be latency and bandwidth. Please let
us know the test results.
Thanks a lot.
Zhu Yanjun
ackets = 0;
+ port->net_stats.rx_bytes = 0;
+ port->net_stats.tx_bytes = 0;
+ port->net_stats.tx_errors = 0;
+ atomic64_set(&port->net_stats.rx_dropped, 0);
+ atomic64_set(&port->net_stats.tx_dropped, 0);
per-cpu variable is b
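A minimal sketch of the per-cpu alternative hinted at above, assuming illustrative names (a real driver would likely add u64_stats_sync handling for 32-bit tear safety):

#include <linux/types.h>
#include <linux/percpu.h>

struct example_pcpu_stats {
	u64 rx_dropped;
	u64 tx_dropped;
};

/* hot path: per-CPU increment, no atomics, no cache-line bouncing */
static void example_count_rx_drop(struct example_pcpu_stats __percpu *stats)
{
	this_cpu_inc(stats->rx_dropped);
}

/* read side: fold every CPU's counter together */
static u64 example_sum_rx_dropped(struct example_pcpu_stats __percpu *stats)
{
	u64 sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu_ptr(stats, cpu)->rx_dropped;
	return sum;
}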
+ ndev->features |= NETIF_F_LOOPBACK;
+ else
+ ndev->features &= ~NETIF_F_LOOPBACK;
+}
+
+/* This function should be called with ctrl_lock held */
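A self-contained sketch of the set/clear pattern this hunk shows; the function name is illustrative, not from the patch:

#include <linux/netdevice.h>

static void example_set_loopback(struct net_device *ndev, bool enable)
{
	if (enable)
		ndev->features |= NETIF_F_LOOPBACK;
	else
		ndev->features &= ~NETIF_F_LOOPBACK;
}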
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/dev-tools/sparse.rst?h=v6.10-rc3#n64
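The sparse documentation linked above covers, among other things, context annotations. A hedged sketch of how the "with ctrl_lock held" comment could become machine-checkable via __must_hold(); all names below are illustrative:

#include <linux/mutex.h>
#include <linux/netdevice.h>

struct example_dev {
	struct mutex ctrl_lock;
	struct net_device *ndev;
};

/* sparse (make C=1) warns if a caller does not hold ctrl_lock */
static void example_update_features(struct example_dev *dev)
	__must_hold(&dev->ctrl_lock)
{
	/* ... feature update that relies on ctrl_lock ... */
}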
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES.
Not sure whether this Copyright (c) year should be 2022 or 2024.
Zhu Yanjun
+ */
+#include <linux/dma-buf.h>
+#include <linux/pci-p2pdma.h>
+#include <linux/dma-resv.h>
+
+#include "vfio_pci_priv.h"
+
+MODULE_IMPORT_NS(DMA_BUF);
+
+struct vfio_pci_dma_buf {
+
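The struct body is cut off in the archive snippet. As background, a hedged sketch of the core export API this file builds on; the names "my_ops" and "priv" are illustrative, not from the patch:

#include <linux/dma-buf.h>
#include <linux/err.h>

static struct dma_buf *example_export(const struct dma_buf_ops *my_ops,
				      void *priv, size_t size, int flags)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

	exp_info.ops = my_ops;
	exp_info.size = size;
	exp_info.flags = flags;	/* e.g. O_CLOEXEC | O_RDWR from userspace */
	exp_info.priv = priv;

	/* returns ERR_PTR() on failure */
	return dma_buf_export(&exp_info);
}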
On 2023/7/31 5:42, Matthew Wilcox wrote:
On Sun, Jul 30, 2023 at 09:57:06PM +0800, Zhu Yanjun wrote:
On 2023/7/30 19:18, Matthew Wilcox wrote:
On Sun, Jul 30, 2023 at 07:01:26PM +0800, Zhu Yanjun wrote:
Does the following function have a folio version?
"
int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
		struct page **pages, unsigned int n_pages, unsigned int offset,
		unsigned long size, unsigned int max_segment,
		unsigned int left_pages, gfp_t gfp_mask)
"
Thanks a lot.
Zhu Yanjun
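There was no folio variant of this helper at the time. A hedged sketch of what a wrapper could look like, flattening folios into a page array and reusing the existing function; every name here is illustrative:

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int example_append_table_from_folios(struct sg_append_table *append,
					    struct folio **folios,
					    unsigned int n_folios,
					    unsigned int max_segment,
					    gfp_t gfp_mask)
{
	unsigned int n_pages = 0, i, j, k = 0;
	unsigned long size = 0;
	struct page **pages;
	int ret;

	/* count the struct pages backing all folios */
	for (i = 0; i < n_folios; i++)
		n_pages += folio_nr_pages(folios[i]);

	pages = kmalloc_array(n_pages, sizeof(*pages), gfp_mask);
	if (!pages)
		return -ENOMEM;

	/* flatten the folios into a plain page array */
	for (i = 0; i < n_folios; i++) {
		size += folio_size(folios[i]);
		for (j = 0; j < folio_nr_pages(folios[i]); j++)
			pages[k++] = folio_page(folios[i], j);
	}

	ret = sg_alloc_append_table_from_pages(append, pages, n_pages,
					       0 /* offset */, size,
					       max_segment, 0 /* left_pages */,
					       gfp_mask);
	kfree(pages);	/* the sg table holds page pointers, not this array */
	return ret;
}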
static inline struct page *sg_page(struct scatterlist *sg)
{
#ifdef CONFIG_DEBUG_SG
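For context, the complete helper in recent kernels reads roughly as follows (quoted from memory of include/linux/scatterlist.h; check your tree):

static inline struct page *sg_page(struct scatterlist *sg)
{
#ifdef CONFIG_DEBUG_SG
	BUG_ON(sg_is_chain(sg));
#endif
	return (struct page *)((sg)->page_link & ~SG_PAGE_LINK_MASK);
}

The low bits of page_link encode the chain/end markers, which is why sg_page() masks them off before returning the struct page pointer.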
See also the other discussion with Christoph:
https://lore.kernel.org/kvm/4-v2-472615b3877e+28f7-vfio_dma_buf_...@nvidia.com/
I read through the above patches. I am interested in the dma-buf.
Zhu Yanjun
Which results in, more or less, we have no way to do P2P DMA
operations without struct page - and fro
After the dma-buf series is accepted, we will test this dma-buf support on rxe.
Zhu Yanjun
>
> To enable the use of dma-buf memory in the rxe rdma device, add some changes
> and implementation in this patch series.
>
> This series consists of two patches. The first patch changes the IB core
> to
On Tue, Oct 5, 2021 at 6:20 PM Shunsuke Mie wrote:
>
> ping
Sorry. I will check it soon.
Zhu Yanjun
>
> On Fri, Oct 1, 2021 at 12:56, Shunsuke Mie wrote:
> >
> > On Thu, Sep 30, 2021 at 23:41, Daniel Vetter wrote:
> > >
> > > On Wed, Sep 29, 2021 at 01:19:05PM +0900, Shunsuke Mie wrote:
On Thu, Sep 30, 2021 at 7:06 PM Shunsuke Mie wrote:
>
> On Thu, Sep 30, 2021 at 16:23, Zhu Yanjun wrote:
> >
> > On Thu, Sep 30, 2021 at 2:58 PM Shunsuke Mie wrote:
> > >
> > > On Thu, Sep 30, 2021 at 15:37, Zhu Yanjun wrote:
> > > >
> > > > On Thu, Sep 30, 2021 at 2:20 PM Shunsuke Mie wrote:
> + int ret;
> +
> + vmr = malloc(sizeof(*vmr));
> + if (!vmr)
> + return NULL;
> +
Do we need to set vmr to zero like the following?
memset(vmr, 0, sizeof(*vmr));
Zhu Yanjun
> + ret = ibv_cmd_reg_dmabuf_mr(pd, offset, length, iova, fd, ac
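A hedged sketch of the zero-initialization Zhu is asking about: using calloc() combines the allocation and the memset() in one step. "struct verbs_mr" is rdma-core's type; the function name and error handling are illustrative:

#include <stdlib.h>
#include <infiniband/verbs.h>

static struct verbs_mr *example_alloc_vmr(void)
{
	struct verbs_mr *vmr;

	vmr = calloc(1, sizeof(*vmr));	/* returns zeroed memory */
	if (!vmr)
		return NULL;

	return vmr;
}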
On Thu, Sep 30, 2021 at 2:58 PM Shunsuke Mie wrote:
>
> On Thu, Sep 30, 2021 at 15:37, Zhu Yanjun wrote:
> >
> > On Thu, Sep 30, 2021 at 2:20 PM Shunsuke Mie wrote:
> > >
> > > Implement a new provider method for dma-buf based memory registration.
>
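For reference, a hedged sketch of the userspace entry point this provider method backs, libibverbs' ibv_reg_dmabuf_mr(); the wrapper name and access flags are illustrative:

#include <stdint.h>
#include <infiniband/verbs.h>

static struct ibv_mr *example_reg_dmabuf(struct ibv_pd *pd, int dmabuf_fd,
					 size_t length, uint64_t iova)
{
	/* offset 0: register from the start of the dma-buf */
	return ibv_reg_dmabuf_mr(pd, 0, length, iova, dmabuf_fd,
				 IBV_ACCESS_LOCAL_WRITE);
}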