the assertion. I caught a glimpse of the console and saw that the RIP
was at cache_alloc_refill.
I'd be happy to provide further information or perform testing to help
solve this issue.
Regards,
Haggai Eran
2012-Aug-20 12:34:44 - ------------[ cut here ]------------
2012-Aug-20 12:34:44 - kernel BUG at mm/slab.c:2629!
On 29/08/2012 05:57, David Rientjes wrote:
> On Tue, 28 Aug 2012, Haggai Eran wrote:
>
>> Hi,
>>
>> I believe I have encountered a bug in kernel 3.6-rc3. It starts with the
>> assertion in mm/slab.c:2629 failing, and then the system hangs. I can
>> reproduce this
On 05/11/2012 23:36, David Rientjes wrote:
> do_wp_page() sets mmun_called if mmun_start and mmun_end were initialized
> and, if so, may call mmu_notifier_invalidate_range_end() with these
> values. This doesn't prevent gcc from emitting a build warning though:
>
> mm/memory.c: In function ‘do_wp_page’:
Add a flag variable to do_wp_page() that marks whether the
invalidate_range_start notifier was called. invalidate_range_end() is then
called only if the flag is true.
Reported-by: Jiri Slaby
Cc: Avi Kivity
Cc: Andrew Morton
Signed-off-by: Haggai Eran
---
I tested this patch against yesterday's linux
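A minimal sketch of the flag pattern described above, against the 3.7-era
three-argument mmu notifier calls; the surrounding do_wp_page() logic is
elided and the exact variable names are illustrative:

	bool mmun_called = false;	/* was invalidate_range_start called? */
	unsigned long mmun_start = 0, mmun_end = 0;

	if (new_page) {			/* only the page-copy paths invalidate */
		mmun_start = address & PAGE_MASK;
		mmun_end = mmun_start + PAGE_SIZE;
		mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
		mmun_called = true;
	}

	/* ... copy the page and update the pte; some paths return early ... */

	if (mmun_called)
		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);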
On 07/09/2015 23:38, Parav Pandit wrote:
> @@ -2676,7 +2686,7 @@ static inline int thread_group_empty(struct task_struct *p)
> * Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring
> * subscriptions and synchronises with wait4(). Also used in procfs. Also
> * pins the final release
On 07/09/2015 23:38, Parav Pandit wrote:
> diff --git a/include/linux/device_cgroup.h b/include/linux/device_cgroup.h
> index 8b64221..cdbdd60 100644
> --- a/include/linux/device_cgroup.h
> +++ b/include/linux/device_cgroup.h
> @@ -1,6 +1,57 @@
> +#ifndef _DEVICE_CGROUP
> +#define _DEVICE_CGROUP
>
On 07/09/2015 23:38, Parav Pandit wrote:
> +/* RDMA resources from device cgroup perspective */
> +enum devcgroup_rdma_rt {
> +	DEVCG_RDMA_RES_TYPE_UCTX,
> +	DEVCG_RDMA_RES_TYPE_CQ,
> +	DEVCG_RDMA_RES_TYPE_PD,
> +	DEVCG_RDMA_RES_TYPE_AH,
> +	DEVCG_RDMA_RES_TYPE_MR,
> +	DEVCG
On 08/09/2015 10:04, Parav Pandit wrote:
> On Tue, Sep 8, 2015 at 11:18 AM, Haggai Eran wrote:
>> On 07/09/2015 23:38, Parav Pandit wrote:
>>> @@ -2676,7 +2686,7 @@ static inline int thread_group_empty(struct task_struct *p)
>>> * Protects ->fs, ->
On 07/09/2015 23:38, Parav Pandit wrote:
> +void devcgroup_rdma_uncharge_resource(struct ib_ucontext *ucontext,
> +				      enum devcgroup_rdma_rt type, int num)
> +{
> +	struct dev_cgroup *dev_cg, *p;
> +	struct task_struct *ctx_task;
> +
> +	if (!num)
> +
On 07/09/2015 23:38, Parav Pandit wrote:
> +static void init_ucontext_lists(struct ib_ucontext *ucontext)
> +{
> +	INIT_LIST_HEAD(&ucontext->pd_list);
> +	INIT_LIST_HEAD(&ucontext->mr_list);
> +	INIT_LIST_HEAD(&ucontext->mw_list);
> +	INIT_LIST_HEAD(&ucontext->cq_list);
> +	INIT
On 07/09/2015 23:38, Parav Pandit wrote:
> Currently user space applications can easily take away all the rdma
> device specific resources such as AH, CQ, QP, MR etc. Due to which other
> applications in other cgroup or kernel space ULPs may not even get chance
> to allocate any rdma resources.
>
On 08/09/2015 13:22, Parav Pandit wrote:
> On Tue, Sep 8, 2015 at 2:10 PM, Haggai Eran wrote:
>> On 07/09/2015 23:38, Parav Pandit wrote:
>>> +static void init_ucontext_lists(struct ib_ucontext *ucontext)
>>> +{
>>> +	INIT_LIST_HEAD(&ucontext->pd_list);
On 08/09/2015 13:18, Parav Pandit wrote:
>>
>>>>> + * RDMA resource limits are hierarchical, so the highest configured limit of
>>>>> + * the hierarchy is enforced. Allowing resource limit configuration to default
>>>>> + * cgroup allows fair share to kernel space ULPs as well.
On 08/09/2015 13:50, Parav Pandit wrote:
> On Tue, Sep 8, 2015 at 2:06 PM, Haggai Eran wrote:
>> On 07/09/2015 23:38, Parav Pandit wrote:
>>> +void devcgroup_rdma_uncharge_resource(struct ib_ucontext *ucontext,
>>> +				      enum devcgroup_rdma_rt type, int num)
On 22/12/2014 18:48, j.gli...@gmail.com wrote:
> static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> -							unsigned long start,
> -							unsigned long end,
> -
Hi,
On 22/12/2014 18:48, j.gli...@gmail.com wrote:
> +/* hmm_device_register() - register a device with HMM.
> + *
> + * @device: The hmm_device struct.
> + * Returns: 0 on success or -EINVAL otherwise.
> + *
> + *
> + * Call when a device driver wants to register itself with HMM. Device driver
> ca
On 06/01/2015 00:44, j.gli...@gmail.com wrote:
> + /* fence_wait() - to wait on device driver fence.
> + *
> + * @fence: The device driver fence struct.
> + * Returns: 0 on success, -EIO on error, -EAGAIN to wait again.
> + *
> + * Called when HMM wants to wait for all op
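A hedged sketch of how a driver could satisfy this contract; struct my_fence,
its wait queue, and my_fence_done() are illustrative assumptions, with
hmm_fence standing in for whatever fence type the API passes:

	static int my_fence_wait(struct hmm_fence *fence)
	{
		struct my_fence *f = container_of(fence, struct my_fence, base);

		/* bounded wait; wait_event_timeout() returns 0 on timeout */
		if (!wait_event_timeout(f->wq, my_fence_done(f), HZ))
			return -EAGAIN;	/* ask HMM to wait again */

		return f->error ? -EIO : 0;
	}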
On 10/01/2015 08:48, Jerome Glisse wrote:
> On Thu, Jan 08, 2015 at 01:05:41PM +0200, Haggai Eran wrote:
>> On 06/01/2015 00:44, j.gli...@gmail.com wrote:
>>> + /* fence_wait() - to wait on device driver fence.
>>> + *
>>> + * @fence: The device driver
On 02/04/2015 16:30, Yann Droneaud wrote:
> Hi,
>
> On Thursday, 2 April 2015 at 10:52 +0000, Shachar Raindel wrote:
>>> -----Original Message-----
>>> From: Yann Droneaud [mailto:ydrone...@opteya.com]
>>> Sent: Thursday, April 02, 2015 1:05 PM
>>> On Wednesday, 18 March 2015 at 17:39 +0000, Shachar Raindel wrote:
On Thursday, April 2, 2015 7:44 PM, Shachar Raindel wrote:
>> -----Original Message-----
>> From: Yann Droneaud [mailto:ydrone...@opteya.com]
>> Sent: Thursday, April 02, 2015 7:35 PM
>> To: Haggai Eran
>> Cc: Shachar Raindel; Sagi Grimberg; oss-secur...@li
On Thursday, April 2, 2015 11:40 PM, Yann Droneaud wrote:
> On Thursday, 2 April 2015 at 16:44 +0000, Shachar Raindel wrote:
>>> -----Original Message-----
>>> From: Yann Droneaud [mailto:ydrone...@opteya.com]
>>> Sent: Thursday, April 02, 2015 7:35 PM
>
>> > Another related question: as the la
On 13/04/2015 16:29, Yann Droneaud wrote:
> On Thursday, 2 April 2015 at 18:12 +0000, Haggai Eran wrote:
...
>>
>> I want to add that we would like to see users registering a very large
>> memory region (perhaps the entire process address space) for local
>> access, and
On 21/05/2015 22:31, j.gli...@gmail.com wrote:
> From: Jerome Glisse
>
> When migrating anonymous memory from system memory to device memory,
> CPU ptes are replaced with special HMM swap entries so that page faults,
> get_user_pages (gup), fork, ... are properly redirected to HMM helpers.
>
> This pa
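A conceptual sketch of that redirection in a fault path; is_hmm_entry() and
hmm_handle_fault() are illustrative stand-ins, not the patchset's actual
helpers:

	/* Conceptual only: recognize a migrated-to-device pte, hand off to HMM. */
	static int handle_hmm_entry(struct vm_area_struct *vma, unsigned long addr,
				    pte_t pte)
	{
		swp_entry_t entry = pte_to_swp_entry(pte);	/* decode special pte */

		if (is_hmm_entry(entry))	/* illustrative predicate */
			return hmm_handle_fault(vma, addr, entry);

		return VM_FAULT_SIGBUS;		/* not an HMM entry */
	}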
On 21/05/2015 23:23, jgli...@redhat.com wrote:
> +int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem)
> +{
> +	struct mm_struct *mm = get_task_mm(current);
> +	struct ib_device *ib_device = context->device;
> +	struct ib_mirror *ib_mirror;
> +	struct pid *our_pid;
On 21/05/2015 22:31, j.gli...@gmail.com wrote:
> From the design point of view, not much has changed since the last patchset (2).
> Most of the changes are in small details of the API exposed to device
> drivers. This version also includes device driver changes for Mellanox
> hardware to use HMM as an alternative to
On 1/29/2019 6:58 PM, jgli...@redhat.com wrote:
> Convert ODP to use HMM so that we can build on common infrastructure
> for different class of devices that want to mirror a process address
> space into a device. There are no functional changes.
Thanks for sending this patch. I think in general
read.gmane.org/gmane.linux.kernel.mm/111710/focus=111711
>>
>> Reported-by: Izik Eidus
>> Signed-off-by: Mike Rapoport
>> Cc: Andrea Arcangeli
>> Cc: Haggai Eran
>> Cc: Peter Zijlstra
>> ---
>> include/linux/mmu_notifier.h | 31 ++
On 03/30/2014 11:33 PM, Jerome Glisse wrote:
> On Wed, Jan 22, 2014 at 04:01:15PM +0200, Haggai Eran wrote:
>> I'm worried about the following scenario:
>> Given a read-only page, suppose one host thread (thread 1) writes to
>> that page, and performs COW, but before it call
On 28/07/2014 18:39, Jerome Glisse wrote:
> On Mon, Jul 28, 2014 at 03:27:14PM +0300, Haggai Eran wrote:
>> On 14/06/2014 03:48, Jérôme Glisse wrote:
>>> From: Jérôme Glisse
>>
>>>
>>> Motivation:
>>>
>>> ...
>>>
>>> T
On 29/08/2014 22:10, j.gli...@gmail.com wrote:
> + * - MMU_MUNMAP: the range is being unmapped (outcome of a munmap syscall or
> + * process destruction). However, access is still allowed, up until the
> + * invalidate_range_free_pages callback. This also implies that secondary
> + * page table can
On 14/06/2014 03:48, Jérôme Glisse wrote:
> From: Jérôme Glisse
>
> Motivation:
>
> ...
>
> The aim of the heterogeneous memory management is to provide a common API that
> can be used by any such device in order to mirror a process address space. The
> hmm code provides a unique entry point and int
On 26/08/2014 00:07, Shawn Bohrer wrote:
>>>> The following patch fixes the issue by storing the mm_struct of the
>>>
>>> You are doing more than just storing the mm_struct - you are taking
>>> a reference to the process' mm. This can lead to a massive resource
>>> leakage. The reason is a bit
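For context, the distinction being raised, sketched with today's helper names
(the thread predates mmgrab/mmdrop, which were then open-coded mm_count
operations):

	static void mm_ref_sketch(void)
	{
		struct mm_struct *mm;

		mm = get_task_mm(current); /* takes mm_users: pins the address space */
		if (mm) {
			/* short-lived use of the address space only */
			mmput(mm);	/* release promptly to avoid pinning pages */
		}

		mmgrab(current->mm);	/* takes mm_count: pins only the mm_struct */
		/* safe to stash long-term, e.g. for notifier bookkeeping */
		mmdrop(current->mm);
	}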
On 04/09/2014 00:15, Roland Dreier wrote:
> Have you done any review or testing of these changes? If so can you
> share the results?
We have tested this feature thoroughly inside Mellanox. We ran random
tests that performed MR registrations, memory mappings and unmappings,
calls to madvise with M
On 09/09/2014 17:21, Haggai Eran wrote:
> On 04/09/2014 00:15, Roland Dreier wrote:
>> Have you done any review or testing of these changes? If so can you
>> share the results?
>
> We have tested this feature thoroughly inside Mellanox. We ran random
> tests that pe
On Friday, December 4, 2015 8:02 PM, Nicholas Krause wrote:
> To: dledf...@redhat.com
> Cc: sean.he...@intel.com; hal.rosenst...@gmail.com; Haggai Eran;
> jguntho...@obsidianresearch.com; Matan Barak; yun.w...@profitbricks.com;
> ted.h@oracle.com; Doron Tsur; Erez Shitr
On 30/11/2015 00:02, Julia Lawall wrote:
> This mmu_notifier_ops structure is never modified, so declare it as
> const, like the other mmu_notifier_ops structures.
>
> Done with the help of Coccinelle.
>
> Signed-off-by: Julia Lawall
Reviewed-by: Haggai Eran
Thanks,
Haggai
On 28/10/2015 10:29, Parav Pandit wrote:
> 3. Resources are not defined by the RDMA cgroup. Resources are defined
> by RDMA/IB subsystem and optionally by HCA vendor device drivers.
> Rationale: This allows rdma cgroup to remain constant while RDMA/IB
> subsystem can evolve without the need of rdma
On 17/07/2015 22:01, Jérôme Glisse wrote:
> The mlx5 driver will need this function for its driver specific bit
> of ODP (on demand paging) on HMM (Heterogeneous Memory Management).
>
> Signed-off-by: Jérôme Glisse
> ---
> drivers/infiniband/core/umem_rbtree.c | 1 +
> 1 file changed, 1 insertio
#define ODP_DMA_ADDR_MASK (~(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT))
>
> +
Please avoid adding double blank lines. You can find more of these by
running checkpatch on the patch.
Other than that:
Reviewed-by: Haggai Eran
On 17/07/2015 22:01, Jérôme Glisse wrote:
> diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
> index ac87ac6..c5e7461 100644
> --- a/drivers/infiniband/core/umem_odp.c
> +++ b/drivers/infiniband/core/umem_odp.c
> @@ -134,7 +134,7 @@ int ib_umem_odp_get(struct
On 17/07/2015 22:01, Jérôme Glisse wrote:
> @@ -151,10 +151,11 @@ int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem)
> 		context->ib_mirror = ib_mirror_ref(ib_mirror);
> 	}
> 	mutex_unlock(&ib_device->hmm_mutex);
> -	umem->odp_data.ib_mirror = ib_mir
On 01/02/2016 20:59, Parav Pandit wrote:
> On Tue, Feb 2, 2016 at 12:10 AM, Tejun Heo wrote:
>> So, I'm really not gonna go for individual drivers defining resources
>> on their own. That's a trainwreck waiting to happen. There needs to
>> be a lot more scrutiny than that.
>>
> Not every low lev
On 24/02/2016 17:21, Parav Pandit wrote:
> On Wed, Feb 24, 2016 at 7:56 PM, Haggai Eran wrote:
>> On 20/02/2016 13:00, Parav Pandit wrote:
>>> Added documentation for v1 and v2 versions describing the high-level
>>> design and usage examples for using the rdma controller.
On 20/02/2016 13:00, Parav Pandit wrote:
> +/**
> + * ib_device_unregister_rdmacg - unregister with rdma cgroup.
> + * @device: device to unregister.
> + *
> + * Unregister with the rdma cgroup. Should be called after
> + * all the resources are deallocated, and after a stage when any
> + * other r
On 20/02/2016 13:00, Parav Pandit wrote:
> Added documentation for v1 and v2 versions describing the high-level
> design and usage examples for using the rdma controller.
>
> Signed-off-by: Parav Pandit
I think you might want to mention that resource limits are reflected
in the results returned from ib_uv
Hi,
Overall, the patch looks good to me. I have a few comments below.
On 20/02/2016 13:00, Parav Pandit wrote:
> Resource pool is created/destroyed dynamically whenever
> charging/uncharging occurs respectively and whenever user
> configuration is done. It's a tradeoff of memory vs a little more co
On 24/02/2016 18:16, Parav Pandit wrote:
>>> +	struct rdmacg_resource_pool *rpool;
>>> +	struct rdmacg_pool_info *pool_info = &device->pool_info;
>>> +
>>> +	spin_lock(&cg->rpool_list_lock);
>>> +	rpool = find_cg_rpool_locked(cg, device);
>> Is it possible for rpool to be NULL?
>>
>
On 25/02/2016 15:34, Parav Pandit wrote:
> On Thu, Feb 25, 2016 at 5:33 PM, Haggai Eran wrote:
>>>>> +retry:
>>>>> +	spin_lock(&cg->rpool_list_lock);
>>>>> +	rpool = find_cg_rpool_locked(cg, device);
>>>>> +
On 25/02/2016 16:26, Parav Pandit wrote:
>> Can we call kfree() with a spin_lock held? All these years I have tended to
>> avoid doing so.
> Also it doesn't look correct to hold the lock while freeing the memory,
> which is totally unrelated to the lock.
> With that I think the current code appears OK with excep
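For reference, kfree() itself does not sleep, so calling it under a spinlock
is legal; the usual style is still to unlink under the lock and free after
dropping it. A sketch reusing the patch's names (the cg_node linkage field is
an assumption):

	spin_lock(&cg->rpool_list_lock);
	rpool = find_cg_rpool_locked(cg, device);
	if (rpool && rpool->refcnt == 0)
		list_del(&rpool->cg_node);	/* unlink while protected */
	else
		rpool = NULL;			/* nothing to free */
	spin_unlock(&cg->rpool_list_lock);

	kfree(rpool);				/* kfree(NULL) is a no-op */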
On 29/10/2015 20:46, Parav Pandit wrote:
> On Thu, Oct 29, 2015 at 8:27 PM, Haggai Eran wrote:
>> On 28/10/2015 10:29, Parav Pandit wrote:
>>> 3. Resources are not defined by the RDMA cgroup. Resources are defined
>>> by RDMA/IB subsystem and optionally by HCA vendor de
On 03/11/2015 21:11, Parav Pandit wrote:
> So it looks like below,
> #cat rdma.resources.verbs.list
> Output:
> mlx4_0 uctx ah pd cq mr mw srq qp flow
> mlx4_1 uctx ah pd cq mr mw srq qp flow rss_wq
What happens if you set a limit of rss_wq to mlx4_0 in this example?
Would it fail? I think it would
On 2/12/2019 6:11 PM, Jerome Glisse wrote:
> On Wed, Feb 06, 2019 at 08:44:26AM +0000, Haggai Eran wrote:
>> On 1/29/2019 6:58 PM, jgli...@redhat.com wrote:
>> > Convert ODP to use HMM so that we can build on common infrastructure
>> > for different class of de
On 11/18/2016 8:18 PM, Jérôme Glisse wrote:
> Cliff notes: HMM offers 2 things (each standing on its own). First,
> it allows using device memory transparently inside any process
> without any modifications to the process program code. Second, it allows
> mirroring a process address space on a device.
>
>
On 11/25/2016 9:32 PM, Jason Gunthorpe wrote:
> On Fri, Nov 25, 2016 at 02:22:17PM +0100, Christian König wrote:
>
>>> Like you say below we have to handle short lived in the usual way, and
>>> that covers basically every device except IB MRs, including the
>>> command queue on a NVMe drive.
>>
>>
On 11/25/2016 6:16 PM, Jerome Glisse wrote:
> Yes, this is something I have worked on with NVidia. The idea is that when
> you see the hmm_pfn_t with the device flag set, you can then retrieve the
> struct device from it. The issue is now to figure out how, from that, you can
> know that this is a device with whi
On Mon, 2016-11-28 at 09:48 -0500, Serguei Sagalovitch wrote:
> On 2016-11-27 09:02 AM, Haggai Eran wrote:
>>
>> On PeerDirect, we have some kind of a middle-ground solution for pinning
>> GPU memory. We create a non-ODP MR pointing to VRAM but rely on
On Mon, 2016-11-28 at 09:57 -0700, Jason Gunthorpe wrote:
> On Sun, Nov 27, 2016 at 04:02:16PM +0200, Haggai Eran wrote:
> > I think blocking mmu notifiers against something that is basically
> > controlled by user-space can be problematic. This can block things
> > like
>
kfree(umem);
> @@ -149,6 +150,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
>
> 	page_list = (struct page **) __get_free_page(GFP_KERNEL);
> 	if (!page_list) {
> +		put_pid(umem->pid);
> 		kfree(umem);
On 10/19/2016 6:51 AM, Dan Williams wrote:
> On Tue, Oct 18, 2016 at 2:42 PM, Stephen Bates wrote:
>> 1. Address Translation. Suggestions have been made that in certain
>> architectures and topologies the dma_addr_t passed to the DMA master
>> in a peer-2-peer transfer will not correctly route to
> We could use send_sig_info to send a signal from the kernel to user space, so
> theoretically the GPU driver could issue a KILL signal to some process.
>
>> On Wed, Nov 30, 2016 at 12:45:58PM +0200, Haggai Eran wrote:
>>> I think we can achieve the kernel's needs with ZONE_DEVICE
On 11/30/2016 8:01 PM, Logan Gunthorpe wrote:
>
>
> On 30/11/16 09:23 AM, Jason Gunthorpe wrote:
>>> Two cases I can think of are RDMA access to an NVMe device's controller
>>> memory buffer,
>>
>> I'm not sure on the use model there..
>
> The NVMe fabrics stuff could probably make use of this.
On 11/30/2016 6:23 PM, Jason Gunthorpe wrote:
>> and O_DIRECT operations that access GPU memory.
> This goes through user space so there is still a VMA..
>
>> Also, HMM's migration between two GPUs could use peer to peer in the
>> kernel, although that is intended to be handled by the GPU driver i
On 11/28/2016 9:02 PM, Jason Gunthorpe wrote:
> On Mon, Nov 28, 2016 at 06:19:40PM +0000, Haggai Eran wrote:
>>>> GPU memory. We create a non-ODP MR pointing to VRAM but rely on
>>>> user-space and the GPU not to migrate it. If they do, the MR gets
>>>>
On 03/03/2016 04:49, Parav Pandit wrote:
> Hi Tejun, Haggai,
>
> On Thu, Mar 3, 2016 at 1:28 AM, Parav Pandit wrote:
>>>>> +	rpool->refcnt--;
>>>>> +	if (rpool->refcnt == 0 && rpool->num_max_cnt == pool_info->table_len) {
>>>
>>> If the caller charges 2 and then uncharges 1 two times,
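To make the concern concrete, a sketch under the assumption that the pool
should only be freed once no resource units remain charged; counting units
rather than uncharge calls keeps a charge of 2 followed by two uncharges of 1
consistent (all names illustrative):

	static void rpool_uncharge_sketch(struct rdmacg_resource_pool *rpool,
					  struct rdmacg_pool_info *pool_info,
					  int num)
	{
		rpool->usage -= num;	/* count resource units, not uncharge calls */

		/* free only when nothing is charged and no limit is configured */
		if (rpool->usage == 0 &&
		    rpool->num_max_cnt == pool_info->table_len)
			free_cg_rpool(rpool);	/* illustrative helper */
	}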
On 03/03/2016 05:18, Parav Pandit wrote:
> On Thu, Mar 3, 2016 at 1:28 AM, Parav Pandit wrote:
>> On Wed, Mar 2, 2016 at 11:09 PM, Tejun Heo wrote:
>>> Nothing seems to prevent @cg from going away if this races with
>>> @current being migrated to a different cgroup. Have you run this with
>>> lo
On 05/03/2016 19:20, Parav Pandit wrote:
>>> 3 is fine, but resource [un]charging is not a hot path?
> charge/uncharge is a hot path from the cgroup perspective.
Most of the resources the RDMA cgroup handles are only allocated at
the beginning of the application. The RDMA subsystem allows direct
user-space
> up creation and device registration time.
>
> Signed-off-by: Parav Pandit
Reviewed-by: Haggai Eran
On 28/02/2016 16:13, Parav Pandit wrote:
> +void ib_rdmacg_query_limit(struct ib_device *device, int *limits, int max_count)
> +{
> +	rdmacg_query_limit(&device->cg_device, limits);
> +}
> +EXPORT_SYMBOL(ib_rdmacg_query_limit);
You can remove the max_count parameter here as well.
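For clarity, the wrapper after dropping the parameter would be roughly (a
sketch, assuming the underlying rdmacg_query_limit() also loses its count
argument):

	void ib_rdmacg_query_limit(struct ib_device *device, int *limits)
	{
		rdmacg_query_limit(&device->cg_device, limits);
	}
	EXPORT_SYMBOL(ib_rdmacg_query_limit);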
On 28/02/2016 16:13, Parav Pandit wrote:
> diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
> index 00da80e..54ea8ce 100644
> --- a/drivers/infiniband/core/device.c
> +++ b/drivers/infiniband/core/device.c
> @@ -343,28 +343,38 @@ int ib_register_device(struct ib_d
On 01/03/2016 11:22, Parav Pandit wrote:
> On Tue, Mar 1, 2016 at 2:42 PM, Haggai Eran wrote:
>> On 28/02/2016 16:13, Parav Pandit wrote:
>>> diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
>>> index 00da80e..54ea8ce 100644
On 28/02/2016 16:13, Parav Pandit wrote:
> Added documentation for v1 and v2 versions describing the high-level
> design and usage examples for using the rdma controller.
>
> Signed-off-by: Parav Pandit
Reviewed-by: Haggai Eran
: Parav Pandit
Looks good to me. Thanks.
Reviewed-by: Haggai Eran
On 15/09/2015 06:45, Jason Gunthorpe wrote:
> No, I'm saying the resource pool is *well defined* and *fixed* by each
> hardware.
>
> The only question is how do we expose the N resource limits, the list
> of which is totally vendor specific.
I don't see why you say the limits are vendor specific.
oduce debug_dma_assert_idle()")
Cc: Dan Williams
Cc: Joerg Roedel
Cc: Vinod Koul
Cc: Russell King
Cc: James Bottomley
Cc: Andrew Morton
Cc: Florian Fainelli
Cc: Sebastian Ott
Cc: Jiri Kosina
Cc: Horia Geanta
Signed-off-by: Haggai Eran
---
lib/dma-debug.c | 3 +++
1 file changed, 3 insertions(+)
On 10/18/2016 1:05 AM, Arnd Bergmann wrote:
> @@ -1309,7 +1311,7 @@ static bool validate_net_dev(struct net_device *net_dev,
> static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,
> 					  const struct cma_req_info *req)
> {
> -	struct sockaddr
On 10/18/2016 1:18 PM, Arnd Bergmann wrote:
> On Tuesday, October 18, 2016 9:47:31 AM CEST Haggai Eran wrote:
>> On 10/18/2016 1:05 AM, Arnd Bergmann wrote:
>>> @@ -1309,7 +1311,7 @@ static bool validate_net_dev(struct net_device *net_dev,
>>> static struc