Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
driver corrupts DMA pool memory.
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-06 17:52:53.0 -0400
+++ linux/mm/dmapool.c 2018-08-06 17:53:31.0 -0400
@@ -454,17 +454,39 @@ void
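For illustration, a bounded version of the DMAPOOL_DEBUG freelist walk.
This is a sketch reconstructed from the description above, not the actual
patch; the helper name pool_block_already_free() and the use of a
blks_per_alloc field are assumptions here.

	static bool pool_block_already_free(struct dma_pool *pool,
					    struct dma_page *page,
					    unsigned int offset)
	{
		unsigned int chain = page->offset;	/* head of the in-page freelist */
		unsigned int free_blks = 0;

		while (chain < pool->allocation) {
			if (chain == offset)
				return true;		/* block is already free */
			/* Each free block stores the offset of the next free
			 * block, so a corrupted chain can cycle forever.
			 * Bound the walk by the most blocks that could
			 * legitimately be free in one allocation.
			 */
			if (++free_blks > pool->blks_per_alloc) {
				dev_err(pool->dev, "%s: freelist corrupted\n",
					pool->name);
				break;
			}
			chain = *(int *)(page->vaddr + chain);
		}
		return false;
	}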
Add a 'blks_per_alloc' value to the dma_pool structure that takes the
boundary into account, and use it to replace the inaccurate calculation.
Signed-off-by: Tony Battersby
---
This depends on patch #1 "dmapool: fix boundary comparison" for the
calculated blks_per_alloc value to be correct.
The added blks_per_alloc value will also be used in
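A sketch of the calculation being described, assuming blks_per_alloc is
computed once in dma_pool_create() (a reconstruction from the description,
not the actual diff):

	/*
	 * Blocks may not cross 'boundary', so each allocation holds
	 * (boundary / size) blocks per boundary-sized chunk plus whatever
	 * fits in the remainder.  E.g. allocation = 4096, size = 96,
	 * boundary = 1024: 4 chunks * 10 blocks = 40 blocks, not the
	 * 4096 / 96 = 42 that ignoring the boundary would suggest.
	 */
	pool->blks_per_alloc =
		(allocation / boundary) * (boundary / size) +
		(allocation % boundary) / size;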
I posted v3 on August 7. Nobody acked or merged the patches, and then
I got too busy with other stuff to repost until now.
The only change since v3:
*) Dropped patch #10 (the mpt3sas patch) since the mpt3sas maintainers
didn't show any interest.
I believe these patches are ready for merging.
--
The address ranges marked with "*" would not have been used even though
they didn't cross the given boundary.
Fixes: e34f44b3517f ("pool: Improve memory usage for devices which can't cross boundaries")
Signed-off-by: Tony Battersby
---
Even though I described this as a "
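The off-by-one lives in pool_initialise_page(), where a new page is carved
into a freelist. A sketch of the corrected loop, reconstructed from the
description (a block that ends exactly on the boundary does not cross it):

	unsigned int offset = 0;
	unsigned int next_boundary = pool->boundary;

	do {
		unsigned int next = offset + pool->size;
		/* The old test was '>=', which also rejected a block ending
		 * exactly on the boundary; only an actual crossing should
		 * skip ahead to the next boundary-aligned chunk.
		 */
		if (unlikely((next + pool->size) > next_boundary)) {
			next = next_boundary;
			next_boundary += pool->boundary;
		}
		*(int *)(page->vaddr + offset) = next;	/* link the free block */
		offset = next;
	} while (offset < pool->allocation);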
Remove a small amount of code duplication between dma_pool_destroy() and
pool_free_page() in preparation for adding more code without having to
duplicate it. No functional changes.
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-02 09:59:15.0 -0400
+++ linux/mm
The checks for dev == NULL are no longer necessary, so remove them.
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-03 16:12:23.0 -0400
+++ linux/mm/dmapool.c 2018-08-03 16:13:44.0 -0400
@@ -277,7 +277,7 @@ void dma_pool_destroy(struct dma_pool *p
mutex_lock(&pools_reg_lock);
mutex_lock(&
To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places. Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.
Signed-off-by: Tony Battersby
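A sketch of what the type split means in practice; this struct subset and
the blks_per_alloc field are illustrative assumptions, and the real patch
touches many call sites:

	struct dma_pool {
		/* ... */
		unsigned int size;		/* one block always fits in 32 bits */
		unsigned int allocation;	/* bytes per underlying DMA allocation */
		unsigned int boundary;
		unsigned int blks_per_alloc;
	};

	/* Pool-wide totals can exceed 32 bits, so widen before multiplying: */
	size_t blocks = (size_t)pages * pool->blks_per_alloc;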
Rename fields in 'struct dma_page' in preparation for moving them into
'struct page'. No functional changes.
in_use -> dma_in_use
offset -> dma_free_off
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-03 17:46:13.0 -0400
+++ linux/m
In big O notation, this improves the algorithm from O(n^2) to O(n).
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-03 16:16:49.0 -0400
+++ linux/mm/dmapool.c 2018-08-03 16:45:33.0 -0400
@@ -15,11 +15,16 @@
* Many older drivers still have their own code to do this
Store the DMA pool metadata in 'struct page', eliminating
'struct dma_page'. In big O notation, this improves the algorithm from
O(n^2) to O(n) while also reducing memory usage.
Thanks to Matthew Wilcox for the suggestion to use struct page.
Signed-off-by: Tony Battersby
---
--- linux/include/linux/mm_types.h.orig 2018-08-01 17:59:46.0
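The payoff on the free path, as a sketch. The dma_free_off and dma_in_use
fields exist only in the patched struct page (per the mm_types.h hunk
above), not mainline, and multi-page allocations are glossed over here:

	/* O(1): translate the block's address straight to the struct page
	 * that carries the pool bookkeeping -- no list walk at all.
	 */
	struct page *pg = virt_to_page(vaddr);
	unsigned int offset = vaddr - page_address(pg);

	*(int *)vaddr = pg->dma_free_off;	/* push block onto the freelist */
	pg->dma_free_off = offset;
	pg->dma_in_use--;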
On 11/12/18 11:32 AM, John Garry wrote:
> On 12/11/2018 15:42, Tony Battersby wrote:
>> dmapool originally tried to support pools without a device because
>> dma_alloc_coherent() supports allocations without a device. But nobody
>> ended up using dma pools without a device,
On 11/13/18 1:36 AM, Matthew Wilcox wrote:
> On Mon, Nov 12, 2018 at 10:46:35AM -0500, Tony Battersby wrote:
>> Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
>> driver corrupts DMA pool memory.
>>
>> Signed-off-by: Tony Battersby
> I lik
On 12/4/18 3:30 PM, Andy Shevchenko wrote:
> On Tue, Dec 4, 2018 at 10:18 PM Matthew Wilcox wrote:
>> On Tue, Dec 04, 2018 at 12:14:43PM -0800, Andrew Morton wrote:
>>> Also, Andy had issues with the v2 series so it would be good to hear an
>>> update from him?
>> Certainly.
> Hmm... I certainly f
Add a 'blks_per_alloc' value to the dma_pool structure that takes the
boundary into account, and use it to replace the inaccurate calculation.
This depends on the patch "dmapool: fix boundary comparison" for the
calculated blks_per_alloc value to be correct.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 7 +--
1 file changed, 5 insertions(+), 2
Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
driver corrupts DMA pool memory.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 37 ++---
1 file changed, 30 insertions(+), 7 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 7a9161d4f7a6..49019ef6dd83 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -415,8 +415,6 @@ void dma_pool_free(struct dma_pool *pool, void *vad
Avoid double-memset of the same allocated memory in dma_pool_alloc()
when both DMAPOOL_DEBUG is enabled and init_on_alloc=1.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 49019ef6dd83
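A sketch of the shape of the fix in dma_pool_alloc(); the exact placement
in the real patch may differ:

#ifdef	DMAPOOL_DEBUG
	/* Poisoning is wasted work when init_on_alloc will immediately
	 * zero the block; write the memory only once.
	 */
	if (!want_init_on_alloc(mem_flags))
		memset(retval, POOL_POISON_ALLOCATED, pool->size);
#endif
	if (want_init_on_alloc(mem_flags))
		memset(retval, 0, pool->size);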
Remove a small amount of code duplication between dma_pool_destroy() and
pool_free_page() in preparation for adding more code without having to
duplicate it. No functional changes.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 34 --
1 file changed, 20
In big O notation, this improves the algorithm from O(n^2) to O(n).
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 27 +++
1 file changed, 23 insertions(+), 4 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 58c11dcaa4e4..b3dd2ace0d2a 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
).
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 128 ---
1 file changed, 100 insertions(+), 28 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index b3dd2ace0d2a..24535483f781 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -12,11 +12,12
The checks for dev == NULL are no longer necessary, so remove them.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 42 +++---
1 file changed, 11 insertions(+), 31 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index a7eb5d0eb2da..0f89de408cbe 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -275,7 +275,7 @@ v
The address ranges marked with "*" would not have been used even though
they didn't cross the given boundary.
Fixes: e34f44b3517f ("pool: Improve memory usage for devices which can't cross boundaries")
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 2 +-
1 file changed, 1
This patch series improves dmapool scalability by replacing linear scans
with red-black trees.
History:
In 2018 this patch series made it through 4 versions. v1 used red-black
trees; v2 - v4 put the dma pool info directly into struct page and used
virt_to_page() to get at it. v4 made a brief ap
To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places. Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.
Signed-off-by: Tony Battersby
On 5/31/22 15:48, Robin Murphy wrote:
> On 2022-05-31 19:17, Tony Battersby wrote:
>
>> pool->name, blocks,
>> - (size_t) pages *
>> - (pool->allocation / pool->size),
>>
On 5/31/22 17:54, Keith Busch wrote:
> On Tue, May 31, 2022 at 02:23:44PM -0400, Tony Battersby wrote:
>> dma_pool_free() scales poorly when the pool contains many pages because
>> pool_find_page() does a linear scan of all allocated pages. Improve its
>> scalability by repl
This patch series improves dmapool scalability by replacing linear scans
with red-black trees.
Note that Keith Busch is also working on improving dmapool scalability,
so for now I would recommend not merging my scalability patches until
Keith's approach can be evaluated. In the meantime, my patch
So remove the checks for dev == NULL since they are unneeded bloat.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 42 +++---
1 file changed, 11 insertions(+), 31 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index a7eb5d0eb2da..0f89de408cbe 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -275,7 +275,7 @@ v
Use sysfs_emit instead of scnprintf, snprintf or sprintf.
Signed-off-by: Tony Battersby
---
Changes since v5:
This patch was not in v5.
mm/dmapool.c | 23 +++
1 file changed, 7 insertions(+), 16 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 0f89de408cbe
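A sketch of the conversion in the pools_show() attribute, simplified; the
real function also prints the per-pool block and page statistics:

	static ssize_t pools_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
	{
		struct dma_pool *pool;
		int size;

		size = sysfs_emit(buf, "poolinfo - 0.1\n");
		mutex_lock(&pools_lock);
		list_for_each_entry(pool, &dev->dma_pools, pools) {
			/* sysfs_emit_at() bounds the write to PAGE_SIZE
			 * itself, so no manual remaining-space arithmetic
			 * is needed.
			 */
			size += sysfs_emit_at(buf, size, "%-16s %4u\n",
					      pool->name, pool->size);
		}
		mutex_unlock(&pools_lock);
		return size;
	}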
To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places. Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.
Signed-off-by: Tony Battersby
The address ranges marked with "*" would not have been used even though
they didn't cross the given boundary.
Fixes: e34f44b3517f ("pool: Improve memory usage for devices which can't cross boundaries")
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 2 +-
1 file changed, 1
Add a 'blks_per_alloc' value to the dma_pool structure that takes the
boundary into account, and use it to replace the inaccurate calculation.
This depends on the patch "dmapool: fix boundary comparison" for the
calculated blks_per_alloc value to be correct.
Signed-off-by: Tony Battersby
---
The added blks_per_alloc value will also be used in the
Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
driver corrupts DMA pool memory.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 37 ++---
1 file changed, 30 insertions(+), 7 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index facdb3571976..44038089a41a 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -406,8 +406,6 @@ void dma_pool_free(struct dma_pool *pool, void *vad
Avoid double-memset of the same allocated memory in dma_pool_alloc()
when both DMAPOOL_DEBUG is enabled and init_on_alloc=1.
Signed-off-by: Tony Battersby
---
mm/dmapool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 44038089a41a
pool_free_page() is called only from dma_pool_destroy(), so inline it
and make it less generic since we know that the pool is being destroyed.
Signed-off-by: Tony Battersby
---
Changes since v5:
Take the opposite approach and inline pool_free_page() into
dma_pool_destroy() instead of moving the
In big O notation, this improves the algorithm from O(n) to O(1).
Signed-off-by: Tony Battersby
---
Changes since v5:
pool_free_page() no longer exists.
Updated big O usage in description.
mm/dmapool.c | 26 ++
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/mm/dmapool.c b/mm
Signed-off-by: Tony Battersby
---
Changes since v5:
pool_free_page() no longer exists.
Less churn in dma_pool_destroy().
Updated big O usage in description.
mm/dmapool.c | 114 ---
1 file changed, 90 insertions(+), 24 deletions(-)
diff --git a/mm/dmapool.c b
In big O notation, this improves the algorithm from O(n^2) to O(n).
Signed-off-by: Tony Battersby
---
Using list_del_init() in dma_pool_alloc() makes it safe to call
list_del() unconditionally when freeing the page.
In dma_pool_free(), the check for being already in avail_page_list could
be written several different ways.
Replace chain_dma_pool with direct calls to dma_alloc_coherent() and
dma_free_coherent(). Since the chain lookup can involve hundreds of
thousands of allocations, it is worthwhile to avoid the overhead of the
dma_pool API.
Signed-off-by: Tony Battersby
---
The original code called
drivers/scsi/mpt3sas is running into a scalability problem with the
kernel's DMA pool implementation. With a LSI/Broadcom SAS 9300-8i
12Gb/s HBA and max_sgl_entries=256, during modprobe, mpt3sas does the
equivalent of:
chain_dma_pool = dma_pool_create(size = 128);
for (i = 0; i < 373959; i++)
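For illustration, the direction the replacement takes: allocate a few
large coherent chunks and slice them into 128-byte chain buffers instead
of making hundreds of thousands of dma_pool_alloc() calls. The
chain_tracker/chain_lookup names follow mpt3sas, but the chunk size and
the loop are assumptions here, not the actual patch:

	#define CHAIN_SZ		128
	#define CHUNK_SZ		(64 * 1024)	/* one DMA allocation */
	#define CHAINS_PER_CHUNK	(CHUNK_SZ / CHAIN_SZ)

	void *chunk;
	dma_addr_t chunk_dma;
	int i;

	chunk = dma_alloc_coherent(&ioc->pdev->dev, CHUNK_SZ,
				   &chunk_dma, GFP_KERNEL);
	if (!chunk)
		return -ENOMEM;

	/* Carve one coherent chunk into many chain buffers. */
	for (i = 0; i < CHAINS_PER_CHUNK; i++) {
		ioc->chain_lookup[i].chain_buffer = chunk + i * CHAIN_SZ;
		ioc->chain_lookup[i].chain_buffer_dma =
						chunk_dma + i * CHAIN_SZ;
	}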
).
Signed-off-by: Tony Battersby
---
I moved some code from dma_pool_destroy() into pool_free_page() to avoid code
repetition.
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -15,11 +15,12 @@
* Many older drivers still have their own code to do this.
*
* The current design of this allocator is fairly
On 07/26/2018 03:37 PM, Andy Shevchenko wrote:
> On Thu, Jul 26, 2018 at 9:54 PM, Tony Battersby wrote:
>> dma_pool_alloc() scales poorly when allocating a large number of pages
>> because it does a linear scan of all previously-allocated pages before
>> allocating a
On 07/26/2018 03:42 PM, Matthew Wilcox wrote:
> On Thu, Jul 26, 2018 at 02:54:56PM -0400, Tony Battersby wrote:
>> dma_pool_free() scales poorly when the pool contains many pages because
>> pool_find_page() does a linear scan of all allocated pages. Improve its
>> scalabi
On 07/26/2018 08:07 PM, Matthew Wilcox wrote:
> On Thu, Jul 26, 2018 at 04:06:05PM -0400, Tony Battersby wrote:
>> On 07/26/2018 03:42 PM, Matthew Wilcox wrote:
>>> On Thu, Jul 26, 2018 at 02:54:56PM -0400, Tony Battersby wrote:
>>>> dma_pool_free() scales poorly whe
> That would be true if the test were "if
> (list_empty(&pool->avail_page_list))". But it is testing the list
> pointers in the item rather than the list pointers in the pool. It may
> be a bit confusing if you have never seen that usage before, which is
> why I added a comment. Basically, if y
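The idiom being described, as a sketch; the avail_page_link and
avail_page_list names are taken from earlier in the thread:

	/* Removing with list_del_init() leaves the entry pointing at
	 * itself, so list_empty() on the *entry* later answers "is this
	 * page still on the avail list?" without touching the pool.
	 */
	list_del_init(&page->avail_page_link);	/* page became full */

	/* ... later, in dma_pool_free(), when a block comes back: */
	if (list_empty(&page->avail_page_link))
		list_add(&page->avail_page_link, &pool->avail_page_list);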
On 07/27/2018 11:23 AM, Matthew Wilcox wrote:
> On Fri, Jul 27, 2018 at 09:23:30AM -0400, Tony Battersby wrote:
>> On 07/26/2018 08:07 PM, Matthew Wilcox wrote:
>>> If you're up for more major surgery, then I think we can put all the
>>> information currently sto
On 07/27/2018 03:38 PM, Tony Battersby wrote:
> But the bigger problem is that my first patch adds another list_head to
> the dma_page for the avail_page_link to make allocations faster. I
> suppose we could make the lists singly-linked instead of doubly-linked
> to save space.
>
On 07/27/2018 05:35 PM, Andy Shevchenko wrote:
> On Sat, Jul 28, 2018 at 12:27 AM, Tony Battersby
> wrote:
>> On 07/27/2018 03:38 PM, Tony Battersby wrote:
>>> But the bigger problem is that my first patch adds another list_head to
>>> the dma_page for the avail
On 07/27/2018 05:27 PM, Tony Battersby wrote:
> On 07/27/2018 03:38 PM, Tony Battersby wrote:
>> But the bigger problem is that my first patch adds another list_head to
>> the dma_page for the avail_page_link to make allocations faster. I
>> suppose we could make the lists
Major changes since v1:
*) Replaced the red-black tree with virt_to_page(), which takes us to
O(n) instead of O(n * log n). The mpt3sas benchmarks only improved a
little though (18 ms -> 17 ms on alloc and 19 ms -> 15 ms on free).
*) Eliminated struct dma_page. dmapool private data are now stored in
struct page.
The address ranges marked with "*" would not have been used even though
they didn't cross the given boundary.
Fixes: e34f44b3517f ("pool: Improve memory usage for devices which can't cross boundaries")
Signed-off-by: Tony Battersby
---
As part of developing a later patch
Error message with dev == NULL before patch:
dma_pool_destroy chain pool, (ptrval) busy
Same error message with dev == NULL after patch:
(NULL device *): dma_pool_destroy chain pool, (ptrval) busy
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-02 09:54:25.0 -0400
Remove a small amount of code duplication between dma_pool_destroy() and
pool_free_page() in preparation for adding more code without having to
duplicate it. No functional changes.
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-02 09:59:15.0 -0400
+++ linux/mm
In big O notation, this improves the algorithm from O(n^2) to O(n).
Signed-off-by: Tony Battersby
---
Changes since v1:
*) In v1, there was one (original) list for all pages and one (new) list
for pages with free blocks. In v2, there is one list for pages with
free blocks and one list for pages without free blocks
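A sketch of the allocation fast path under this two-list scheme; the list
and field names follow the thread, details of the real patch differ and
error handling is elided:

	struct dma_page *page;
	unsigned int offset;

	page = list_first_entry_or_null(&pool->page_list[POOL_AVAIL_IDX],
					struct dma_page, page_list);
	if (!page)
		page = pool_alloc_page(pool, mem_flags);	/* no scan needed */

	page->dma_in_use++;
	offset = page->dma_free_off;
	page->dma_free_off = *(int *)(page->vaddr + offset);
	if (page->dma_free_off >= pool->allocation)
		/* Last free block handed out: park the page on the full
		 * list so allocation never walks over full pages again.
		 */
		list_move(&page->page_list,
			  &pool->page_list[POOL_FULL_IDX]);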
Rename fields in 'struct dma_page' in preparation for moving them into
'struct page'. No functional changes.
in_use -> dma_in_use
offset -> dma_free_o
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-02 10:03:46.0 -0400
+++ linux/mm/d
Store the DMA pool metadata in 'struct page', eliminating
'struct dma_page'. In big O notation, this improves the algorithm from
O(n^2) to O(n) while also reducing memory usage.
Thanks to Matthew Wilcox for the suggestion to use struct page.
Signed-off-by: Tony Battersby
---
Completely rewritten since v1.
Prior to this patch, if you passed d
Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
driver corrupts DMA pool memory.
Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig 2018-08-02 10:14:25.0 -0400
+++ linux/mm/dmapool.c 2018-08-02 10:16:17.0 -0400
@@ -449,16 +449,35 @@ void
This is my attempt to shrink 'dma_free_o' and 'dma_in_use' in 'struct
page' (originally 'offset' and 'in_use' in 'struct dma_page') to 16-bit
so that it is unnecessary to use the '_mapcount' field of 'struct
page'. However, it adds complexity and makes allocating and freeing up
to 20% slower for l
Replace chain_dma_pool with direct calls to dma_alloc_coherent() and
dma_free_coherent(). Since the chain lookup can involve hundreds of
thousands of allocations, it is worthwhile to avoid the overhead of the
dma_pool API.
Signed-off-by: Tony Battersby
---
No changes since v1.
The original
On 08/03/2018 04:56 AM, Andy Shevchenko wrote:
> On Thu, Aug 2, 2018 at 10:57 PM, Tony Battersby wrote:
>> Remove code duplication in error messages. It is now safe to pass a NULL
>> dev to dev_err(), so the checks to avoid doing so are no longer
>> necessary.
>>
>
On 08/03/2018 05:02 AM, Andy Shevchenko wrote:
> On Thu, Aug 2, 2018 at 10:58 PM, Tony Battersby wrote:
>> dma_pool_alloc() scales poorly when allocating a large number of pages
>> because it does a linear scan of all previously-allocated pages before
>> allocating a
On 08/02/2018 07:56 PM, Matthew Wilcox wrote:
>
>> One of the nice things about this is that dma_pool_free() can do some
>> additional sanity checks:
>> *) Check that the offset of the passed-in address corresponds to a valid
>> block offset.
> Can't we do that already? Subtract the base address o
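One way the suggested check could look in dma_pool_free(); this is a
sketch of the idea under discussion, not code from the series:

	unsigned int offset = vaddr - page->vaddr;	/* byte offset into the page */
	unsigned int in_chunk = offset % pool->boundary;

	/* A valid block starts at a multiple of the block size within
	 * its boundary chunk and leaves room for a whole block before
	 * the next boundary.
	 */
	if (in_chunk % pool->size != 0 ||
	    in_chunk + pool->size > pool->boundary) {
		dev_err(pool->dev, "%s: %p is not a valid block\n",
			pool->name, vaddr);
		return;
	}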
On 08/02/2018 07:56 PM, Matthew Wilcox wrote:
>
>> struct dma_pool { /* the pool */
>> #define POOL_FULL_IDX	0
>> #define POOL_AVAIL_IDX	1
>> #define POOL_N_LISTS	2
>> 	struct list_head page_list[POOL_N_LISTS];
>> 	spinlock_t lock;
>> -	size_t size;
>> struct dev
On 08/03/2018 09:41 AM, Tony Battersby wrote:
> On 08/03/2018 04:56 AM, Andy Shevchenko wrote:
>> On Thu, Aug 2, 2018 at 10:57 PM, Tony Battersby
>> wrote:
>>> Remove code duplication in error messages. It is now safe to pass a NULL
>>> dev to dev_err(), so th
On 08/03/2018 12:01 PM, Andy Shevchenko wrote:
> On Fri, Aug 3, 2018 at 6:59 PM, Andy Shevchenko
> wrote:
>> On Fri, Aug 3, 2018 at 6:17 PM, Tony Battersby wrote:
>>> But then I decided to simplify it to just use dev_err(). I still have
>>> the old version. Wh
On 08/03/2018 12:22 PM, Matthew Wilcox wrote:
> On Fri, Aug 03, 2018 at 06:59:20PM +0300, Andy Shevchenko wrote:
> I'm pretty sure this was created in order to avoid bad looking (and
> in some cases frightening) "NULL device *" part.
>> JFYI: git log --no-merges --grep 'NULL device \*'
>
On 08/03/2018 01:03 PM, Tony Battersby wrote:
> On 08/03/2018 12:22 PM, Matthew Wilcox wrote:
>> On Fri, Aug 03, 2018 at 06:59:20PM +0300, Andy Shevchenko wrote:
>>>>>> I'm pretty sure this was created in order to avoid bad looking (and
On 08/03/2018 02:38 PM, Andy Shevchenko wrote:
>
>> dma_alloc_coherent() does appear to support a NULL dev, so it might make
>> sense in theory. But I can't find any in-tree callers that actually
>> pass a NULL dev to dma_pool_create(). So for one of the dreaded (NULL
>> device *) messages to sho
For v3 of the patchset, I was also considering to add a note to the
kernel-doc comments for dma_pool_create() to use dma_alloc_coherent()
directly instead of a dma pool if the driver intends to allow userspace
to mmap() the returned pages, due to the new use of the _mapcount union
in struct page.
On 08/03/2018 05:07 PM, Matthew Wilcox wrote:
> On Fri, Aug 03, 2018 at 02:43:07PM -0400, Tony Battersby wrote:
>> Out of curiosity, I just tried to create a dmapool with a NULL dev and
>> it crashed on this:
>>
>> static inline int dev_to_node(struct device *d
Major changes since v2:
*) Addressed review comments.
*) Changed the description of patch #2.
*) Dropped "dmapool: reduce footprint in struct page", but split off
parts of it for merging (patches #7 and #8) and used it to improve
patch #9.
---
drivers/scsi/mpt3sas is running into a scala
The address ranges marked with "*" would not have been used even though
they didn't cross the given boundary.
Fixes: e34f44b3517f ("pool: Improve memory usage for devices which can't cross boundaries")
Signed-off-by: Tony Battersby
---
No changes since v2.
Even though I desc
The checks for dev == NULL are no longer necessary, so remove them.
Signed-off-by: Tony Battersby
---
Changes since v2:
*) This was "dmapool: cleanup error messages" in v2.
*) Remove one more check for dev == NULL in dma_pool_destroy() that is
unrelated to error messages.
--- linux/mm/dmapool.c.orig 2018-08-03 16:12:23.0 -0400
+++ linux
In big O notation, this improves the algorithm from O(n^2) to O(n).
Signed-off-by: Tony Battersby
---
Changes since v2:
*) Use list_move()/list_move_tail() instead of list_del+list_add().
*) Renamed POOL_N_LISTS to POOL_MAX_IDX.
*) Use defined names instead of 0/1 indexes for INIT_LIST_HEAD().
--- linux/mm
Rename fields in 'struct dma_page' in preparation for moving them into
'struct page'. No functional changes.
in_use -> dma_in_use
offset -> dma_free_off
Signed-off-by: Tony Battersby
---
Changes since v2:
Use dma_free_off instead of dma_free_o.
--- linux/mm/dmapoo
Remove a small amount of code duplication between dma_pool_destroy() and
pool_free_page() in preparation for adding more code without having to
duplicate it. No functional changes.
Signed-off-by: Tony Battersby
---
No changes since v2.
--- linux/mm/dmapool.c.orig 2018-08-02 09:59
Store the DMA pool metadata in 'struct page', eliminating
'struct dma_page'. In big O notation, this improves the algorithm from
O(n^2) to O(n) while also reducing memory usage.
Thanks to Matthew Wilcox for the suggestion to use struct page.
Signed-off-by: Tony Battersby
---
Changes since v2:
Just a re-diff after the changes in prior patches
To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places. Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.
Signed-off-by: Tony Battersby
Add a 'blks_per_alloc' value to the dma_pool structure that takes the
boundary into account, and use it to replace the inaccurate calculation.
Signed-off-by: Tony Battersby
---
This was split off from "dmapool: reduce footprint in struct page" in v2.
This depends on patch #1 "dmapool: fix boundary comparison" for the
calculated blks_p
Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
driver corrupts DMA pool memory.
Signed-off-by: Tony Battersby
---
Changes since v2:
This is closer to the improved version from "dmapool: reduce footprint
in struct page" in v2 thanks to a previous pa
Replace chain_dma_pool with direct calls to dma_alloc_coherent() and
dma_free_coherent(). Since the chain lookup can involve hundreds of
thousands of allocations, it is worthwhile to avoid the overhead of the
dma_pool API.
Signed-off-by: Tony Battersby
---
No changes since v1.
The original
On 08/08/2018 05:54 AM, Andy Shevchenko wrote:
> On Tue, Aug 7, 2018 at 7:49 PM, Tony Battersby wrote:
>> The "total number of blocks in pool" debug statistic currently does not
>> take the boundary value into account, so it diverges from the "total
>> numbe