From: zijun_hu
get_cpu_number() doesn't use the existing helper to iterate over possible
CPUs, which causes an error in the case of a discontinuous @cpu_possible_mask
such as 0b0001; such a discontinuous @cpu_possible_mask likely results
from one core having failed to come up on an SMP ma
On 09/15/2017 03:20 AM, Marc Zyngier wrote:
> On Thu, Sep 14 2017 at 1:15:14 pm BST, zijun_hu wrote:
>> From: zijun_hu
>>
>> get_cpu_number() doesn't use existing helper to iterate over possible
>> CPUs, so error happens in case of discontinuous @cpu_possible_mas
From: zijun_hu
get_cpu_number() doesn't use the existing helper to iterate over possible
CPUs, so an error happens in the case of a discontinuous @cpu_possible_mask
such as 0b0001.
fixed by using the existing helper for_each_possible_cpu().
Signed-off-by: zijun_hu
---
drivers/irqchip/irq-gic-v3.
On 2017/9/7 0:40, Tejun Heo wrote:
> On Thu, Sep 07, 2017 at 12:04:59AM +0800, zijun_hu wrote:
>> On 2017/9/6 22:33, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Wed, Sep 06, 2017 at 11:34:14AM +0800, zijun_hu wrote:
>>>> From: zijun_hu
>
On 2017/9/6 22:33, Tejun Heo wrote:
> Hello,
>
> On Wed, Sep 06, 2017 at 11:34:14AM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> type bool is used to index three arrays in alloc_and_link_pwqs()
>> it doesn't look like conventional.
>>
>> i
From: zijun_hu
type bool is used to index three arrays in alloc_and_link_pwqs(),
which doesn't look conventional.
it is fixed by using type int to index the relevant arrays.
Signed-off-by: zijun_hu
---
kernel/workqueue.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
On 07/19/2017 06:44 PM, Zhaoyang Huang wrote:
> /proc/vmallocinfo will not show the area allocated by vm_map_ram, which
> will make confusion when debug. Add vm_struct for them and show them in
> proc.
>
> Signed-off-by: Zhaoyang Huang
> ---
another patch titled "vmalloc: show lazy-purged vma inf
On 07/18/2017 04:31 PM, Zhaoyang Huang (黄朝阳) wrote:
>
> It is no need to find the very beginning of the area within
> alloc_vmap_area, which can be done by judging each node during the process
>
it seems the original code was written to achieve the following two purposes:
A, the resulting vmap_area ha
On 07/17/2017 04:45 PM, zijun_hu wrote:
> On 07/17/2017 04:07 PM, Zhaoyang Huang wrote:
>> It is no need to find the very beginning of the area within
>> alloc_vmap_area, which can be done by judging each node during the process
>>
>> For current approach, the worst cas
On 07/17/2017 04:07 PM, Zhaoyang Huang wrote:
> It is no need to find the very beginning of the area within
> alloc_vmap_area, which can be done by judging each node during the process
>
> For current approach, the worst case is that the starting node which be found
> for searching the 'vmap_area_
On 10/13/2016 02:39 PM, zijun_hu wrote:
Hi Nicholas,
could you give some comments on this patch?
thanks a lot
> Hi Nicholas,
>
> i find __insert_vmap_area() is introduced by you
> could you offer comments for this patch related to that funciton
>
> thanks
>
> On 10/
From: zijun_hu
the percpu allocator currently only works well when allocating a power of
2 aligned area, but there aren't any hints about this alignment requirement,
so memory leakage may be caused by allocating areas with other alignments
the alignment must be even at least, since the LSB of a chunk
On 2016/10/14 8:33, Tejun Heo wrote:
> Hello,
>
> On Fri, Oct 14, 2016 at 07:49:44AM +0800, zijun_hu wrote:
>> the main intent of this change is making the CPU grouping algorithm more
>> easily to understand, especially, for newcomer for memory managements
>> take me
On 2016/10/14 8:34, Tejun Heo wrote:
> On Tue, Oct 11, 2016 at 09:29:27PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> as shown by pcpu_setup_first_chunk(), the first chunk is same as the
>> reserved chunk if the reserved size is nonzero but the dynamic is zero
On 2016/10/14 8:28, Tejun Heo wrote:
> Hello,
>
> On Fri, Oct 14, 2016 at 08:23:06AM +0800, zijun_hu wrote:
>> for the current code, only power of 2 alignment value can works well
>>
>> is it acceptable to performing a power of 2 checking and returning error code
>
On 2016/10/14 7:31, Tejun Heo wrote:
> On Tue, Oct 11, 2016 at 09:24:50PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> the LSB of a chunk->map element is used for free/in-use flag of a area
>> and the other bits for offset, the sufficient and necessary condition
On 2016/10/14 7:29, Tejun Heo wrote:
> On Tue, Oct 11, 2016 at 10:00:28PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> as shown by pcpu_build_alloc_info(), the number of units within a percpu
>> group is educed by rounding up the number of CPUs within the group to
On 2016/10/14 7:37, Tejun Heo wrote:
> Hello, Zijun.
>
> On Tue, Oct 11, 2016 at 08:48:45PM +0800, zijun_hu wrote:
>> compared with the original algorithm theoretically and practically, the
>> new one educes the same grouping results, besides, it is more effective,
>
Hi Nicholas,
i find __insert_vmap_area() was introduced by you
could you offer comments on this patch related to that function
thanks
On 10/12/2016 10:46 PM, Michal Hocko wrote:
> [Let's CC Nick who has written this code]
>
> On Wed 12-10-16 22:30:13, zijun_hu wrote:
>
On 10/13/2016 05:41 AM, Andrew Morton wrote:
> On Tue, 11 Oct 2016 22:00:28 +0800 zijun_hu wrote:
>
>> as shown by pcpu_build_alloc_info(), the number of units within a percpu
>> group is educed by rounding up the number of CPUs within the group to
>> @upa boundary, there
On 2016/10/12 22:46, Michal Hocko wrote:
> [Let's CC Nick who has written this code]
>
> On Wed 12-10-16 22:30:13, zijun_hu wrote:
>> From: zijun_hu
>>
>> the KVA allocator organizes vmap_areas allocated by rbtree. in order to
>> insert a new vmap_area
From: zijun_hu
the KVA allocator organizes the allocated vmap_areas in an rbtree. in order
to insert a new vmap_area @i_va into the rbtree, walk the rbtree from the
root and compare each vmap_area @t_va met on the rbtree against @i_va; walk
toward the left branch of @t_va if @i_va is lower than @t_va
From: zijun_hu
many seq_file helpers exist for simplifying the implementation of virtual
files, especially for /proc nodes. however, the helpers for iteration over
a list_head are available but aren't currently adopted to implement
/proc/vmallocinfo.
simplify the /proc/vmallocinfo implementati
On 10/12/2016 05:54 PM, Michal Hocko wrote:
> On Wed 12-10-16 16:44:31, zijun_hu wrote:
>> On 10/12/2016 04:25 PM, Michal Hocko wrote:
>>> On Wed 12-10-16 15:24:33, zijun_hu wrote:
> [...]
>>>> i found the following code segments in mm/vmalloc.c
>>>
On 10/12/2016 04:25 PM, Michal Hocko wrote:
> On Wed 12-10-16 15:24:33, zijun_hu wrote:
>> On 10/12/2016 02:53 PM, Michal Hocko wrote:
>>> On Wed 12-10-16 08:28:17, zijun_hu wrote:
>>>> On 2016/10/12 1:22, Michal Hocko wrote:
>>>>> On Tue 11-10-16 21
On 10/12/2016 02:53 PM, Michal Hocko wrote:
> On Wed 12-10-16 08:28:17, zijun_hu wrote:
>> On 2016/10/12 1:22, Michal Hocko wrote:
>>> On Tue 11-10-16 21:24:50, zijun_hu wrote:
>>>> From: zijun_hu
>>>>
>>>> the LSB of a chunk->map element
On 2016/10/12 1:22, Michal Hocko wrote:
> On Tue 11-10-16 21:24:50, zijun_hu wrote:
>> From: zijun_hu
>>
>> the LSB of a chunk->map element is used for free/in-use flag of a area
>> and the other bits for offset, the sufficient and necessary condition of
>&
Hi all,
please ignore this patch since it includes a build error
i have resent the fixed patch as v2
i am sorry for my carelessness
On 2016/10/11 21:03, zijun_hu wrote:
> From: zijun_hu
>
> as shown by pcpu_build_alloc_info(), the number of units within a percpu
> group is educed
From: zijun_hu
as shown by pcpu_build_alloc_info(), the number of units within a percpu
group is derived by rounding up the number of CPUs within the group to the
@upa boundary; therefore, the number of CPUs isn't equal to the number of
units if it isn't aligned to @upa no
From: zijun_hu
as shown by pcpu_setup_first_chunk(), the first chunk is the same as the
reserved chunk if the reserved size is nonzero but the dynamic one is zero
this special scenario is referred to as the special case in the content below
fix several trivial issues:
1) correct or fix several comments
the
From: zijun_hu
the LSB of a chunk->map element is used as the free/in-use flag of an area
and the other bits as the offset; the sufficient and necessary condition of
this usage is that both the size and alignment of an area must be even
however, pcpu_alloc() doesn't force its @align paramete
From: zijun_hu
as shown by pcpu_build_alloc_info(), the number of units within a percpu
group is derived by rounding up the number of CPUs within the group to the
@upa boundary; therefore, the number of CPUs isn't equal to the number of
units if it isn't aligned to @upa no
On 2016/10/11 20:48, zijun_hu wrote:
> From: zijun_hu
> in order to verify the new algorithm, we enumerate many pairs of type
> @pcpu_fc_cpu_distance_fn_t function and the relevant CPU IDs array such
> below sample, then apply both algorithms to the same pair and print the
> g
From: zijun_hu
pcpu_build_alloc_info() groups CPUs according to their relevant proximity
in order to allocate memory for each percpu unit on a per-group basis.
however, the grouping algorithm actually consists of three loops and a goto
statement, and is inefficient and difficult to understand
the original
From: zijun_hu
in order to ensure the percpu group areas within a chunk aren't
distributed too sparsely, pcpu_embed_first_chunk() goes to the error handling
path when a chunk spans over 3/4 of the VMALLOC area; however, during the
error handling, it forgets to free the memory allocated for all percpu g
From: zijun_hu
pcpu_embed_first_chunk() calculates the range a percpu chunk spans into
@max_distance and uses it to ensure that a chunk is not too big compared
to the total vmalloc area. However, during the calculation, it uses an
incorrect top address by adding a unit size to the highest group's
Hi Tejun,
as we discussed, i have included some of the discussion content in the commit
message. could you give some new comments or an acknowledgment for this patch?
On 2016/9/30 19:30, zijun_hu wrote:
> From: zijun_hu
>
> it will cause memory leakage for pcpu_embed_first_chunk() to go to
> lab
From: zijun_hu
it causes memory leakage when pcpu_embed_first_chunk() goes to
label @out_free because the chunk spans over 3/4 of the VMALLOC area. memory
is allocated and recorded into array @areas for each CPU group, but
the allocated memory isn't freed before returning after going to
On 2016/9/30 0:44, Tejun Heo wrote:
> Hello,
>
> On Fri, Sep 30, 2016 at 12:03:20AM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> it will cause memory leakage for pcpu_embed_first_chunk() to go to
>> label @out_free if the chunk spans over 3/4 VMALLOC area
On 2016/9/29 18:35, Tejun Heo wrote:
> Hello,
>
> On Sat, Sep 24, 2016 at 07:20:49AM +0800, zijun_hu wrote:
>> it is error to represent the max range max_distance spanned by all the
>> group areas as the offset of the highest group area plus unit size in
>> pcpu_emb
From: zijun_hu
simplify /proc/vmallocinfo implementation via existing seq_file
helpers for list_head
Signed-off-by: zijun_hu
---
Changes in v2:
- more detailed commit message is provided
- the redundant type cast for list_entry() is removed as advised
by rient...@google.com
mm
From: zijun_hu
__insert_vmap_area() has a few obvious logic errors, as shown by the comments
within the code segments below
static void __insert_vmap_area(struct vmap_area *va)
{
as an internal function parameter, we assume vmap_area @va has nonzero size
...
if (va->va_start < tmp->
From: zijun_hu
macro PAGE_ALIGNED() is prone to causing errors because it doesn't follow
the convention of parenthesizing parameter @addr within the macro body; for
example, given unsigned long *ptr = kmalloc(...); PAGE_ALIGNED(ptr + 16);
the left parameter of macro IS_ALIGNED() becomes (unsigned long)(ptr + 1
On 09/22/2016 07:15 AM, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>> We don't support inserting when va->va_start == tmp_va->va_end, plain and
>>> simple. There's no reason to do so. NACK to the patch.
>>>
>> i a
From: zijun_hu
it is an error to represent the max range @max_distance spanned by all the
group areas as the offset of the highest group area plus the unit size in
pcpu_embed_first_chunk(); it should equal the offset plus the size
of the highest group area
in order to fix this issue, let us find the
On 2016/9/24 3:23, Tejun Heo wrote:
> On Sat, Sep 24, 2016 at 02:20:24AM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> correct max_distance from (base of the highest group + ai->unit_size)
>> to (base of the highest group + the group size)
>>
>> Signed-o
From: zijun_hu
correct max_distance from (base of the highest group + ai->unit_size)
to (base of the highest group + the group size)
Signed-off-by: zijun_hu
---
mm/percpu.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
in
From: zijun_hu
simplify the CPU grouping logic in pcpu_build_alloc_info() to improve
readability and performance; it discards the goto statement too
for every possible cpu, decide whether it can share the group id of any
lower-index CPU; use that group id if so, otherwise a new group id
is allocated to
On 2016/9/23 22:42, Tejun Heo wrote:
> Hello,
>
> On Wed, Sep 21, 2016 at 12:19:53PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> endless loop maybe happen if either of parameter addr and end is not
>> page aligned for kernel API function ioremap_page_range()
On 2016/9/23 22:27, Michal Hocko wrote:
> On Fri 23-09-16 22:14:40, zijun_hu wrote:
>> On 2016/9/23 21:33, Michal Hocko wrote:
>>> On Fri 23-09-16 21:00:18, zijun_hu wrote:
>>>> On 09/23/2016 08:42 PM, Michal Hocko wrote:
>>>>>>>> no, it
On 2016/9/23 21:33, Michal Hocko wrote:
> On Fri 23-09-16 21:00:18, zijun_hu wrote:
>> On 09/23/2016 08:42 PM, Michal Hocko wrote:
>>>>>> no, it don't work for many special case
>>>>>> for example, provided PMD_SIZE=2M
>>>>>> ma
On 09/23/2016 08:42 PM, Michal Hocko wrote:
no, it doesn't work for many special cases
for example, provided PMD_SIZE=2M,
mapping the virtual range [0x1f8800, 0x208800) will be split into the two
ranges [0x1f8800, 0x20) and [0x20, 0x208800), which are mapped separately
the first range
On 2016/9/23 16:45, Michal Hocko wrote:
> On Thu 22-09-16 23:13:17, zijun_hu wrote:
>> On 2016/9/22 20:47, Michal Hocko wrote:
>>> On Wed 21-09-16 12:19:53, zijun_hu wrote:
>>>> From: zijun_hu
>>>>
>>>> endless loop maybe happen if either
On 09/21/2016 12:34 PM, zijun_hu wrote:
> From: zijun_hu
>
> fix the following bug:
> - endless loop maybe happen when v[un]mapping improper ranges
>whose either boundary is not aligned to page
>
> Signed-off-by: zijun_hu
> ---
> mm/vmalloc.c | 9 +++--
>
On 09/21/2016 12:19 PM, zijun_hu wrote:
> From: zijun_hu
>
> endless loop maybe happen if either of parameter addr and end is not
> page aligned for kernel API function ioremap_page_range()
>
> in order to fix this issue and alert improper range parameters to user
>
On 2016/9/23 11:30, Nicholas Piggin wrote:
> On Fri, 23 Sep 2016 00:30:20 +0800
> zijun_hu wrote:
>
>> On 2016/9/22 20:37, Michal Hocko wrote:
>>> On Thu 22-09-16 09:13:50, zijun_hu wrote:
>>>> On 09/22/2016 08:35 AM, David Rientjes wrote:
>>> [..
On 2016/9/22 20:37, Michal Hocko wrote:
> On Thu 22-09-16 09:13:50, zijun_hu wrote:
>> On 09/22/2016 08:35 AM, David Rientjes wrote:
> [...]
>>> The intent is as it is implemented; with your change, lazy_max_pages() is
>>> potentially increased depending on the n
On 2016/9/22 20:47, Michal Hocko wrote:
> On Wed 21-09-16 12:19:53, zijun_hu wrote:
>> From: zijun_hu
>>
>> endless loop maybe happen if either of parameter addr and end is not
>> page aligned for kernel API function ioremap_page_range()
>
> Does this happen in
On 09/21/2016 12:23 PM, zijun_hu wrote:
> From: zijun_hu
>
> correct a few logic error for __insert_vmap_area() since the else
> if condition is always true and meaningless
>
> in order to fix this issue, if vmap_area inserted is lower than one
> on rbtree then walk a
On 09/22/2016 08:35 AM, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>> On 2016/9/22 5:21, David Rientjes wrote:
>>> On Wed, 21 Sep 2016, zijun_hu wrote:
>>>
>>>> From: zijun_hu
>>>>
>>>> correct lazy_max_pa
On 2016/9/22 7:15, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>> We don't support inserting when va->va_start == tmp_va->va_end, plain and
>>> simple. There's no reason to do so. NACK to the patch.
>>>
>> i am sorry
On 2016/9/22 5:21, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> From: zijun_hu
>>
>> correct lazy_max_pages() return value if the number of online
>> CPUs is power of 2
>>
>> Signed-off-by: zijun_hu
>> ---
>> mm/vma
On 2016/9/22 5:16, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index cc6ecd6..a125ae8 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -2576,32 +2576,13 @@ void pcpu_free_vm_areas(st
On 2016/9/22 6:45, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>>> correct a few logic error for __insert_vmap_area() since the else
>>>> if condition is always true and meaningless
>>>>
>>>> in order to fix this issue,
On 2016/9/22 5:10, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> From: zijun_hu
>>
>> correct a few logic error for __insert_vmap_area() since the else
>> if condition is always true and meaningless
>>
>> in order to fix this issu
On 09/20/2016 01:49 PM, zijun_hu wrote:
> From: zijun_hu
>
> for ioremap_page_range(), endless loop maybe happen if either of parameter
> addr and end is not page aligned, in order to fix this issue and hint range
> parameter requirements BUG_ON() checkup are performed f
Hi All,
please ignore this patch
as advised by Nicholas Piggin, i have split this patch into smaller patches
and resent them in another mail thread
On 09/20/2016 02:02 PM, zijun_hu wrote:
> From: zijun_hu
>
> correct a few logic error in __insert_vmap_area() since the else if
> conditi
From: zijun_hu
fix the following bug:
- an endless loop may happen when v[un]mapping improper ranges
where either boundary is not page aligned
Signed-off-by: zijun_hu
---
mm/vmalloc.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm
From: zijun_hu
improve the performance of pcpu_get_vm_areas() in the aspects below
- halve the iteration count of the vmap_area overlap checkup loop
- find the previous or next vmap_area via list_head rather than the rbtree
Signed-off-by: zijun_hu
---
include/linux/list.h | 11 +++
mm/internal.h
From: zijun_hu
correct the lazy_max_pages() return value if the number of online
CPUs is a power of 2
Signed-off-by: zijun_hu
---
mm/vmalloc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a125ae8..2804224 100644
--- a/mm/vmalloc.c
+++ b/mm
From: zijun_hu
simplify /proc/vmallocinfo implementation via seq_file helpers
for list_head
Signed-off-by: zijun_hu
---
mm/vmalloc.c | 27 +--
1 file changed, 5 insertions(+), 22 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index cc6ecd6..a125ae8 100644
--- a
From: zijun_hu
correct a few logic errors in __insert_vmap_area(), since the else
if condition is always true and meaningless
in order to fix this issue, if the inserted vmap_area is lower than the one
on the rbtree then walk down the left branch; if higher, then the right
branch; otherwise it intersects with the
From: zijun_hu
an endless loop may happen if either of the parameters addr and end is not
page aligned for the kernel API function ioremap_page_range()
in order to fix this issue and alert the user to improper range parameters,
a WARN_ON() check and rounding down of the range's lower boundary are
performed firstly
From: zijun_hu
canonicalize macro PAGE_ALIGNED() definition
Signed-off-by: zijun_hu
---
include/linux/mm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef815b9..ec68186 100644
--- a/include/linux/mm.h
+++ b/include/linux
On 09/20/2016 02:54 PM, Nicholas Piggin wrote:
> On Tue, 20 Sep 2016 14:02:26 +0800
> zijun_hu wrote:
>
>> From: zijun_hu
>>
>> correct a few logic error in __insert_vmap_area() since the else if
>> condition is always true and meaningless
>>
>>
From: zijun_hu
correct a few logic errors in __insert_vmap_area(), since the else if
condition is always true and meaningless
avoid an endless loop when [un]mapping improper ranges whose boundaries
are not page aligned
correct the lazy_max_pages() return value if the number of online cpus
is a power of
From: zijun_hu
for ioremap_page_range(), an endless loop may happen if either of the
parameters addr and end is not page aligned; in order to fix this issue and
hint at the range parameter requirements, BUG_ON() checks are performed first
for ioremap_pte_range(), the loop end condition is optimized due to
On 09/03/2016 08:15 PM, Dmitry Vyukov wrote:
> Hello,
>
> While running syzkaller fuzzer I've got the following GPF:
>
> general protection fault: [#1] SMP DEBUG_PAGEALLOC KASAN
> Dumping ftrace buffer:
>(ftrace buffer empty)
> Modules linked in:
> CPU: 2 PID: 4268 Comm: syz-executor Not
On 09/01/2016 07:21 PM, Mark Rutland wrote:
> On Thu, Sep 01, 2016 at 06:58:29PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> regard FDT_SW_MAGIC as good fdt magic during mapping fdt area
>> see fdt_check_header() for details
>
> It looks like we should only se
From: zijun_hu
regard FDT_SW_MAGIC as a good fdt magic while mapping the fdt area;
see fdt_check_header() for details
Signed-off-by: zijun_hu
---
arch/arm64/mm/mmu.c | 3 ++-
scripts/dtc/libfdt/fdt.h | 3 ++-
scripts/dtc/libfdt/libfdt.h | 2 ++
scripts/dtc
From: zijun_hu
remove duplicate macro __KERNEL__ check
Signed-off-by: zijun_hu
---
arch/arm64/include/asm/processor.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm64/include/asm/processor.h
b/arch/arm64/include/asm/processor.h
index ace0a96e7d6e..df2e53d3a969 100644
--- a
i am sorry, this patch has many bugs
i resend it in another mail thread
please ignore it
On 2016/8/27 23:27, zijun_hu wrote:
> From: zijun_hu
>
> this patch fixes the following bugs:
>
> - no bootmem is implemented by memblock currently, but config option
>CONFIG_NO_BOOT
i am sorry, this patch has many bugs
i resend it in another mail thread
please ignore it
On 2016/8/27 23:35, zijun_hu wrote:
> From: zijun_hu
>
> in ___alloc_bootmem_node_nopanic(), substitute kzalloc_node()
> for kzalloc() in order to allocate memory within given node
> pref
i am sorry, this patch has many bugs
i resend it in another mail thread
please ignore it
On 2016/8/28 15:48, kbuild test robot wrote:
> Hi zijun_hu,
>
> [auto build test ERROR on mmotm/master]
> [also build test ERROR on v4.8-rc3 next-20160825]
> [if your patch is applied to the
From: zijun_hu
in ___alloc_bootmem_node_nopanic(), replace kzalloc() with
kzalloc_node() in order to preferentially allocate memory within the
given node when slab is available
Signed-off-by: zijun_hu
---
mm/bootmem.c | 14 ++
1 file changed, 2 insertions(+), 12 deletions(-)
diff --git
From: zijun_hu
this patch fixes the following bugs:
- the same ARCH_LOW_ADDRESS_LIMIT statements are duplicated between the
header and the relevant source
- it isn't ensured that an ARCH_LOW_ADDRESS_LIMIT possibly defined by the
ARCH in asm/processor.h is preferred over the default in linux/bootmem.h
compl
From: zijun_hu
in ___alloc_bootmem_node_nopanic(), substitute kzalloc_node()
for kzalloc() in order to preferentially allocate memory within the given
node when slab is available
free_all_bootmem_core() is optimized to make the first two parameters
of __free_pages_bootmem() look consistent with
From: zijun_hu
this patch fixes the following bugs:
- no-bootmem is implemented by memblock currently, but the config option
CONFIG_NO_BOOTMEM doesn't depend on CONFIG_HAVE_MEMBLOCK
- the same ARCH_LOW_ADDRESS_LIMIT statements are duplicated between the
header and the relevant source
-
From: zijun_hu
it causes a double alignment requirement for __get_vm_area_node() if
parameter size is a power of 2 and VM_IOREMAP is set in parameter flags,
for example size=0x10000 -> fls_long(0x10000)=17 -> align=0x20000
get_count_order_long() is implemented and used instead of fls_long() for
On 08/18/2016 05:01 PM, Peter Zijlstra wrote:
> On Thu, Aug 18, 2016 at 04:19:10PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> for LP64 ABI, struct rb_node aligns at 8 bytes boundary due to
>> sizeof(long) == 8 normally, so 0x07 should be used to extract
>&g
From: zijun_hu
for the LP64 ABI, struct rb_node is aligned on an 8-byte boundary since
sizeof(long) == 8 normally, so 0x07 should be used to extract a
node's parent rather than 0x03
the mask is corrected based on the normal alignment of struct rb_node
macros are introduced to replace the hard-coded number
On 2016/8/18 8:28, Al Viro wrote:
> On Thu, Aug 18, 2016 at 08:10:19AM +0800, zijun_hu wrote:
>
>> Documentation/kbuild/makefiles.txt:
>> The kernel includes a set of headers that is exported to userspace.
>> Many headers can be exported as-is but other headers require a
&
On 2016/8/18 7:59, Al Viro wrote:
> On Thu, Aug 18, 2016 at 07:51:19AM +0800, zijun_hu wrote:
>>> What the hell is anything without __KERNEL__ doing with linux/bitops.h in
>>> the first place? IOW, why do we have those ifdefs at all?
>>>
>>
>> __KERNEL__