Thanks Johannes!!
On 4/15/2021 8:12 PM, Johannes Weiner wrote:
> Makes sense, it's more graceful in the event of a bug.
>
> But what motivates this change? Is it something you hit recently with
> an upstream kernel and we should investigate?
We didn't specifically hit the issue around this change
Thanks Vlastimil for the review comments!!
On 2/19/2021 4:56 PM, Vlastimil Babka wrote:
> Can you share the use case for doing this? If it's to replace a failed RAM,
> then
> it's probably extremely rare, right.
>
>> We have tried the proof-of-concept code on the Snapdragon systems with
>> the s
Thanks David for the review comments!!
On 2/18/2021 11:46 PM, David Hildenbrand wrote:
>> I would like to start a discussion about balancing the occupancy of
>> memory zones in a node in the system, whose imbalance may be caused by
>> migration of pages to other zones during hotremove and then hotadd
On 2/6/2021 3:58 AM, David Rientjes wrote:
>> In the code, when COMPACT_SKIPPED is being returned, the page will
>> always be NULL. So, I'm not sure how useful the page == NULL check is
>> here. Or have I failed to understand your point here?
>>
> Your code is short-circuiting the rest of _
Thanks David for the review!!
On 2/2/2021 2:54 AM, David Rientjes wrote:
> On Mon, 1 Feb 2021, Charan Teja Reddy wrote:
>
>> By defination, COMPACT[STALL|FAIL] events needs to be counted when there
>
> s/defination/definition/
Done.
>
>> is 'At least in one zone compaction wasn't deferred or
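For readers skimming the archive: the shape of the fix that came out of this thread, as I understand it, is to bail out of __alloc_pages_direct_compact() on COMPACT_SKIPPED before any stall event is counted. A minimal sketch of the idea, not the exact upstream diff:

	*compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags,
					       ac, prio, &page);

	/*
	 * Compaction was deferred or skipped in every zone, i.e. it was
	 * never actually attempted: returning here means neither
	 * COMPACTSTALL nor COMPACTFAIL is counted for this call.
	 */
	if (*compact_result == COMPACT_SKIPPED)
		return NULL;

	count_vm_event(COMPACTSTALL);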
On 1/25/2021 4:24 AM, David Rientjes wrote:
> On Wed, 20 Jan 2021, Vlastimil Babka wrote:
>
>> On 1/19/21 8:26 PM, David Rientjes wrote:
>>> On Mon, 18 Jan 2021, Charan Teja Reddy wrote:
>>>
should_proactive_compact_node() returns true when the sum of the
weighted fragmentation score of a
Thanks Vlastimil!!
On 1/18/2021 6:07 PM, Vlastimil Babka wrote:
> On 1/18/21 1:20 PM, Charan Teja Reddy wrote:
>> should_proactive_compact_node() returns true when the sum of the
>> weighted fragmentation score of all the zones in the node is greater
>> than the wmark_high of compaction, which then tr
Thank you Vlastimil!!
On 1/15/2021 6:15 PM, Vlastimil Babka wrote:
> On 1/13/21 3:03 PM, Charan Teja Reddy wrote:
>> should_proactive_compact_node() returns true when the sum of the
>> fragmentation score of all the zones in the node is greater than the
>> wmark_high of compaction, which then triggers
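For context, the check being discussed looks roughly like the following; this is a simplified sketch of the mm/compaction.c logic, not the exact source:

	static bool should_proactive_compact_node(pg_data_t *pgdat)
	{
		int wmark_high;

		if (!sysctl_compaction_proactiveness || kswapd_is_running(pgdat))
			return false;

		/*
		 * fragmentation_score_node() sums the per-zone scores, each
		 * weighted by the zone's share of the node's managed pages.
		 */
		wmark_high = fragmentation_score_wmark(pgdat, false);
		return fragmentation_score_node(pgdat) > wmark_high;
	}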
Thanks Michal!!
On 11/26/2020 2:48 PM, Michal Hocko wrote:
> On Wed 25-11-20 16:18:06, Charan Teja Kalla wrote:
>>
>>
>> On 11/24/2020 1:11 PM, Michal Hocko wrote:
>>> On Mon 23-11-20 20:40:40, Charan Teja Kalla wrote:
>>>>
>>>> Thanks
Thanks Vlastimil!
On 11/24/2020 7:09 PM, Vlastimil Babka wrote:
> On 11/23/20 4:10 PM, Charan Teja Kalla wrote:
>>
>> Thanks Michal!
>> On 11/23/2020 7:43 PM, Michal Hocko wrote:
>>> On Mon 23-11-20 19:33:16, Charan Teja Reddy wrote:
>>>> When the pages
On 11/24/2020 1:11 PM, Michal Hocko wrote:
> On Mon 23-11-20 20:40:40, Charan Teja Kalla wrote:
>>
>> Thanks Michal!
>> On 11/23/2020 7:43 PM, Michal Hocko wrote:
>>> On Mon 23-11-20 19:33:16, Charan Teja Reddy wrote:
>>>> When the pages fail
Thanks Michal!
On 11/23/2020 7:43 PM, Michal Hocko wrote:
> On Mon 23-11-20 19:33:16, Charan Teja Reddy wrote:
>> When the pages fail to get isolated or migrated, the page owner
>> information along with the page info is dumped. If there are continuous
>> failures in migration (say the page is pinned)
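The direction this thread moved in was to ratelimit the dump rather than drop it; a hedged sketch of how that could look in do_migrate_range(), simplified and not the exact patch:

	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	/*
	 * On repeated failures (e.g. a pinned page), dump the page and
	 * page_owner info only at the configured rate instead of flooding
	 * the log on every attempt.
	 */
	if (__ratelimit(&migrate_rs)) {
		pr_warn("migration failed for pfn %lx\n", page_to_pfn(page));
		dump_page(page, "migration failure");
	}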
On 9/23/2020 9:54 PM, Robin Murphy wrote:
> On 2020-09-23 15:53, Charan Teja Reddy wrote:
>> In of_iommu_xlate(), check if the iommu device is enabled before traversing
>> the iommu_device_list through iommu_ops_from_fwnode(). There is no point
>> in traversing the iommu_device_list only to return NO_
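The check amounts to bailing out early for a disabled node; a sketch of the shape, assuming the existing of_iommu_xlate() flow (abbreviated, NO_IOMMU per the driver's local convention):

	static int of_iommu_xlate(struct device *dev,
				  struct of_phandle_args *iommu_spec)
	{
		const struct iommu_ops *ops;
		struct fwnode_handle *fwnode = &iommu_spec->np->fwnode;

		/*
		 * A disabled IOMMU node can never have registered ops, so
		 * skip the iommu_device_list walk entirely.
		 */
		if (!of_device_is_available(iommu_spec->np))
			return NO_IOMMU;

		ops = iommu_ops_from_fwnode(fwnode);
		/* ... */
	}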
Thanks Michal.
On 8/13/2020 10:00 PM, Michal Hocko wrote:
> On Thu 13-08-20 21:51:29, Charan Teja Kalla wrote:
>> Thanks Michal for comments.
>>
>> On 8/13/2020 5:11 PM, Michal Hocko wrote:
>>> On Tue 11-08-20 18:28:23, Charan Teja Reddy wrote:
>>> [...]
Thanks Michal for comments.
On 8/13/2020 5:11 PM, Michal Hocko wrote:
> On Tue 11-08-20 18:28:23, Charan Teja Reddy wrote:
> [...]
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e4896e6..839039f 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1304,6 +1304,11 @@ static v
On 8/12/2020 3:30 PM, David Hildenbrand wrote:
> On 12.08.20 11:46, Charan Teja Kalla wrote:
>>
>> Thanks David for the inputs.
>>
>> On 8/12/2020 2:35 AM, David Hildenbrand wrote:
>>> On 11.08.20 14:58, Charan Teja Reddy wrote:
>>>> The fol
Thanks David for the inputs.
On 8/12/2020 2:35 AM, David Hildenbrand wrote:
> On 11.08.20 14:58, Charan Teja Reddy wrote:
>> The following race is observed with repeated online and offline of memory
>> blocks in the movable zone, with a delay between two successive onlines.
>>
>> P1
Thanks David for the comments.
On 8/11/2020 1:59 PM, David Hildenbrand wrote:
> On 10.08.20 18:10, Charan Teja Reddy wrote:
>> The following race is observed with repeated online and offline of memory
>> blocks in the movable zone, with a delay between two successive onlines.
>>
>> P1
Thanks David.
On 8/11/2020 1:06 AM, David Rientjes wrote:
> On Mon, 10 Aug 2020, Charan Teja Reddy wrote:
>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e4896e6..25e7e12 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -3106,6 +3106,7 @@ static void free_unref_page_com
Thanks David for the comments.
On 8/3/2020 1:35 PM, David Hildenbrand wrote:
> On 02.08.20 14:54, Charan Teja Reddy wrote:
>> When onlining the first memory block in a zone, the pcp lists are not
>> updated, thus the pcp struct will have the default setting of ->high = 0,
>> ->batch = 1. This means till the s
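The fix direction was to recompute the pcp parameters once the zone actually gains pages; a minimal sketch of the idea, assuming it hangs off online_pages() (not the exact diff):

	/*
	 * After onlining, the zone has pages, but if this was its first
	 * block the pcp lists still carry the boot-time defaults
	 * (->high = 0, ->batch = 1); recompute them from the new size.
	 */
	zone_pcp_update(zone);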
Thanks Mike for the inputs.
On 6/22/2020 5:10 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: charante=codeaurora@mg.codeaurora.org
>> On Behalf Of Charan Teja
>> Kalla
>> Sent: Monday, June 22, 2020 5:26 AM
>> To: Ruhl, Michael J ; Sumit
Hello Mike,
On 6/19/2020 7:11 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: charante=codeaurora@mg.codeaurora.org
>> On Behalf Of Charan Teja
>> Kalla
>> Sent: Friday, June 19, 2020 7:57 AM
>> To: Sumit Semwal ; Ruhl, Michael J
>>
There exists a sleep-while-atomic bug while accessing the dmabuf->name
under a mutex in dmabuffs_dname(). This is caused by the SELinux
permission checks on a process when it tries to validate the files
inherited from fork() by traversing them through iterate_fd() (which
traverses files under
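The direction this eventually took was to stop taking a mutex in the name-read path and protect dmabuf->name with a spinlock instead; a rough sketch assuming a dedicated name_lock, simplified from what landed upstream:

	static char *dmabuffs_dname(struct dentry *dentry, char *buffer,
				    int buflen)
	{
		struct dma_buf *dmabuf = dentry->d_fsdata;
		char name[DMA_BUF_NAME_LEN];
		ssize_t ret = 0;

		/*
		 * Spinlock, not a mutex: this path can be reached from
		 * atomic context (e.g. under iterate_fd()'s file lock).
		 */
		spin_lock(&dmabuf->name_lock);
		if (dmabuf->name)
			ret = strscpy(name, dmabuf->name, sizeof(name));
		spin_unlock(&dmabuf->name_lock);

		return dynamic_dname(dentry, buffer, buflen, "/%s:%s",
				     dentry->d_name.name, ret > 0 ? name : "");
	}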
On 6/17/2020 11:13 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: charante=codeaurora@mg.codeaurora.org
>> On Behalf Of Charan Teja
>> Kalla
>> Sent: Wednesday, June 17, 2020 2:29 AM
>> To: Ruhl, Michael J ; Sumit Semwal
>>
On 6/17/2020 1:51 PM, David Laight wrote:
> From: Charan Teja Kalla
>> Sent: 17 June 2020 07:29
> ...
>>>> If name is freed you will copy garbage, but the only way
>>>> for that to happen is that _set_name or _release have to be called
>>>> at ju
Thanks Michael for the comments.
On 6/16/2020 7:29 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: dri-devel On Behalf Of
>> Ruhl, Michael J
>> Sent: Tuesday, June 16, 2020 9:51 AM
>> To: Charan Teja Kalla ; Sumit Semwal
>> ; open list:D
Thanks Sumit for the fix.
On 6/11/2020 5:14 PM, Sumit Semwal wrote:
> Charan Teja reported a 'use-after-free' in dmabuffs_dname [1], which
> happens if dma_buf_release() is called while userspace is accessing
> the dma_buf pseudo fs's dmabuffs_dname() in another process, and
> dma_buf_rele
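For reference, the approach the series converged on was to tear the buffer down from the dentry rather than the file, so ->d_dname can never race with a half-freed dma_buf; schematically (simplified, not the exact patch):

	static void dma_buf_release(struct dentry *dentry)
	{
		struct dma_buf *dmabuf = dentry->d_fsdata;

		/*
		 * Runs only after the last reference to the dentry is gone,
		 * so no concurrent dmabuffs_dname() can still see dmabuf.
		 */
		dmabuf->ops->release(dmabuf);
		kfree(dmabuf);
	}

	static const struct dentry_operations dma_buf_dentry_ops = {
		.d_dname	= dmabuffs_dname,
		.d_release	= dma_buf_release,
	};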
Thanks Mel for feedback.
On 6/9/2020 5:58 PM, Mel Gorman wrote:
> On Tue, May 19, 2020 at 03:28:04PM +0530, Charan Teja Reddy wrote:
>> When boosting is enabled, it is observed that the rate of atomic order-0
>> allocation failures is high due to the fact that free levels in the
>> system are checked
When boosting is enabled, it is observed that the rate of atomic order-0
allocation failures is high due to the fact that free levels in the
system are checked with the ->watermark_boost offset. This is not a
problem for sleepable allocations, but for atomic allocations it looks
like a regression.
This p
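The proposal, roughly, is to let sleepable allocations keep paying the boost premium while atomic order-0 requests get a second check against the un-boosted watermark. A sketch close in spirit to what the thread discussed, placed in zone_watermark_fast() (simplified):

	/*
	 * An atomic order-0 request that failed the boosted check cannot
	 * sleep and retry, so give it one more try against the raw
	 * WMARK_MIN without the ->watermark_boost offset.
	 */
	if (unlikely(!order && (gfp_mask & __GFP_ATOMIC) &&
		     z->watermark_boost &&
		     ((alloc_flags & ALLOC_WMARK_MASK) == WMARK_MIN))) {
		mark = z->_watermark[WMARK_MIN];
		return __zone_watermark_ok(z, order, mark, highest_zoneidx,
					   alloc_flags, free_pages);
	}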
Adding more people to get additional reviewer input.
On 6/5/2020 3:13 AM, Andrew Morton wrote:
> On Tue, 19 May 2020 15:28:04 +0530, Charan Teja Reddy wrote:
>
>> When boosting is enabled, it is observed that the rate of atomic order-0
>> allocation failures is high due to the fact that free lev
Thank you Andrew for the comments.
On 5/20/2020 7:10 AM, Andrew Morton wrote:
> On Tue, 19 May 2020 15:28:04 +0530, Charan Teja Reddy wrote:
>
>> When boosting is enabled, it is observed that the rate of atomic order-0
>> allocation failures is high due to the fact that free levels in the
>> sys
Thank you for the reply.
On 5/13/2020 9:33 PM, Sumit Semwal wrote:
> On Wed, 13 May 2020 at 21:16, Daniel Vetter wrote:
>>
>> On Wed, May 13, 2020 at 02:51:12PM +0200, Greg KH wrote:
>>> On Wed, May 13, 2020 at 05:40:26PM +0530, Charan Teja Kalla wrote:
>>>>
Thank you Greg for the comments.
On 5/12/2020 2:22 PM, Greg KH wrote:
> On Fri, May 08, 2020 at 12:11:03PM +0530, Charan Teja Reddy wrote:
>> The following race occurs while accessing the dmabuf object exported as
>> file:
>> P1                     P2
>> dma_buf_release()      dmabuffs_
On 5/12/2020 7:01 PM, Charan Teja Kalla wrote:
>
> Thank you Andrew for the reply.
>
> On 5/12/2020 1:41 AM, Andrew Morton wrote:
>> On Mon, 11 May 2020 19:10:08 +0530, Charan Teja Reddy wrote:
>>
>>> Updating the zone watermarks by any means, like
Thank you Andrew for the reply.
On 5/12/2020 1:41 AM, Andrew Morton wrote:
> On Mon, 11 May 2020 19:10:08 +0530, Charan Teja Reddy wrote:
>
>> Updating the zone watermarks by any means, like extra_free_kbytes,
>> min_free_kbytes, watermark_scale_factor, etc., when watermark_boost is
>> set
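The underlying issue, as I read the thread, is that a watermark recomputation can otherwise inherit a stale boost; the shape of the fix is to clear it before the new values are written, roughly (in __setup_per_zone_wmarks(), simplified):

	for_each_zone(zone) {
		/* ... compute tmp from min_free_kbytes etc. ... */

		/*
		 * Any watermark update invalidates the previous boost;
		 * reset it so min/low/high start from a clean base.
		 */
		zone->watermark_boost = 0;
		zone->_watermark[WMARK_MIN] = tmp;

		/* ... WMARK_LOW / WMARK_HIGH follow from the scale factor ... */
	}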
Thank you Greg for the comments.
On 5/6/2020 2:30 PM, Greg KH wrote:
> On Wed, May 06, 2020 at 02:00:10PM +0530, Charan Teja Kalla wrote:
>> Thank you Greg for the reply.
>> On 5/5/2020 3:38 PM, Greg KH wrote:
>>> On Tue, Apr 28, 2020 at 01:24:02PM +0530, Charan Teja Reddy wrote:
>>>> The following race
Thank you Greg for the reply.
On 5/5/2020 3:38 PM, Greg KH wrote:
> On Tue, Apr 28, 2020 at 01:24:02PM +0530, Charan Teja Reddy wrote:
>> The following race occurs while accessing the dmabuf object exported as
>> file:
>> P1                     P2
>> dma_buf_release()      dmabuffs_dname()