On 14.01.25 17:06, Thomas Weißschuh wrote:
The selftest started failing since commit e93d2521b27f
("x86/vdso: Split virtual clock pages into dedicated mapping")
was merged. While debugging I stumbled upon some memory usage
optimizations.
With these, the test now runs on a VM with only 60MiB of memory.
The selftest started failing since commit e93d2521b27f
("x86/vdso: Split virtual clock pages into dedicated mapping")
was merged. While debugging I stumbled upon some memory usage
optimizations.
With these, the test now runs on a VM with only 60MiB of memory.
Signed-off-by: Thomas
On Thu, Jan 21, 2021 at 06:49:39PM +0530, Gautham Ananthakrishna wrote:
> We tested this patch set recently and found it limits negative dentries to a
> small part of total memory. The following is the test result we ran on two
> types of servers: one with 256G of memory and 24 CPUs, and another with 3T
For each memory cgroup, account its usage of top tier memory at the
time a top tier page is assigned to, and uncharged from, the cgroup.
Signed-off-by: Tim Chen
---
include/linux/memcontrol.h | 1 +
mm/memcontrol.c | 39 +-
2 files changed, 39 inser
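For illustration, a minimal sketch of the accounting idea described above, written from this description rather than from the patch itself; the mem_cgroup_example struct, the toptier_usage counter and the page_is_toptier() predicate are assumptions used only to show the shape of the change.

/*
 * Hedged sketch: keep a per-memcg counter of top tier pages, bumped when
 * a top tier page is charged to the cgroup and dropped when it is
 * uncharged.  page_is_toptier() is a placeholder predicate.
 */
#include <linux/atomic.h>
#include <linux/mm_types.h>

struct mem_cgroup_example {
	atomic_long_t toptier_usage;		/* in pages */
};

static inline void memcg_charge_toptier(struct mem_cgroup_example *memcg,
					struct page *page,
					unsigned int nr_pages)
{
	if (page_is_toptier(page))
		atomic_long_add(nr_pages, &memcg->toptier_usage);
}

static inline void memcg_uncharge_toptier(struct mem_cgroup_example *memcg,
					  struct page *page,
					  unsigned int nr_pages)
{
	if (page_is_toptier(page))
		atomic_long_sub(nr_pages, &memcg->toptier_usage);
}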
In memory cgroup's sysfs, report the memory cgroup's usage
of top tier memory in a new field: "toptier_usage_in_bytes".
Signed-off-by: Tim Chen
---
mm/memcontrol.c | 8
1 file changed, 8 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fe7bb8613f5a..68590f46fa76 10064
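For reference, a sketch of how such a read-only cgroup v1 file is usually wired up; it reuses the hypothetical counter from the sketch above, with example_memcg_from_css() standing in for the real mem_cgroup_from_css().

#include <linux/cgroup.h>

static u64 toptier_usage_read(struct cgroup_subsys_state *css,
			      struct cftype *cft)
{
	struct mem_cgroup_example *memcg = example_memcg_from_css(css);

	/* report pages as bytes, matching the *_usage_in_bytes convention */
	return (u64)atomic_long_read(&memcg->toptier_usage) * PAGE_SIZE;
}

/* entry to be added to the memcg v1 file table (mem_cgroup_legacy_files[]) */
static struct cftype toptier_usage_file = {
	.name = "toptier_usage_in_bytes",
	.read_u64 = toptier_usage_read,
};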
Ping? These patches are looking pretty good in our internal testing.
On Thu, Jan 21, 2021 at 06:49:39PM +0530, Gautham Ananthakrishna wrote:
> For most filesystems the result of every negative lookup is cached, and the
> content of directories is usually cached too. Production of negative dentries isn't
> li
For most filesystems the result of every negative lookup is cached, and the
content of directories is usually cached too. Production of negative dentries isn't
limited by disk speed. It's really easy to generate millions of them if
the system has enough memory.
Getting this memory back isn't that easy because s
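As a concrete illustration of how cheaply negative dentries pile up, a small userspace sketch (the path prefix and count are arbitrary): every failed lookup of a distinct name leaves a negative dentry behind, with no disk I/O to slow it down.

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;
	char name[64];

	for (long i = 0; i < 1000000; i++) {
		snprintf(name, sizeof(name), "/tmp/no-such-file-%ld", i);
		/* fails with ENOENT and caches a negative dentry */
		stat(name, &st);
	}
	return 0;
}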
The vmstat threshold is 32 (MEMCG_CHARGE_BATCH); actually the threshold
can be as big as MEMCG_CHARGE_BATCH * PAGE_SIZE, which still fits into an s32.
So introduce struct batched_lruvec_stat to optimize memory usage.
The size of struct lruvec_stat is 304 bytes on a 64-bit system. As it
is a per-cpu
On Wed, Dec 9, 2020 at 11:52 AM Roman Gushchin wrote:
>
> On Wed, Dec 09, 2020 at 10:31:55AM +0800, Muchun Song wrote:
> > On Wed, Dec 9, 2020 at 10:21 AM Roman Gushchin wrote:
> > >
> > > On Tue, Dec 08, 2020 at 05:51:32PM +0800, Muchun Song wrote:
> > > > The vmstat threshold is 32 (MEMCG_CHARG
On Wed, Dec 09, 2020 at 10:31:55AM +0800, Muchun Song wrote:
> On Wed, Dec 9, 2020 at 10:21 AM Roman Gushchin wrote:
> >
> > On Tue, Dec 08, 2020 at 05:51:32PM +0800, Muchun Song wrote:
> > > The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so the type of s32
> > > of lruvec_stat_cpu is enough.
A
On Wed, Dec 9, 2020 at 10:21 AM Roman Gushchin wrote:
>
> On Tue, Dec 08, 2020 at 05:51:32PM +0800, Muchun Song wrote:
> > The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so the type of s32
> > of lruvec_stat_cpu is enough. And introduce struct per_cpu_lruvec_stat
> >
On Tue, Dec 08, 2020 at 06:21:18PM -0800, Roman Gushchin wrote:
> On Tue, Dec 08, 2020 at 05:51:32PM +0800, Muchun Song wrote:
> > The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so the type of s32
> > of lruvec_stat_cpu is enough. And introduce struct per_cpu_lruvec_stat
> >
On Tue, Dec 08, 2020 at 05:51:32PM +0800, Muchun Song wrote:
> The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so the type of s32
> of lruvec_stat_cpu is enough. And introduce struct per_cpu_lruvec_stat
> to optimize memory usage.
>
> The size of struct lruvec_stat is 304 bytes on
On Tue, Dec 8, 2020 at 1:53 AM Muchun Song wrote:
>
> The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so the type of s32
> of lruvec_stat_cpu is enough. And introduce struct per_cpu_lruvec_stat
> to optimize memory usage.
>
> The size of struct lruvec_stat is 304 bytes on 64
The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so an s32 type for
lruvec_stat_cpu is enough. Introduce struct per_cpu_lruvec_stat
to optimize memory usage.
The size of struct lruvec_stat is 304 bytes on a 64-bit system, and it
is a per-cpu structure. So with this patch, we can save 304 / 2
2 (MEMCG_CHARGE_BATCH), so the type of s32
> > > > of lruvec_stat_cpu is enough. And introduce struct per_cpu_lruvec_stat
> > > > to optimize memory usage.
> > >
> > > How much savings are we talking about here? I am not deeply familiar
> > > with the
u is enough. And introduce struct per_cpu_lruvec_stat
> > > to optimize memory usage.
> >
> > How much savings are we talking about here? I am not deeply familiar
> > with the pcp allocator but can it compact smaller data types much
> > better?
>
> It is a percpu struct. The
On Mon, Dec 7, 2020 at 8:36 PM Michal Hocko wrote:
>
> On Sun 06-12-20 16:56:39, Muchun Song wrote:
> > The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so the type of s32
> > of lruvec_stat_cpu is enough. And introduce struct per_cpu_lruvec_stat
> > to optimize memory usa
On Sun 06-12-20 16:56:39, Muchun Song wrote:
> The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so the type of s32
> of lruvec_stat_cpu is enough. And introduce struct per_cpu_lruvec_stat
> to optimize memory usage.
How much savings are we talking about here? I am not deeply familiar
wit
The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so an s32 type for
lruvec_stat_cpu is enough. Introduce struct per_cpu_lruvec_stat
to optimize memory usage.
Signed-off-by: Muchun Song
---
include/linux/memcontrol.h | 6 +-
mm/memcontrol.c | 2 +-
2 files changed, 6
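The two structures involved, reconstructed from the description above (the name of the per-cpu variant differed between versions of the patch, per_cpu_lruvec_stat vs. batched_lruvec_stat): per-cpu deltas are folded back once they cross the MEMCG_CHARGE_BATCH threshold, so an s32 per counter is enough and roughly halves the per-cpu array.

/* existing stat block: long counters, 8 bytes each on 64-bit;
 * NR_VM_NODE_STAT_ITEMS comes from enum node_stat_item */
struct lruvec_stat {
	long count[NR_VM_NODE_STAT_ITEMS];
};

/* proposed per-cpu delta block: s32 (4 bytes) per counter suffices because
 * each delta is flushed into the long counters before it can overflow */
struct per_cpu_lruvec_stat {
	s32 count[NR_VM_NODE_STAT_ITEMS];
};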
In GAUDI we don't have an MMU towards the HBM device memory. Therefore,
the user accesses that memory directly through physical addresses (via the
different engines) without needing to go through the driver to
allocate/free memory on the HBM.
For system monitoring purposes, the driver will keep track
Committer: Peter Zijlstra
CommitterDate: Tue, 29 Sep 2020 09:56:59 +02:00
lockdep: Optimize the memory usage of circular queue
Qian Cai reported a BFS_EQUEUEFULL warning [1] after read recursive
deadlock detection was merged into the tip tree recently. Unlike the previous
lockdep graph searching, which
On Mon, Sep 28, 2020 at 05:47:38PM +0800, Boqun Feng wrote:
> I think your version is better and should be functionally identical to
> mine, also FWIW, I tested with a lockdep boot selftests, everything
> worked fine.
Excellent, thanks! I'll go feed it to the robots.
> > +*
> > +* Otherwise set @lock to NULL to fetch the next entry from
> > +* queue.
> > +*/
> > + if (lock->parent) {
> > + head = get_dep_list(lock->parent, offset);
> > +
*/
> + if (lock->parent) {
> + head = get_dep_list(lock->parent, offset);
> + lock = list_next_or_null_rcu(head, &lock->entry,
> + struct lock_list, entry);
> + } el
On Mon, Sep 28, 2020 at 10:08 AM Boqun Feng wrote:
>
> On Mon, Sep 28, 2020 at 10:03:19AM +0200, Dmitry Vyukov wrote:
> > On Thu, Sep 24, 2020 at 5:13 PM Boqun Feng wrote:
> > >
> > > Ping ;-)
> > >
> > > Regards,
> > > Boqun
> >
> > Hi Boqun,
> >
> > Peter says this may also fix this issue:
> >
On Mon, Sep 28, 2020 at 10:03:19AM +0200, Dmitry Vyukov wrote:
> On Thu, Sep 24, 2020 at 5:13 PM Boqun Feng wrote:
> >
> > Ping ;-)
> >
> > Regards,
> > Boqun
>
> Hi Boqun,
>
> Peter says this may also fix this issue:
> https://syzkaller.appspot.com/bug?extid=62ebe501c1ce9a91f68c
> please add th
On Thu, Sep 24, 2020 at 5:13 PM Boqun Feng wrote:
>
> Ping ;-)
>
> Regards,
> Boqun
Hi Boqun,
Peter says this may also fix this issue:
https://syzkaller.appspot.com/bug?extid=62ebe501c1ce9a91f68c
please add the following tag to the patch so that the bug report will
be closed on merge:
Reported-b
Ping ;-)
Regards,
Boqun
On Thu, Sep 17, 2020 at 04:01:50PM +0800, Boqun Feng wrote:
> Qian Cai reported a BFS_EQUEUEFULL warning [1] after read recursive
> deadlock detection was merged into the tip tree recently. Unlike the previous
> lockdep graph searching, which iterates every lock class (every node in
Qian Cai reported a BFS_EQUEUEFULL warning [1] after read recursive
deadlock detection was merged into the tip tree recently. Unlike the previous
lockdep graph searching, which iterates every lock class (every node in
the graph) exactly once, the graph searching for read recursive deadlock
detection needs to
um_info->idx = 2;
+ }
+
+ ret = fdt_setprop(fdt, node, "linux,usable-memory", um_info->buf,
+ (um_info->idx * sizeof(u64)));
+
+out:
+ of_node_put(dn);
+ return ret;
+}
+
+
+/**
+ * update_usable_mem_fdt - Updates kdump kernel
On 28/07/20 7:14 pm, Michael Ellerman wrote:
Hari Bathini writes:
diff --git a/arch/powerpc/kexec/file_load_64.c
b/arch/powerpc/kexec/file_load_64.c
index 2df6f4273ddd..8df085a22fd7 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -17,9 +17,21 @@
#
Hari Bathini writes:
> diff --git a/arch/powerpc/kexec/file_load_64.c
> b/arch/powerpc/kexec/file_load_64.c
> index 2df6f4273ddd..8df085a22fd7 100644
> --- a/arch/powerpc/kexec/file_load_64.c
> +++ b/arch/powerpc/kexec/file_load_64.c
> @@ -17,9 +17,21 @@
> #include
> #include
> #include
> +
Hari Bathini writes:
> Kdump kernel, used for capturing the kernel core image, is supposed
> to use only specific memory regions to avoid corrupting the image to
> be captured. The regions are crashkernel range - the memory reserved
> explicitly for kdump kernel, memory used for the tce-table,
uf[0] = 0;
+ um_info->buf[1] = 0;
+ um_info->idx = 2;
+ }
+
+ ret = fdt_setprop(fdt, node, "linux,usable-memory", um_info->buf,
+ (um_info->idx * sizeof(*(um_info->buf))));
+
+out:
+ kfree(pathname);
+
Hari Bathini writes:
> On 24/07/20 5:36 am, Thiago Jung Bauermann wrote:
>>
>> Hari Bathini writes:
>>
>>> Kdump kernel, used for capturing the kernel core image, is supposed
>>> to use only specific memory regions to avoid corrupting the image to
>>> be captured. The regions are crashkernel r
On 24/07/20 5:36 am, Thiago Jung Bauermann wrote:
>
> Hari Bathini writes:
>
>> Kdump kernel, used for capturing the kernel core image, is supposed
>> to use only specific memory regions to avoid corrupting the image to
>> be captured. The regions are crashkernel range - the memory reserved
>
Hari Bathini writes:
> Kdump kernel, used for capturing the kernel core image, is supposed
> to use only specific memory regions to avoid corrupting the image to
> be captured. The regions are crashkernel range - the memory reserved
> explicitly for kdump kernel, memory used for the tce-table,
>idx = 2;
+ }
+
+ ret = fdt_setprop(fdt, node, "linux,usable-memory", um_info->buf,
+ (um_info->idx * sizeof(*(um_info->buf))));
+
+out:
+ kfree(pathname);
+ of_node_put(dn);
+ return ret;
+}
+
+
+/**
+ * update_usable_mem_fd
Hi,
Not sure exactly where to direct this so I'm just hitting linux-mm and
linux-kernel and hoping someone sees it.
We're in the process of updating our embedded targets from Linux 4.4.x
to the latest kernel (currently v5.6 but we're planning to go to v5.7
and maybe v5.8 depending on timing).
On 17/07/20 3:33 am, Thiago Jung Bauermann wrote:
>
> Hari Bathini writes:
>
>> On 16/07/20 4:22 am, Thiago Jung Bauermann wrote:
>>>
>>> Hari Bathini writes:
>>>
+ * each representing a memory range.
+ */
+ ranges = (len >> 2) / (n_mem_addr_cells + n_mem_size_cells);
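A worked instance of that expression, with illustrative values: a device tree property length is in bytes and each cell is 4 bytes, so with the typical ppc64 values of n_mem_addr_cells = 2 and n_mem_size_cells = 2, a property of len = 64 bytes holds 64 >> 2 = 16 cells, and each (base, size) range needs 2 + 2 = 4 cells, giving ranges = 16 / 4 = 4.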
Hari Bathini writes:
> On 16/07/20 4:22 am, Thiago Jung Bauermann wrote:
>>
>> Hari Bathini writes:
>>
>
>
>
>>> +/**
>>> + * get_node_path - Get the full path of the given node.
>>> + * @dn: Node.
>>> + * @path: Updated with the full path of the node.
>>> + *
>>> + * Re
On 16/07/20 4:22 am, Thiago Jung Bauermann wrote:
>
> Hari Bathini writes:
>
>> +/**
>> + * get_node_path - Get the full path of the given node.
>> + * @dn: Node.
>> + * @path: Updated with the full path of the node.
>> + *
>> + * Returns nothing.
>> + */
>> +static voi
Hari Bathini writes:
> /**
> + * get_usable_memory_ranges - Get usable memory ranges. This list includes
> + *regions like crashkernel, opal/rtas &
> tce-table,
> + *that kdump kernel could use.
> + * @mem_ranges: Range lis
kimage *image, void
*fdt,
unsigned long initrd_load_addr,
unsigned long initrd_len, const char *cmdline)
{
+ struct crash_mem *umem = NULL;
int chosen_node, ret;
/* Remove memory reservation for the current device tree. */
@@
um_info->buf[1] = 0;
+ um_info->idx = 2;
+ }
+
+ ret = fdt_setprop(fdt, node, "linux,usable-memory", um_info->buf,
+ (um_info->idx * sizeof(*(um_info->buf))));
+
+out:
+ kfree(pathname);
+ return ret;
+}
+
+
>buf[0] = 0;
+ um_info->buf[1] = 0;
+ um_info->idx = 2;
+ }
+
+ ret = fdt_setprop(fdt, node, "linux,usable-memory", um_info->buf,
+ (um_info->idx * sizeof(*(um_info->buf))));
+
+out:
+ kfree(pathname);
+
with various
> kernel modules loaded.
Hmm... I find it difficult to judge in any direction. 32 bit machines are
smaller and have less of everything - including CPUs and workqueues
themselves, so while changing the configuration for 32bit systems would reduce
memory usage, the impact isn't gonna be
; >
> > Reducing the memory usage for the pool_workqueue is valuable.
> >
> > And 32bit system often has less memory, less workqueues,
> > less works, less concurrent flush_workqueue()s, so we can
> > slash the flush color on 32bit system to reduce memory usage
> >
On Mon, Jun 01, 2020 at 08:44:42AM +, Lai Jiangshan wrote:
> The major memory usage in workqueue is on the pool_workqueue.
> The pool_workqueue has alignment requirement which often leads
> to padding.
>
> Reducing the memory usage for the pool_workqueue is valuable.
>
The major memory usage in workqueue is on the pool_workqueue.
The pool_workqueue has an alignment requirement which often leads
to padding.
Reducing the memory usage for the pool_workqueue is valuable.
And a 32bit system often has less memory, fewer workqueues,
fewer work items, fewer concurrent
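A self-contained illustration of the padding effect (generic C, not the real pool_workqueue): when the low bits of the object's address are reused as flag bits, the struct must be aligned to a large power of two, and sizeof() is rounded up to that alignment, so exceeding a boundary by even a few bytes costs a whole extra slot per object.

#include <stdio.h>

/* illustrative only: 256-byte alignment, as if 8 low pointer bits were flags */
struct padded_example {
	char payload[260];			/* just over one slot */
} __attribute__((aligned(256)));

struct trimmed_example {
	char payload[248];			/* fits in one slot */
} __attribute__((aligned(256)));

int main(void)
{
	printf("padded : %zu bytes\n", sizeof(struct padded_example));	/* 512 */
	printf("trimmed: %zu bytes\n", sizeof(struct trimmed_example));	/* 256 */
	return 0;
}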
The parse events parser leaks memory for certain expressions, as well as
allowing a char* to reference stack, heap or .rodata. This series of patches
improves the hygiene and adds freeing operations to reclaim memory in
the parser in error and non-error situations.
The series of patches was genera
From: Jian-Hong Pan
commit ee6db78f5db9bfe426c57a1ec9713827ebccd2d4 upstream.
Testing with RTL8822BE hardware, when available memory is low, we
frequently see a kernel panic and system freeze.
First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
rx routine starvation
WARNING
On 8/27/19 3:43 AM, Michal Hocko wrote:
If there are no objections to the patch I will post it as a standalone
one.
On Mon 26-08-19 12:55:21, Michal Hocko wrote:
From 59d128214a62bf2d83c2a2a9cde887b4817275e7 Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Mon, 26 Aug 2019 12:43:15 +0200
S
On Tue 27-08-19 20:19:34, Yafang Shao wrote:
> On Tue, Aug 27, 2019 at 8:03 PM Michal Hocko wrote:
> >
> > On Tue 27-08-19 19:56:16, Yafang Shao wrote:
> > > On Tue, Aug 27, 2019 at 7:50 PM Michal Hocko wrote:
> > > >
> > > > On Tue 27-08-19 19:43:49, Yafang Shao wrote:
> > > > > On Tue, Aug 27,
On Tue, Aug 27, 2019 at 8:03 PM Michal Hocko wrote:
>
> On Tue 27-08-19 19:56:16, Yafang Shao wrote:
> > On Tue, Aug 27, 2019 at 7:50 PM Michal Hocko wrote:
> > >
> > > On Tue 27-08-19 19:43:49, Yafang Shao wrote:
> > > > On Tue, Aug 27, 2019 at 6:43 PM Michal Hocko wrote:
> > > > >
> > > > > If
On Tue 27-08-19 19:56:16, Yafang Shao wrote:
> On Tue, Aug 27, 2019 at 7:50 PM Michal Hocko wrote:
> >
> > On Tue 27-08-19 19:43:49, Yafang Shao wrote:
> > > On Tue, Aug 27, 2019 at 6:43 PM Michal Hocko wrote:
> > > >
> > > > If there are no objection to the patch I will post it as a standalong
>
On Tue, Aug 27, 2019 at 7:50 PM Michal Hocko wrote:
>
> On Tue 27-08-19 19:43:49, Yafang Shao wrote:
> > On Tue, Aug 27, 2019 at 6:43 PM Michal Hocko wrote:
> > >
> > > If there are no objection to the patch I will post it as a standalong
> > > one.
> >
> > I have no objection to your patch. It c
On Tue 27-08-19 19:43:49, Yafang Shao wrote:
> On Tue, Aug 27, 2019 at 6:43 PM Michal Hocko wrote:
> >
> > If there are no objection to the patch I will post it as a standalong
> > one.
>
> I have no objection to your patch. It could fix the issue.
>
> I still think that it is not proper to use
On Tue, Aug 27, 2019 at 6:43 PM Michal Hocko wrote:
>
> If there are no objection to the patch I will post it as a standalong
> one.
I have no objection to your patch. It could fix the issue.
I still think that it is not proper to use a new scan_control here as
it breaks the global reclaim conte
If there are no objections to the patch I will post it as a standalone
one.
On Mon 26-08-19 12:55:21, Michal Hocko wrote:
> From 59d128214a62bf2d83c2a2a9cde887b4817275e7 Mon Sep 17 00:00:00 2001
> From: Michal Hocko
> Date: Mon, 26 Aug 2019 12:43:15 +0200
> Subject: [PATCH] mm, memcg: do not set r
On Fri 23-08-19 18:03:01, Yang Shi wrote:
>
>
> On 8/23/19 3:00 PM, Adric Blake wrote:
> > Synopsis:
> > A WARN_ON_ONCE is hit twice in set_task_reclaim_state under the
> > following conditions:
> > - a memory cgroup has been created and a task assigned it it
> > - memory.limit_in_bytes has been
On Sat, Aug 24, 2019 at 10:57 AM Hillf Danton wrote:
>
>
> On Fri, 23 Aug 2019 18:00:15 -0400 Adric Blake wrote:
> > Synopsis:
> > A WARN_ON_ONCE is hit twice in set_task_reclaim_state under the
> > following conditions:
> > - a memory cgroup has been created and a task assigned it it
> > - memory
On 8/23/19 3:00 PM, Adric Blake wrote:
Synopsis:
A WARN_ON_ONCE is hit twice in set_task_reclaim_state under the
following conditions:
- a memory cgroup has been created and a task assigned to it
- memory.limit_in_bytes has been set
- memory has filled up, likely from cache
In my usage, I cre
Synopsis:
A WARN_ON_ONCE is hit twice in set_task_reclaim_state under the
following conditions:
- a memory cgroup has been created and a task assigned to it
- memory.limit_in_bytes has been set
- memory has filled up, likely from cache
In my usage, I create a cgroup under the current session scope
Hi all,
I realize this already is merged, and it had some previous review
comments that led to the decisions in this patch, but I'd still like
to ask here, where I think I'm reaching the relevant parties:
On Wed, Jul 10, 2019 at 1:43 AM Jian-Hong Pan wrote:
...
> This patch allocates a new, data
Direct page reclamation and compaction have high and unpredictable
latency costs for applications. This patch adds code to predict if the
system is about to run out of free memory by watching historical
memory consumption trends. It computes a best-fit line to this
historical data using the method of l
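A minimal sketch of the least-squares idea as described (illustrative only, not the posted patch): fit free memory against time with an ordinary least-squares line and extrapolate to the point where it reaches zero.

#include <stdio.h>

/* Fit y = a + b*x over n samples and return the x at which y hits zero,
 * or a negative value if free memory is not trending downward. */
static double predict_exhaustion(const double *t, const double *free_mem, int n)
{
	double sx = 0, sy = 0, sxx = 0, sxy = 0;

	for (int i = 0; i < n; i++) {
		sx  += t[i];
		sy  += free_mem[i];
		sxx += t[i] * t[i];
		sxy += t[i] * free_mem[i];
	}

	double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);	/* slope */
	double a = (sy - b * sx) / n;				/* intercept */

	return b < 0 ? -a / b : -1.0;
}

int main(void)
{
	double t[] = { 0, 1, 2, 3, 4 };			/* sample times */
	double f[] = { 1000, 900, 790, 710, 600 };	/* MB free, trending down */

	printf("predicted exhaustion at t=%.1f\n", predict_exhaustion(t, f, 5));
	return 0;
}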
y: Jian-Hong Pan
> Cc:
2 patches applied to wireless-drivers-next.git, thanks.
ee6db78f5db9 rtw88: pci: Rearrange the memory usage for skb in RX ISR
29b68a920f6a rtw88: pci: Use DMA sync instead of remapping in RX ISR
--
https://patchwork.kernel.org/patch/11039275/
https://wireless.wiki.kernel.org/en/developers/documentation/submittingpatches
On Thu, Jul 11, 2019 at 1:28 PM, Jian-Hong Pan wrote:
>
> On Thu, Jul 11, 2019 at 1:25 PM, Jian-Hong Pan wrote:
> >
> > Testing with RTL8822BE hardware, when available memory is low, we
> > frequently see a kernel panic and system freeze.
> >
> > First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
> >
>
On Mon, Jul 22, 2019 at 11:24:39AM -0700, Bart Van Assche wrote:
> Hi Peter,
>
> An unfortunate side effect of commit 669de8bda87b ("kernel/workqueue: Use
> dynamic lockdep keys for workqueues") is that all stack traces associated
> with the lockdep key are leaked when a workqueue is destroyed. Fi
Hi Peter,
An unfortunate side effect of commit 669de8bda87b ("kernel/workqueue: Use
dynamic lockdep keys for workqueues") is that all stack traces associated
with the lockdep key are leaked when a workqueue is destroyed. Fix this by
storing each unique stack trace once. Please consider this patch
On Thu, Jul 11, 2019 at 1:25 PM, Jian-Hong Pan wrote:
>
> Testing with RTL8822BE hardware, when available memory is low, we
> frequently see a kernel panic and system freeze.
>
> First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
>
> rx routine starvation
> WARNING: CPU: 7 PID: 9871 at driv
Testing with RTL8822BE hardware, when available memory is low, we
frequently see a kernel panic and system freeze.
First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
rx routine starvation
WARNING: CPU: 7 PID: 9871 at drivers/net/wireless/realtek/rtw88/pci.c:822
rtw_pci_rx_is
On Wed, Jul 10, 2019 at 4:57 PM, David Laight wrote:
>
> From: Jian-Hong Pan
> > Sent: 10 July 2019 09:38
> >
> > Testing with RTL8822BE hardware, when available memory is low, we
> > frequently see a kernel panic and system freeze.
> >
> > First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
From: Jian-Hong Pan
> Sent: 10 July 2019 09:38
>
> Testing with RTL8822BE hardware, when available memory is low, we
> frequently see a kernel panic and system freeze.
>
> First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
>
> rx routine starvation
> WARNING: CPU: 7 PID: 987
On Wed, Jul 10, 2019 at 4:37 PM, Tony Chuang wrote:
>
> > Subject: [PATCH v2 1/2] rtw88: pci: Rearrange the memory usage for skb in
> > RX ISR
> >
> > Testing with RTL8822BE hardware, when available memory is low, we
> > frequently see a kernel panic and system freeze.
> >
Testing with RTL8822BE hardware, when available memory is low, we
frequently see a kernel panic and system freeze.
First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
rx routine starvation
WARNING: CPU: 7 PID: 9871 at drivers/net/wireless/realtek/rtw88/pci.c:822
rtw_pci_rx_is
> Subject: [PATCH v2 1/2] rtw88: pci: Rearrange the memory usage for skb in
> RX ISR
>
> Testing with RTL8822BE hardware, when available memory is low, we
> frequently see a kernel panic and system freeze.
>
> First, rtw_pci_rx_isr encounters a memory allocation failu
Testing with RTL8822BE hardware, when available memory is low, we
frequently see a kernel panic and system freeze.
First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
rx routine starvation
WARNING: CPU: 7 PID: 9871 at drivers/net/wireless/realtek/rtw88/pci.c:822
rtw_pci_rx_is
On 7/8/19 1:32 AM, Jian-Hong Pan wrote:
diff --git a/drivers/net/wireless/realtek/rtw88/pci.c
b/drivers/net/wireless/realtek/rtw88/pci.c
index cfe05ba7280d..1bfc99ae6b84 100644
--- a/drivers/net/wireless/realtek/rtw88/pci.c
+++ b/drivers/net/wireless/realtek/rtw88/pci.c
@@ -786,6 +786,15 @@ stat
n one SKB.
> (Could probably enlarge it to RX VHT AMSDU ~11K)
If you allocate 8192+24 the memory allocated will be either 12k or 16k
and the skb truesize set appropriately.
(Probably 16k if dma memory.)
If this is fed into IP it is quite likely that a single byte of data
will end up queued on the socket in 16k of dma-able memory.
The 'truesize' stops this from using all the system memory, but it isn't
good for memory usage.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT,
UK
Registration No: 1397386 (Wales)
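A rough illustration of the arithmetic behind that point, assuming power-of-two kmalloc-style buckets: a request of 8192 + 24 = 8216 bytes no longer fits the 8 KiB bucket, so the allocation, and with it skb->truesize, is accounted at the next size up.

#include <stdio.h>

/* round a request up to the next power-of-two bucket, kmalloc-style */
static size_t pow2_bucket(size_t req)
{
	size_t b = 32;			/* smallest bucket, illustrative */

	while (b < req)
		b <<= 1;
	return b;
}

int main(void)
{
	size_t req = 8192 + 24;

	printf("request %zu -> bucket %zu\n", req, pow2_bucket(req)); /* 16384 */
	return 0;
}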
> > > @@ -803,25 +812,14 @@ static void rtw_pci_rx_isr(struct rtw_dev
> *rtwdev,
> > > struct rtw_pci *rtwpci,
> > > skb_put(skb, pkt_stat.pkt_len);
> > > skb_reserve(skb, pkt_offset);
> > >
> > > - /* alloc a smaller skb to mac80211 *
From: Jian-Hong Pan
> Sent: 08 July 2019 07:33
> To: Yan-Hsuan Chuang; Kalle Valo; David S . Miller
>
> Testing with RTL8822BE hardware, when available memory is low, we
> frequently see a kernel panic and system freeze.
>
> First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
On Mon, Jul 8, 2019 at 3:23 PM, Tony Chuang wrote:
>
> > Subject: [PATCH] rtw88/pci: Rearrange the memory usage for skb in RX ISR
>
> nit, "rtw88: pci:" would be better.
Ok.
> >
> > When skb allocation fails and the "rx routine starvation" is hit, the
> >
> Subject: [PATCH] rtw88/pci: Rearrange the memory usage for skb in RX ISR
nit, "rtw88: pci:" would be better.
>
>
> When skb allocation fails and the "rx routine starvation" is hit, the
> function returns immediately without updating the RX ring. At thi
Testing with RTL8822BE hardware, when available memory is low, we
frequently see a kernel panic and system freeze.
First, rtw_pci_rx_isr encounters a memory allocation failure (trimmed):
rx routine starvation
WARNING: CPU: 7 PID: 9871 at drivers/net/wireless/realtek/rtw88/pci.c:822
rtw_pci_rx_is
On Tue, Jul 02, 2019 at 10:37:15AM -0700, Numfor Mbiziwo-Tiapo wrote:
> Running the perf test command after building perf with a memory
> sanitizer causes a warning that says:
> WARNING: MemorySanitizer: use-of-uninitialized-value... in
> mmap-thread-lookup.c
> Initializing the go variable to 0
Running the perf test command after building perf with a memory
sanitizer causes a warning that says:
WARNING: MemorySanitizer: use-of-uninitialized-value... in mmap-thread-lookup.c
Initializing the go variable to 0 fixes this warning.
Signed-off-by: Numfor Mbiziwo-Tiapo
---
tools/perf/tests/mmap
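A minimal illustration of this class of fix, with invented names (the real code is in tools/perf/tests/mmap-thread-lookup.c): a flag that another thread reads before anything has been stored to it triggers a MemorySanitizer use-of-uninitialized-value report; giving it an initial value makes every read defined.

#include <pthread.h>
#include <stdio.h>

struct ctx {
	volatile int go;
};

static void *worker(void *arg)
{
	struct ctx *c = arg;

	while (!c->go)		/* reads a defined 0 until main sets it */
		;
	printf("worker released\n");
	return NULL;
}

int main(void)
{
	/* the fix: initialize before the worker can read it */
	struct ctx c = { .go = 0 };
	pthread_t tid;

	pthread_create(&tid, NULL, worker, &c);
	c.go = 1;
	pthread_join(tid, NULL);
	return 0;
}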
From: Michael Chan
[ Upstream commit d629522e1d66561f38e5c8d4f52bb6d254ec0707 ]
Skip RDMA context memory allocations, reduce to 1 ring, and disable
TPA when running in the kdump kernel. Without this patch, the driver
fails to initialize with memory allocation errors when running in a
typical kd
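A hedged sketch of the pattern described above (not the bnxt_en code): detect the kdump kernel with is_kdump_kernel() and scale the driver's footprint down; the adapter structure and its fields are illustrative assumptions.

#include <linux/crash_dump.h>
#include <linux/types.h>

struct example_adapter {
	unsigned int num_rx_rings;
	unsigned int num_tx_rings;
	bool tpa_enabled;
	bool alloc_rdma_ctx;
};

static void example_trim_for_kdump(struct example_adapter *adap)
{
	if (!is_kdump_kernel())
		return;

	adap->num_rx_rings = 1;		/* reduce to a single ring */
	adap->num_tx_rings = 1;
	adap->tpa_enabled = false;	/* disable TPA */
	adap->alloc_rdma_ctx = false;	/* skip RDMA context memory */
}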
On 3/14/2019 2:06 AM, Vishal Goel wrote:
This patch allows for small memory optimization by creating the
kmem cache for "struct smack_rule" instead of using kzalloc.
For adding new smack rule, kzalloc is used to allocate the memory
for "struct smack_rule". kzalloc will always allocate 32 or 64 by
From: Zi Yan
It prepares for the following patches to enable memcg-based NUMA
node page migration. We are going to limit memory usage in each node
on a per-memcg basis.
Signed-off-by: Zi Yan
---
include/linux/cgroup-defs.h | 1 +
include/linux/memcontrol.h | 67
This patch allows for a small memory optimization by creating a
kmem cache for "struct smack_rule" instead of using kzalloc.
When adding a new smack rule, kzalloc is used to allocate the memory
for "struct smack_rule". kzalloc will always allocate 32 or 64 bytes
for one structure depending upon the kzal
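A hedged sketch of the change described (not the actual Smack patch; the struct layout is illustrative): a dedicated slab cache is sized exactly to the object instead of letting kzalloc() round every allocation up to the next kmalloc bucket.

#include <linux/init.h>
#include <linux/list.h>
#include <linux/slab.h>

struct smack_rule_example {
	struct list_head list;
	void *smk_subject;
	void *smk_object;
	int smk_access;
};

static struct kmem_cache *smack_rule_cache;

static int __init smack_rule_cache_init(void)
{
	smack_rule_cache = kmem_cache_create("smack_rule",
					     sizeof(struct smack_rule_example),
					     0, 0, NULL);
	return smack_rule_cache ? 0 : -ENOMEM;
}

/* allocation sites then use
 *	rule = kmem_cache_zalloc(smack_rule_cache, GFP_KERNEL);
 * and free with
 *	kmem_cache_free(smack_rule_cache, rule);
 * instead of kzalloc()/kfree(). */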
3.18-stable review patch. If anyone has any objections, please let me know.
--
[ Upstream commit 721b1d98fb517ae99ab3b757021cf81db41e67be ]
kcopyd has no upper limit to the number of jobs one can allocate and
issue. Under certain workloads this can lead to excessive memory
4.9-stable review patch. If anyone has any objections, please let me know.
--
[ Upstream commit 721b1d98fb517ae99ab3b757021cf81db41e67be ]
kcopyd has no upper limit to the number of jobs one can allocate and
issue. Under certain workloads this can lead to excessive memory usage
4.19-stable review patch. If anyone has any objections, please let me know.
--
[ Upstream commit 721b1d98fb517ae99ab3b757021cf81db41e67be ]
kcopyd has no upper limit to the number of jobs one can allocate and
issue. Under certain workloads this can lead to excessive memory