Hi Michael,

On Friday 07 April 2017 07:16 PM, Michael Ellerman wrote:
> Hari Bathini <hbath...@linux.vnet.ibm.com> writes:
>> On Friday 07 April 2017 07:24 AM, Michael Ellerman wrote:
>>> My preference would be that the fadump kernel "just works". If it's
>>> using too much memory then the fadump kernel should do whatever it needs
>>> to use less memory, eg. shrinking nr_cpu_ids etc.
>>> Do we actually know *why* the fadump kernel is running out of memory?
>>> Obviously large numbers of CPUs is one of the main drivers (lots of
>>> stacks required). But other than that what is causing the memory
>>> pressure? I would like some data on that before we proceed.
>> Almost the same amount of memory in comparison with the memory
>> required to boot the production kernel but that is unwarranted for fadump
>> (dump capture) kernel.
> That's not data! :)

I am collating the data. Sorry! I should have mentioned it :)

> The dump kernel is booted with *much* less memory than the production
> kernel (that's the whole issue!) and so it doesn't need to create struct
> pages for all that memory, which means it should need less memory.
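Agreed on that part. To put a rough number on the struct-page point, here is a back-of-envelope sketch (assuming a 64-byte struct page and 64KB pages, typical for ppc64 — both figures are assumptions here, not measurements from either system):

```python
# Rough memmap (struct page) overhead: one struct page per page of RAM.
# The 64-byte struct page and 64KB page size are assumed, not measured.
STRUCT_PAGE_BYTES = 64
PAGE_SIZE = 64 * 1024

def memmap_overhead(mem_bytes):
    """Bytes consumed by struct pages for mem_bytes of RAM."""
    return (mem_bytes // PAGE_SIZE) * STRUCT_PAGE_BYTES

GiB = 1024 ** 3
print(memmap_overhead(256 * GiB) // (1024 * 1024))  # 256 (MB) on a 256GB host
print(memmap_overhead(1 * GiB) // (1024 * 1024))    # 1 (MB) in a 1GB dump kernel
```

So the memmap cost does shrink in proportion to the smaller memory the dump kernel sees.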

What I meant was: if we boot the production kernel with mem=X, where X is the
smallest value that boots the kernel without hitting an OOM, fadump needed
nearly that same amount reserved in order to capture a dump without hitting an
OOM. But this was an observation on a system without much memory. I will try
on a system with large memory and report back with data..


> The vfs caches are also sized based on the available memory, so they
> should also shrink in the dump kernel.
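Right — the boot-time hashes (dentry, inode, etc.) are sized from the pages present via alloc_large_system_hash(), so they should scale down too. A toy model of that proportionality (the 16KB-per-entry scale factor is invented for illustration; the kernel's actual formula differs):

```python
# Toy model of a boot-time hash table sized from available memory, in
# the spirit of the kernel's alloc_large_system_hash(): entry count is
# proportional to RAM, rounded down to a power of two. The scale factor
# below is invented for illustration, not taken from the kernel.
def hash_entries(mem_bytes, bytes_per_entry_scale=16 * 1024):
    n = max(1, mem_bytes // bytes_per_entry_scale)
    return 1 << (n.bit_length() - 1)  # round down to a power of two

GiB = 1024 ** 3
print(hash_entries(256 * GiB))  # 16777216 entries on a 256GB host
print(hash_entries(1 * GiB))    # 65536 entries in a 1GB dump kernel
```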

> I want some actual numbers on what's driving the memory usage.

> I tried some of these parameters to see how much memory they would save:

>> So, if parameters like
>> cgroup_disable=memory,
> 0 bytes saved.

Interesting.. was CONFIG_MEMCG enabled in the kernel config?


>> transparent_hugepages=never,
> 0 bytes saved.

Not surprising, unless transparent hugepages were actually in use.

>> numa=off,
> 64KB saved.

In the memory-starved dump capture environment, every byte counts, I guess :)
Also, wouldn't this depend on the NUMA config?

>> nr_cpus=1,
> 3MB saved (vs 16 CPUs)
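Putting your numbers together (the figures are copied from above; the per-CPU figure is a derived estimate, not a measurement):

```python
# Savings reported in this thread, in bytes.
savings = {
    "cgroup_disable=memory": 0,
    "transparent_hugepages=never": 0,
    "numa=off": 64 * 1024,
    "nr_cpus=1": 3 * 1024 * 1024,  # vs 16 CPUs
}
total_kb = sum(savings.values()) // 1024
per_cpu_kb = savings["nr_cpus=1"] // 15 // 1024  # 15 CPUs' worth of state removed
print(total_kb)    # 3136 KB saved in total
print(per_cpu_kb)  # ~204 KB per additional CPU
```

So on that 16-CPU box the command-line options add up to only a few MB, with nr_cpus=1 dominating.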


> Now maybe on your system those do save memory for some reason, but
> please prove it to me. Otherwise I'm inclined to merge:
>
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index 8ff0dd4e77a7..03f1f253c372 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -79,8 +79,10 @@ int __init early_init_dt_scan_fw_dump(unsigned long node,
>          * dump data waiting for us.
>          */
>         fdm_active = of_get_flat_dt_prop(node, "ibm,kernel-dump", NULL);
> -       if (fdm_active)
> +       if (fdm_active) {
>                 fw_dump.dump_active = 1;
> +               nr_cpu_ids = 1;
> +       }
>
>         /* Get the sizes required to store dump data for the firmware provided
>          * dump sections.
Necessary but not sufficient is the point I am trying to make. Apparently it
was not convincing enough. I will try and come back with relevant data :)

Thanks
Hari
