Thank you for the clarification, Bjorn! I was on vacation... sorry for the delay.
Closing the loop here, I understand we're not getting this patch
merged (due to its restriction to domain 0) and there was a suggestion
in the thread of trying to block MSIs from the IOMMU init code (which
also have the
Just a heads-up on top of my own email: as soon as I sent the email, I
had an idea. I was trying with ramoops.ftrace_size=X, so I removed
this parameter and it worked. As expected, it's a small buffer so
collected mostly the ramoops own functions, but worked in the end -
I'll debug the reason it fa
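For reference, the knob mentioned above is one of ramoops' module parameters (see Documentation/admin-guide/ramoops.rst); a minimal command-line sketch enabling the ftrace buffer might look like the following — the reserved address and the sizes here are hypothetical and platform-specific:

```shell
# Hypothetical reserved-RAM region for ramoops; ftrace_size carves a slice
# of it out for the function-trace ring buffer (small by design, so it
# mostly captures the most recently executed functions, as noted above).
ramoops.mem_address=0x8000000 ramoops.mem_size=0x100000 \
ramoops.record_size=0x4000 ramoops.ftrace_size=0x4000
```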
Hi Ashish, non-technical comment: in the subject, you might want to
s/SWIOTBL/SWIOTLB .
cheers,
Guilherme
Hi Dave and Kairui, thanks for your responses! OK, if that makes sense
to you, I'm fine with it. I'd just recommend testing recent kernels on
multiple distros with the minimum "range" to see if 64M is enough for
crashkernel; maybe we'd need to bump that.
Cheers,
Guilherme
Hi Saeed, thanks for your patch/idea! Comments inline, below.
On Wed, Nov 18, 2020 at 8:29 PM Saeed Mirzamohammadi wrote:
>
> This adds crashkernel=auto feature to configure reserved memory for
> vmcore creation to both x86 and ARM platforms based on the total memory
> size.
>
> Cc: sta...@vger.k
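For context on the proposal above: upstream already supports a range-based syntax, crashkernel=&lt;range1&gt;:&lt;size1&gt;[,&lt;range2&gt;:&lt;size2&gt;,...], documented in kernel-parameters.txt, where the first range containing the total system RAM wins. A rough shell sketch (not the kernel's actual parser) of how a size gets selected:

```shell
# Convert a kernel-style size suffix (K/M/G) to bytes.
to_bytes() {
  case "$1" in
    *K) echo $(( ${1%K} << 10 ));;
    *M) echo $(( ${1%M} << 20 ));;
    *G) echo $(( ${1%G} << 30 ));;
    *)  echo "$1";;
  esac
}

# crashkernel_size SPEC SYSTEM_RAM_BYTES
# Pick the size of the first start-[end]:size range matching total RAM;
# an omitted end means "no upper bound". Prints 0 if nothing matches.
crashkernel_size() {
  local entry rng size lo hi
  for entry in $(echo "$1" | tr ',' ' '); do
    rng=${entry%%:*}; size=${entry#*:}
    lo=$(to_bytes "${rng%%-*}")
    hi=${rng#*-}
    if [ -z "$hi" ]; then hi=$(( 1 << 62 )); else hi=$(to_bytes "$hi"); fi
    if [ "$2" -ge "$lo" ] && [ "$2" -lt "$hi" ]; then
      to_bytes "$size"; return
    fi
  done
  echo 0
}

# A 4G machine with "512M-2G:64M,2G-:128M" reserves 128M:
crashkernel_size "512M-2G:64M,2G-:128M" $(( 4 << 30 ))   # prints 134217728
```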
Thank you Vincent, much appreciated! I'll respond in the patch thread,
hopefully we can get that included in 5.4.y .
Cheers,
Guilherme
Thanks a lot Bjorn! I confess that, except for PPC64 server machines,
I've never seen other "domains" or segments. Is it common on x86 to
have that? Is early_quirks() restricted to the first segment, no
matter how many host bridges we have in segment 0?
Thanks again!
On Mon, Nov 16, 2020 at 10:07 PM Eric W. Biederman wrote:
> [...]
> > I think we need to disable MSIs in the crashing kernel before the
> > kexec. It adds a little more code in the crash_kexec() path, but it
> > seems like a worthwhile tradeoff.
>
> Disabling MSIs in the b0rken kernel is not poss
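As an aside on what "disabling MSIs" amounts to at the hardware level: clearing the MSI Enable bit (bit 0 of the Message Control word) in each device's MSI capability. On a healthy system that can be sketched from userspace with setpci — the 00:19.0 address is hypothetical, and zeroing the whole word clobbers the other control bits, so a real implementation would read-modify-write:

```shell
# Illustrative only: zero the Message Control word (MSI capability
# offset +2) of a hypothetical device 00:19.0, clearing MSI Enable.
setpci -s 00:19.0 CAP_MSI+2.w=0000
```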
On Mon, Nov 16, 2020 at 6:45 PM Eric W. Biederman wrote:
> The way to do that would be to collect the set of pci devices when the
> kexec on panic kernel is loaded, not during crash_kexec. If someone
> performs a device hotplug they would need to reload the kexec on panic
> kernel.
>
> I am not n
On Mon, Oct 26, 2020 at 6:21 PM Thomas Gleixner wrote:
> [...]
> > So, I don't want to hijack Liu's thread, but do you think it makes
> > sense to have my approach as a (debug) parameter to prevent such a
> > degenerate case?
>
> At least it makes sense to some extent even if it's incomplete. What
On Mon, Oct 26, 2020 at 4:59 PM Thomas Gleixner wrote:
>
> On Mon, Oct 26 2020 at 12:06, Guilherme Piccoli wrote:
> > On Sun, Oct 25, 2020 at 8:12 AM Pingfan Liu wrote:
> >
> > Some time ago (2 years) we faced a similar issue on x86-64, a
> > hard-to-debug problem
On Sun, Oct 25, 2020 at 8:12 AM Pingfan Liu wrote:
>
> On Thu, Oct 22, 2020 at 4:37 PM Thomas Gleixner wrote:
> >
> > On Thu, Oct 22 2020 at 13:56, Pingfan Liu wrote:
> > > I hit a irqflood bug on powerpc platform, and two years ago, on a x86
> > > platform.
> > > When the bug happens, the kerne
On Mon, Aug 17, 2020 at 2:05 PM Greg KH wrote:
>
> On Mon, Aug 17, 2020 at 01:59:00PM -0300, Guilherme G. Piccoli wrote:
> > On 17/08/2020 13:49, Greg KH wrote:
> > > [...]
> > >> I'm sorry, I hoped the subject + thread would suffice heh
> > >
> > > There is no thread here :(
> > >
> >
> > Wow, th
Great catch Colin, thanks!
Feel free to add my:
Reviewed-by: Guilherme G. Piccoli
P.S. Not sure if it's only me, but the diff is so much easier to
read when git is set to use the patience diff algorithm:
https://termbin.com/f8ig
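For anyone who wants the same view, git can be told to use the patience algorithm either per invocation or by default:

```shell
# One-off, for a single diff:
git diff --patience

# Or make it the default for all diffs:
git config --global diff.algorithm patience
```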
Thanks Vlastimil! I agree with you... it's a good new behavior to report
that case, I guess.
Cheers,
Guilherme
On Tue, Jun 16, 2020 at 5:02 AM Vlastimil Babka wrote:
>
> On 6/16/20 9:38 AM, kernel test robot wrote:
> > Greeting,
> >
> > FYI, we noticed the following commit (built with gcc-9):
> >
On Mon, May 18, 2020 at 9:54 AM Enderborg, Peter wrote:
> Usually change existing causes confusion. It should not be a problem but it
> happen.
>
I am sorry, but I really didn't understand your statement; could you be
more specific?
Thanks again!
Hi Peter, thanks for the feedback. What do you mean by "trace
notification"? We seem to have a trace event in that function you
mentioned. Also, accounting for that function is enough to
differentiate when compaction is triggered by the kernel itself from
when it's triggered by the user (which is our use case here).
On Sun, May 10, 2020 at 10:25 PM David Rientjes wrote:
> [...]
> The kernel log is not preferred for this (or drop_caches, really) because
> the amount of info can cause important information to be lost. We don't
> really gain anything by printing that someone manually triggered
> compaction; t
On Fri, May 8, 2020 at 3:31 PM David Rientjes wrote:
> It doesn't make sense because it's only being done here for the entire
> system, there are also per-node sysfs triggers so you could do something
> like iterate over the nodemask of all nodes with memory and trigger
> compaction manually and t
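The two interfaces being contrasted above are the system-wide proc trigger and the per-node sysfs triggers; iterating over the nodes as suggested can be sketched as follows (requires root):

```shell
# System-wide: compact all zones at once.
echo 1 > /proc/sys/vm/compact_memory

# Per node: trigger compaction on each NUMA node individually.
for n in /sys/devices/system/node/node*/compact; do
        echo 1 > "$n"
done
```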
On Fri, Oct 11, 2019 at 8:52 PM Qian Cai wrote:
>
>
> It's simply error-prone to reuse the sysctl.conf from the first kernel, as it
> could contain lots of things that will kill the kdump kernel.
Makes sense, I agree with you. But still, there's no
formal/right/single way to do kdump, so I don't thin
On Fri, Oct 11, 2019 at 8:36 PM Qian Cai wrote:
>
> Typically, kdump kernel has its own initramfs, and don’t even need to mount a
> rootfs, so I can’t see how sysfs/sysctl is relevant here.
Thanks for the quick response. Kdump in Ubuntu, for example, relies on
mounting the root filesystem.
Even in
On Fri, Oct 12, 2018 at 12:57 PM Dmitry Safonov wrote:
>
> Hi Guilherme,
>
> Just to let you know - I've done with more urgent issues now,
> so I'll be back on this patch on Monday, installing qemu-system-hppa
> and debugging the root case.
>
> Thanks,
> Dmitry
Awesome Dmitry, thanks for the head
On Tue, Oct 2, 2018 at 6:33 PM Dmitry Safonov wrote:
> [...]
> Well, v5 passes 0day, so all previous reports are fixed.
> But there is a new one about reboot on parisc platform which takes ~3
> mins after the patch with ldisc locked on tty_reopen().
>
> I believe it's related to holding read side
On Mon, Oct 1, 2018 at 12:50 PM Arnd Bergmann wrote:
>
> Hi Guilherme,
>
> I think what happened is that nobody felt responsible for picking
> it up. The patch was sent 'to: linux-kernel@vger.kernel.org'
> with a number of people in Cc but nobody listed as a recipient
> personally.
>
> I'd suggest