On Tue, Sep 22, 2020 at 03:00:17PM +0100, Matthew Wilcox (Oracle) wrote:
> Here is a very rare race which leaks memory:
>
> Page P0 is allocated to the page cache.
> Page P1 is free.
>
> Thread A                Thread B                Thread C
> find_get_entry():
> xas_load() returns P0
>
On Wed, Sep 23, 2020 at 10:00:41AM -0500, George Prekas wrote:
> If an interrupt arrives between llist_add and
> send_call_function_single_ipi in the following code snippet, then the
> remote CPU will not receive the IPI in a timely manner and subsequent
> SMP calls even from other CPUs for other f
On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
> Introduce a new API hk_num_online_cpus(), that can be used to
> retrieve the number of online housekeeping CPUs that are meant to handle
> managed IRQ jobs.
>
> This API is introduced for the drivers that were previously relying
On Wed, Sep 23, 2020 at 04:37:44PM -0700, Prasad Sodagudi wrote:
> These are all changes related to the cpu hotplug path and we would like to
> seek upstream review. These patches have been in the Qualcomm downstream
> kernel for quite a long time. The first patch sets the rt priority of the
> hotplug task and the second pa
On Wed, Sep 23, 2020 at 11:52:51AM -0400, Steven Rostedt wrote:
> On Wed, 23 Sep 2020 10:40:32 +0200
> pet...@infradead.org wrote:
>
> > However, with migrate_disable() we can have each task preempted in a
> > migrate_disable() region, worse we can stack them all on the _same_ CPU
> > (super ridic
On Thu, Sep 24, 2020 at 06:44:12AM +0200, Dmitry Vyukov wrote:
> On Thu, Sep 24, 2020 at 6:36 AM Herbert Xu
> wrote:
> > > (k-slock-AF_INET6){+.-.}-{2:2}
That's a seqlock.
> > What's going on with all these bogus lockdep reports?
> >
> > These are two completely different locks, one is for TCP
On Wed, Sep 23, 2020 at 10:31:10AM +0200, Thomas Gleixner wrote:
> In practice migrate disable could be taken into account on placement
> decisions, but yes we don't have anything like that at the moment.
I think at the very least we should do some of that.
The premise is wanting to run the
On Wed, Sep 23, 2020 at 09:29:35PM +0800, Xianting Tian wrote:
> In the file fair.c, sometimes update_tg_load_avg(cfs_rq, 0) is used,
> sometimes update_tg_load_avg(cfs_rq, false) is used. So change it
> to use a bool parameter.
afaict it's never true (or my git-grep failed), so why not remove the
ar
On Mon, Sep 21, 2020 at 09:27:57PM +0200, Thomas Gleixner wrote:
> Alternatively this could of course be solved with per CPU page tables
> which will come around some day anyway I fear.
Previously (with PTI) we looked at making the entire kernel map per-CPU,
and that takes a 2K copy on switch_mm()
On Mon, Sep 21, 2020 at 09:27:57PM +0200, Thomas Gleixner wrote:
> On Mon, Sep 21 2020 at 09:24, Linus Torvalds wrote:
> > On Mon, Sep 21, 2020 at 12:39 AM Thomas Gleixner wrote:
> >>
> >> If a task is migrated to a different CPU then the mapping address will
> >> change which will explode in colo
On Mon, Sep 21, 2020 at 09:16:54PM +0200, Thomas Gleixner wrote:
> On Mon, Sep 21 2020 at 18:36, Peter Zijlstra wrote:
> > +/*
> > + * Migrate-Disable and why it is (strongly) undesired.
> > + *
> > + * The premise of the Real-Time schedulers we have on Linux
> > + * (SCHED_FIFO/SCHED_DEADLINE) is
On Mon, Sep 14, 2020 at 07:34:14AM -0700, kan.li...@linux.intel.com wrote:
> From: Kan Liang
>
> Changes since V1:
> - Drop the platform device solution
> - A new uncore PCI sub driver solution is introduced which searches
> the PCIe Root Port device via pci_get_device() and id table.
> Regis
On Mon, Sep 21, 2020 at 11:21:54AM +0200, Daniel Vetter wrote:
> So question to rt/worker folks: What's the best way to let userspace set
> the scheduling mode and priorities of things the kernel does on its
> behalf? Surely we're not the first ones where if userspace runs with some
> rt priority
On Fri, Sep 18, 2020 at 12:48:24PM +0200, Oleg Nesterov wrote:
> Of course, this assumes that atomic_t->counter underflows "correctly", just
> like "unsigned int".
We've documented that we do. Lots of code relies on that.
See Documentation/atomic_t.txt TYPES
> But again, do we really want this?
On Fri, Sep 18, 2020 at 12:01:12PM +0200, pet...@infradead.org wrote:
> + u64 sum = per_cpu_sum(*(u64 *)sem->read_count);
Moo, that doesn't work, we have to do two separate sums. I shouldn't try
to be clever on a Friday I suppose :-(
On Fri, Sep 18, 2020 at 12:04:32PM +0200, pet...@infradead.org wrote:
> On Fri, Sep 18, 2020 at 12:01:12PM +0200, pet...@infradead.org wrote:
> > @@ -198,7 +198,9 @@ EXPORT_SYMBOL_GPL(__percpu_down_read);
> > */
> > static bool readers_active_check(struct percpu_rw_semaphore *sem)
> > {
> > -
On Fri, Sep 18, 2020 at 12:01:12PM +0200, pet...@infradead.org wrote:
> @@ -198,7 +198,9 @@ EXPORT_SYMBOL_GPL(__percpu_down_read);
> */
> static bool readers_active_check(struct percpu_rw_semaphore *sem)
> {
> - if (per_cpu_sum(*sem->read_count) != 0)
> + u64 sum = per_cpu_sum(*(u64 *)s
On Fri, Sep 18, 2020 at 11:07:02AM +0200, Jan Kara wrote:
> If people really wanted to avoid irq-safe inc/dec for archs where it is
> more expensive, one idea I had was that we could add 'read_count_in_irq' to
> percpu_rw_semaphore. So callers in normal context would use read_count and
> callers in
On Fri, Sep 18, 2020 at 09:00:03AM +0200, Thomas Gleixner wrote:
> >> +void migrate_disable(void)
> >> +{
> >> + unsigned long flags;
> >> +
> >> + if (!current->migration_ctrl.disable_cnt) {
> >> + raw_spin_lock_irqsave(&current->pi_lock, flags);
> >> + current->migration_ctrl.disab
On Thu, Sep 17, 2020 at 06:30:01PM +0200, Sebastian Siewior wrote:
> On 2020-09-17 17:54:10 [+0200], pet...@infradead.org wrote:
> > I'm not sure what the problem with FPU was, I was throwing alternatives
> > at tglx to see what would stick, in part to (re)discover the design
> > constraints of thi
On Thu, Sep 17, 2020 at 05:13:41PM +0200, Sebastian Siewior wrote:
> On 2020-09-17 16:49:37 [+0200], pet...@infradead.org wrote:
> > I'm aware of the duct-tape :-) But I was under the impression that we
> > didn't want the duct-tape, and that there was lots of issues with the
> > FPU code, or was t
On Wed, Sep 16, 2020 at 11:45:27AM -0700, Jakub Kicinski wrote:
> When CONFIG_LOCKDEP is not set, lock_is_held() and lockdep_is_held()
> are not declared or defined. This forces all callers to use ifdefs
> around these checks.
>
> Recent RCU changes added a lot of lockdep_is_held() calls inside
>
On Thu, Sep 17, 2020 at 04:38:50PM +0200, Sebastian Siewior wrote:
> On 2020-09-17 16:24:38 [+0200], pet...@infradead.org wrote:
> > And if I'm not mistaken, the above migrate_enable() *does* require being
> > able to schedule, and our favourite piece of futex:
> >
> > raw_spin_lock_irq(&q.pi_
On Thu, Sep 17, 2020 at 11:42:11AM +0200, Thomas Gleixner wrote:
> +static inline void update_nr_migratory(struct task_struct *p, long delta)
> +{
> + if (p->nr_cpus_allowed > 1 && p->sched_class->update_migratory)
> + p->sched_class->update_migratory(p, delta);
> +}
Right, so as
On Thu, Sep 17, 2020 at 01:48:38PM +0100, Matthew Wilcox wrote:
> On Thu, Sep 17, 2020 at 02:01:33PM +0200, Oleg Nesterov wrote:
> > IIUC, file_end_write() was never IRQ safe (at least if !CONFIG_SMP), even
> > before 8129ed2964 ("change sb_writers to use percpu_rw_semaphore"), but this
> > doesn't
On Tue, Sep 15, 2020 at 06:26:52PM +0200, Rafael J. Wysocki wrote:
> On Tue, Sep 15, 2020 at 12:44 PM Peter Zijlstra wrote:
> >
> > Make acpi_processor_idle use the common broadcast code, there's no
> > reason not to. This also removes some RCU usage after
> > rcu_idle_enter().
> >
> > Signed-off-
On Wed, Sep 16, 2020 at 03:58:17PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-09-16 14:10:20 [+0200], pet...@infradead.org wrote:
>
> squeeze that in please:
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index a4fe22b8b8418..bed3cd28af578 100644
> --- a/kernel/sched/core.c
>
On Wed, Sep 16, 2020 at 01:53:11PM +0200, Jiri Olsa wrote:
> There's a possible race in perf_mmap_close when checking ring buffer's
> mmap_count refcount value. The problem is that the mmap_count check is
> not atomic because we call atomic_dec and atomic_read separately.
>
> perf_mmap_close:
>
On Wed, Sep 16, 2020 at 12:10:21PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Sep 16, 2020 at 04:17:00PM +0200, pet...@infradead.org escreveu:
> > On Wed, Sep 16, 2020 at 11:07:44AM -0300, Arnaldo Carvalho de Melo wrote:
> > > Em Wed, Sep 16, 2020 at 10:20:18AM +0200, Jiri Olsa escreveu:
> >
On Wed, Sep 16, 2020 at 01:28:19PM +0200, Dmitry Vyukov wrote:
> On Fri, Sep 4, 2020 at 6:05 PM Tetsuo Handa
> wrote:
> >
> > Hello. Can we apply this patch?
> >
> > This patch addresses top crashers for syzbot, and applying this patch
> > will help utilizing syzbot's resource for finding other bu
On Wed, Sep 16, 2020 at 11:07:44AM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Sep 16, 2020 at 10:20:18AM +0200, Jiri Olsa escreveu:
> > > IIRC BUILD_ID_SIZE is 20 bytes which is the correct size for SHA-1. A
> > > build ID may be 128-bits (16 bytes) if md5 or uuid hashes are used.
> > > Shou
On Wed, Sep 16, 2020 at 12:18:45PM +0200, Sebastian Andrzej Siewior wrote:
> With this on top of -rc5 I get:
>
> [ 42.678670] process 1816 (hackbench) no longer affine to cpu2
> [ 42.678684] process 1817 (hackbench) no longer affine to cpu2
> [ 42.710502] [ cut here ]
On Wed, Sep 16, 2020 at 08:32:20PM +0800, Hou Tao wrote:
> I have simply tested the performance impact on both x86 and aarch64.
>
> There is no degradation under x86 (2 sockets, 18 cores per socket, 2 threads
> per core)
Yeah, x86 is magical here, it's the same single instruction for both ;-)
But
On Wed, Sep 16, 2020 at 08:52:07AM -0400, Qian Cai wrote:
> On Tue, 2020-09-15 at 16:30 +0200, pet...@infradead.org wrote:
> > On Tue, Sep 15, 2020 at 08:48:17PM +0800, Boqun Feng wrote:
> > > I think this happened because seqcount_##lockname##_init() is defined at
> > > function rather than macro,
On Wed, Sep 16, 2020 at 09:00:59AM -0400, Qian Cai wrote:
>
>
> - Original Message -
> > On Wed, Sep 16, 2020 at 08:52:07AM -0400, Qian Cai wrote:
> > > On Tue, 2020-09-15 at 16:30 +0200, pet...@infradead.org wrote:
> > > > On Tue, Sep 15, 2020 at 08:48:17PM +0800, Boqun Feng wrote:
> > >
On Wed, Sep 16, 2020 at 10:46:41AM +0200, Marco Elver wrote:
> On Wed, 16 Sep 2020 at 10:30, wrote:
> > On Tue, Sep 15, 2020 at 08:09:16PM +0200, Marco Elver wrote:
> > > On Tue, 15 Sep 2020 at 19:40, Nick Desaulniers
> > > wrote:
> > > > On Tue, Sep 15, 2020 at 10:21 AM Borislav Petkov wrote:
On Tue, Sep 15, 2020 at 08:09:16PM +0200, Marco Elver wrote:
> On Tue, 15 Sep 2020 at 19:40, Nick Desaulniers
> wrote:
> > On Tue, Sep 15, 2020 at 10:21 AM Borislav Petkov wrote:
> > > init/calibrate.o: warning: objtool: asan.module_ctor()+0xc: call without
> > > frame pointer save/setup
> > >
On Tue, Sep 15, 2020 at 12:51:47PM -0700, Nick Desaulniers wrote:
> It would be much nicer if we had the flexibility to disable stack
> protectors per function, rather than per translation unit. I'm going
> to encourage you to encourage your favorite compile vendor ("write to
> your senator") to s
On Wed, Sep 16, 2020 at 03:44:15AM +, Xu Wang wrote:
> seq_puts is a lot cheaper than seq_printf, so use that to print
> literal strings.
performance is not a consideration here.
> Signed-off-by: Xu Wang
> ---
> kernel/sched/stats.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
On Wed, Sep 16, 2020 at 08:59:36AM +0800, Huang Ying wrote:
> So in this patch, if MPOL_BIND is used to bind the memory of the
> application to multiple nodes, and in the hint page fault handler both
> the faulting page node and the accessing node are in the policy
> nodemask, the page will be tri
On Tue, Sep 15, 2020 at 01:55:58PM -0700, Andy Lutomirski wrote:
> The old smap_save() code was:
>
> pushf
> pop %0
>
> with %0 defined by an "=rm" constraint. This is fine if the
> compiler picked the register option, but it was incorrect with an
> %rsp-relative memory operand. With some i
On Tue, Sep 15, 2020 at 08:48:17PM +0800, Boqun Feng wrote:
> I think this happened because seqcount_##lockname##_init() is defined at
> function rather than macro, so when the seqcount_init() gets expand in
Bah! I hate all this :/
I suspect the below, while more verbose than I'd like is the best
On Tue, Sep 15, 2020 at 05:51:50PM +0200, pet...@infradead.org wrote:
> Anyway, I'll rewrite the Changelog and stuff it in locking/urgent.
How's this?
---
Subject: locking/percpu-rwsem: Use this_cpu_{inc,dec}() for read_count
From: Hou Tao
Date: Tue, 15 Sep 2020 22:07:50 +0800
From: Hou Tao
On Tue, Sep 15, 2020 at 08:06:59PM +0200, Michal Suchanek wrote:
> This reverts commit 116ac378bb3ff844df333e7609e7604651a0db9d.
>
> This commit causes the kernel to oops and reboot when injecting a SLB
> multihit which causes a MCE.
>
> Before this commit a SLB multihit was corrected by the kern
On Tue, Sep 15, 2020 at 05:11:23PM +0100, Will Deacon wrote:
> On Tue, Sep 15, 2020 at 06:03:44PM +0200, pet...@infradead.org wrote:
> > On Tue, Sep 15, 2020 at 05:51:50PM +0200, pet...@infradead.org wrote:
> >
> > > Anyway, I'll rewrite the Changelog and stuff it in locking/urgent.
> >
> > How's
On Tue, Sep 15, 2020 at 05:31:14PM +0200, Oleg Nesterov wrote:
> > So yeah, fs/super totally abuses percpu_rwsem, and yes, using it from
> > IRQ context is totally out of spec. That said, we've (grudgingly)
> > accommodated them before.
>
> Yes, I didn't expect percpu_up_ can be called from IRQ :/
On Tue, Sep 15, 2020 at 10:07:50PM +0800, Hou Tao wrote:
> Under aarch64, __this_cpu_inc() is neither IRQ-safe nor atomic, so
> when percpu_up_read() is invoked under IRQ-context (e.g. aio completion),
> and it interrupts the process on the same CPU which is invoking
> percpu_down_read(), the decre
On Mon, Sep 14, 2020 at 01:29:34PM -0400, Qian Cai wrote:
> On Wed, 2020-09-09 at 10:08 +0530, Naresh Kamboju wrote:
> > While booting x86_64 with Linux next 20200908 tag kernel this warning
> > was noticed.
>
> This pretty much looks like the same issue in:
>
> https://lore.kernel.org/lkml/20200
On Tue, Sep 15, 2020 at 12:50:54AM -0700, Hugh Dickins wrote:
> This is just an FYI written from a position of ignorance: I may
> have got it wrong, and my build environment too piecemeal to matter
> to anyone else; but what I saw was weird enough to be worth mentioning,
> in case it saves someone
On Mon, Sep 14, 2020 at 12:04:22PM -0500, Josh Poimboeuf wrote:
> There have been some reports of "bad bp value" warnings printed by the
> frame pointer unwinder:
>
> WARNING: kernel stack regs at 5bac7112 in sh:1014 has bad 'bp'
> value
>
> This warning happens when u
On Mon, Sep 14, 2020 at 12:28:41PM -0300, Arnaldo Carvalho de Melo wrote:
> > > struct {
> > > struct perf_event_header header;
>
> > > u32 pid, tid;
> > > u64 addr;
> > > u64 len;
> > > u64
On Mon, Sep 14, 2020 at 12:27:35PM +0100, Qais Yousef wrote:
> What does PREEMPT_RT do to deal with softirqs delays?
Makes the lot preemptible, you found the patch below.
> I have tried playing with enabling threadirqs, which AFAIU should make
> softirqs
> preemptible, right?
Not yet,..
> I re
On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote:
> Vincent Guittot (4):
> sched/fair: relax constraint on task's load during load balance
> sched/fair: reduce minimal imbalance threshold
> sched/fair: minimize concurrent LBs between domain level
> sched/fair: reduce busy loa
On Mon, Sep 14, 2020 at 02:52:16PM +1000, Nicholas Piggin wrote:
> Reading and modifying current->mm and current->active_mm and switching
> mm should be done with irqs off, to prevent races seeing an intermediate
> state.
>
> This is similar to commit 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB
On Sat, Sep 12, 2020 at 03:05:48AM -0400, Gabriel Krisman Bertazi wrote:
> In preparation to remove TIF_IA32, stop using it in perf events code.
>
> Tested by running perf on 32-bit, 64-bit and x32 applications.
>
> Suggested-by: Andy Lutomirski
> Signed-off-by: Gabriel Krisman Bertazi
Acked-b
On Sun, Sep 13, 2020 at 11:02:49PM +0200, Jiri Olsa wrote:
> Add new version of mmap event. The MMAP3 record is an
> augmented version of MMAP2, it adds build id value to
> identify the exact binary object behind memory map:
>
> struct {
> struct perf_event_header header;
>
> u32
On Sun, Sep 13, 2020 at 11:41:00PM -0700, Stephane Eranian wrote:
> On Sun, Sep 13, 2020 at 2:03 PM Jiri Olsa wrote:
> what happens if I set mmap3 and mmap2?
>
> I think using mmap3 for every mmap may be overkill as you add useless
> 20 bytes to an mmap record.
> I am not sure if your code handle
On Fri, Sep 11, 2020 at 05:46:45PM +0100, Qais Yousef wrote:
> On 09/09/20 17:09, qianjun.ker...@gmail.com wrote:
> > From: jun qian
> >
> > When we get the pending softirqs, we need to process all the pending
> > softirqs in the while loop. If the processing time of each pending
> > softirq needs
On Fri, Sep 11, 2020 at 01:17:02PM +0100, Valentin Schneider wrote:
> On 11/09/20 09:17, Peter Zijlstra wrote:
> > The intent of balance_callback() has always been to delay executing
> > balancing operations until the end of the current rq->lock section.
> > This is because balance operations must
On Fri, Sep 11, 2020 at 01:17:45PM +0100, Valentin Schneider wrote:
> > @@ -6968,6 +7064,8 @@ int sched_cpu_deactivate(unsigned int cp
> >*/
> > synchronize_rcu();
> >
> > + balance_push_set(cpu, true);
> > +
>
> IIUC this is going to make every subsequent finish_lock_switch()
> m
On Fri, Sep 11, 2020 at 03:55:22PM +0300, Adrian Hunter wrote:
> On 11/09/20 2:41 pm, pet...@infradead.org wrote:
> > On Tue, Sep 01, 2020 at 12:16:17PM +0300, Adrian Hunter wrote:
> >> Add synchronize_rcu() after list_del_rcu() in
> >> ftrace_remove_trampoline_from_kallsyms() to protect readers of
On Wed, Sep 09, 2020 at 05:09:31PM +0800, qianjun.ker...@gmail.com wrote:
> From: jun qian
>
> When we get the pending softirqs, we need to process all the pending
> softirqs in the while loop. If the processing time of each pending
> softirq needs more than 2 msec in this loop, or one of the soft
On Tue, Sep 01, 2020 at 12:16:17PM +0300, Adrian Hunter wrote:
> Add synchronize_rcu() after list_del_rcu() in
> ftrace_remove_trampoline_from_kallsyms() to protect readers of
> ftrace_ops_trampoline_list (in ftrace_get_trampoline_kallsym)
> which is used when kallsyms is read.
>
> Fixes: fc0ea795
On Wed, Sep 02, 2020 at 12:54:41PM +0530, Viresh Kumar wrote:
> + atomic_t reset_pending;
> + atomic_set(&stats->reset_pending, 0);
> + if (atomic_read(&stats->reset_pending))
> + bool pending = atomic_read(&stats->reset_pending);
> + atomic_set(&stats->reset_pending, 1);
> +
On Fri, Sep 04, 2020 at 04:31:44PM -0400, Gabriel Krisman Bertazi wrote:
> Syscall User Dispatch (SUD) must take precedence over seccomp, since the
> use case is emulation (it can be invoked with a different ABI) such that
> seccomp filtering by syscall number doesn't make sense in the first
> plac
On Fri, Sep 04, 2020 at 04:31:43PM -0400, Gabriel Krisman Bertazi wrote:
> +struct syscall_user_dispatch {
> + char __user *selector;
> + unsigned long dispatcher_start;
> + unsigned long dispatcher_end;
> +};
> +int do_syscall_user_dispatch(struct pt_regs *regs)
> +{
> + struct s
On Fri, Sep 04, 2020 at 04:31:40PM -0400, Gabriel Krisman Bertazi wrote:
> diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
> index efebbffcd5cc..72ce9ca860c6 100644
> --- a/include/linux/entry-common.h
> +++ b/include/linux/entry-common.h
> @@ -21,10 +21,6 @@
> # define _T
On Fri, Sep 04, 2020 at 04:31:39PM -0400, Gabriel Krisman Bertazi wrote:
> +static inline void __set_tsk_syscall_intercept(struct task_struct *tsk,
> +unsigned int type)
> +{
> + tsk->syscall_intercept |= type;
> +
> + if (tsk->syscall_intercept)
> +
On Mon, Sep 07, 2020 at 09:05:02PM +0800, qianjun.ker...@gmail.com wrote:
> From: jun qian
>
> It is hard to understand the meaning of the return value of
> wakeup_preempt_entity, so I fixed it.
> @@ -6822,9 +6828,9 @@ static unsigned long wakeup_gran(struct sched_entity
> *se
On Thu, Sep 10, 2020 at 06:59:21PM -0300, Jason Gunthorpe wrote:
> So, I suggest pXX_offset_unlocked()
Urgh, no. Elsewhere in gup _unlocked() means it will take the lock
itself (get_user_pages_unlocked()) -- although often it seems to mean
the lock is already held (git grep _unlocked and marvel).
Hi,
While playing with hotplug, I ran into the below:
[ 2305.676384] [ cut here ]
[ 2305.681543] WARNING: CPU: 1 PID: 15 at kernel/sched/core.c:1924
__set_cpus_allowed_ptr+0x1bd/0x230
[ 2305.691540] Modules linked in: kvm_intel kvm irqbypass rapl intel_cstate
intel_uncor
On Thu, Sep 10, 2020 at 04:10:06PM +0200, pet...@infradead.org wrote:
> Hi,
>
> While playing with hotplug, I ran into the below:
Ah, it could be I wrecked my kernel bad... :-(
I'll let you know if I can reproduce this on a pristine kernel.
On Thu, Sep 10, 2020 at 04:37:45PM +0200, pet...@infradead.org wrote:
> On Thu, Sep 10, 2020 at 04:10:06PM +0200, pet...@infradead.org wrote:
> > Hi,
> >
> > While playing with hotplug, I ran into the below:
>
> Ah, it could be I wrecked my kernel bad... :-(
>
> I'll let you know if I can reprod
On Thu, Sep 10, 2020 at 02:43:13PM +0300, Anatoly Pugachev wrote:
> Hello!
>
> The following git patch 044d0d6de9f50192f9697583504a382347ee95ca
> (linux git master branch) introduced the following kernel OOPS upon
> kernel boot on my sparc64 T5-2 ldom (VM):
https://lkml.kernel.org/r/2020090815415
On Thu, Sep 10, 2020 at 10:32:23AM +0200, pet...@infradead.org wrote:
> > @@ -363,7 +363,14 @@ perf_ibs_event_update(struct perf_ibs *perf_ibs,
> > struct perf_event *event,
> > static inline void perf_ibs_enable_event(struct perf_ibs *perf_ibs,
> > struct hw_
On Tue, Sep 08, 2020 at 04:47:36PM -0500, Kim Phillips wrote:
> Stephane Eranian found a bug in that IBS' current Fetch counter was not
> being reset when the driver would write the new value to clear it along
> with the enable bit set, and found that adding an MSR write that would
> first disable
On Wed, Sep 02, 2020 at 06:48:30AM -0700, Guenter Roeck wrote:
> On 9/2/20 2:12 AM, pet...@infradead.org wrote:
> > On Wed, Sep 02, 2020 at 11:09:35AM +0200, pet...@infradead.org wrote:
> >> On Tue, Sep 01, 2020 at 09:21:37PM -0700, Guenter Roeck wrote:
> >>> [0.00] WARNING: CPU: 0 PID: 0 a
After commit eb1f00237aca ("lockdep,trace: Expose tracepoints") the
lock tracepoints are visible to lockdep and RCU-lockdep is finding a
bunch more RCU violations that were previously hidden.
Switch the idle->seqcount over to using raw_write_*() to avoid the
lockdep annotation and thus the lock
On Tue, Sep 08, 2020 at 08:28:16AM -0700, Mike Travis wrote:
> I didn't. If I could figure out how to convert quilt patches into git
> commits I might be able to do that? (And I didn't know that diffstats were
> needed on the intro?)
$ git quiltimport
Or, for the more enterprising person:
$ qui
On Tue, Sep 08, 2020 at 07:40:23AM -0700, Guenter Roeck wrote:
> qemu-system-sparc64 -M sun4u -cpu "TI UltraSparc IIi" -m 512 \
> -initrd rootfs.cpio \
> -kernel arch/sparc/boot/image -no-reboot \
> -append "panic=-1 slub_debug=FZPUA rdinit=/sbin/init console=ttyS0" \
> -nographic -
On Mon, Sep 07, 2020 at 06:01:15PM +0200, pet...@infradead.org wrote:
> On Fri, Aug 21, 2020 at 12:57:54PM -0700, kan.li...@linux.intel.com wrote:
> > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> > index 0f3d01562ded..fa08d810dcd2 100644
> > --- a/arch/x86/events/core.c
> > +++ b/
On Tue, Sep 08, 2020 at 12:50:44PM -0400, Qian Cai wrote:
> > No, you're talking nonsense. We must not free @mm when
> > 'current->active_mm == mm', never.
>
> Yes, you are right. It still trigger this below on powerpc with today's
> linux-next by fuzzing for a while (saw a few times on recent lin
On Fri, Sep 04, 2020 at 05:32:30PM +0200, Ahmed S. Darwish wrote:
> @@ -406,13 +443,20 @@ static inline int read_seqcount_t_retry(const
> seqcount_t *s, unsigned start)
> return __read_seqcount_t_retry(s, start);
> }
>
> +/*
> + * Enforce non-preemptibility for all seqcount_LOCKNAME_t wri
On Fri, Sep 04, 2020 at 05:32:28PM +0200, Ahmed S. Darwish wrote:
> static __always_inline seqcount_t * \
> -__seqcount_##lockname##_ptr(seqcount_##lockname##_t *s)
> \
> +__seqprop_seqcount_##lockname##_ptr(seqcount_##lockname##_t *s)
On Tue, Sep 08, 2020 at 11:15:14AM +, eddy...@trendmicro.com wrote:
> > From: pet...@infradead.org
> >
> > I'm now trying and failing to reproduce. I can't seem to make it use
> > int3 today. It seems to want to use ftrace or refuses everything. I'm
> > probably doing it wrong.
> >
>
> You
On Tue, Sep 08, 2020 at 07:12:23PM +1000, Stephen Rothwell wrote:
> Hi all,
>
> After merging the tip tree, today's linux-next build (powerpc
> allyesconfig) failed like this:
>
> ERROR: modpost: too long symbol
> ".__tracepoint_iter_pnfs_mds_fallback_pg_get_mirror_count"
> [fs/nfs/flexfilelayo
On Thu, Sep 03, 2020 at 10:39:54AM +0900, Masami Hiramatsu wrote:
> > There's a bug, that might make it miss it. I have a patch. I'll send it
> > shortly.
>
> OK, I've confirmed that the lockdep warns on kretprobe from INT3
> with your fix.
I'm now trying and failing to reproduce. I can't see
On Fri, Sep 04, 2020 at 06:00:31PM +0200, Christian Göttsche wrote:
> sched_setattr(2) does via kernel/sched/core.c:__sched_setscheduler()
> issue a CAP_SYS_NICE audit event unconditionally, even when the requested
> operation does not require that capability / is un-privileged.
>
> Perform privil
On Mon, Sep 07, 2020 at 11:48:45AM +0100, Qais Yousef wrote:
> IMHO the above is a hack. Out-of-tree modules should rely on public headers
> and
> exported functions only. What you propose means that people who want to use
> these tracepoints in meaningful way must have a prebuilt kernel handy. Wh
On Mon, Sep 07, 2020 at 06:29:13PM +0200, Ahmed S. Darwish wrote:
> I've been unsuccessful in reproducing this huge, 200+ bytes, difference.
> Can I please get the defconfig and GCC version?
I think I lost the config and it's either gcc-9.3 or gcc-10, I can't
remember.
I just tried with:
make
On Mon, Sep 07, 2020 at 02:03:09PM +0200, Joerg Vehlow wrote:
>
>
> On 9/7/2020 1:46 PM, pet...@infradead.org wrote:
> > I think it's too complicated for that is needed, did you see my
> > suggestion from a year ago? Did i miss something obvious?
> >
> This one?
> https://lore.kernel.org/linux-
On Fri, Aug 21, 2020 at 12:57:54PM -0700, kan.li...@linux.intel.com wrote:
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 0f3d01562ded..fa08d810dcd2 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -1440,7 +1440,10 @@ static void x86_pmu_start(struct
(your mailer broke and forgot to keep lines shorter than 78 chars)
On Tue, Sep 01, 2020 at 12:46:41PM +0200, Frederic Weisbecker wrote:
> == TIF_NOHZ ==
>
> Need to get rid of that in order not to trigger syscall slowpath on
> CPUs that don't want nohz_full. Also we don't want to iterate all
On Mon, Sep 07, 2020 at 12:51:37PM +0200, Joerg Vehlow wrote:
> Hi,
>
> I guess there is currently no other way than to use something like Steven
> proposed. I implemented and tested the attached patch with a module,
> that triggers the soft lockup detection and it works as expected.
> I did not u
On Sat, Aug 22, 2020 at 07:49:28PM -0400, Steven Rostedt wrote:
> From this email:
>
> > The problem happens when that owner is the idle task, this can happen
> > when the irq/softirq hits the idle task, in that case the contending
> > mutex_lock() will try and PI boost the idle task, and that is
On Thu, Sep 03, 2020 at 09:26:04AM +0200, pet...@infradead.org wrote:
> On Thu, Sep 03, 2020 at 11:07:28AM +0900, Masahiro Yamada wrote:
> > Will re-implementing your sorting logic
> > in bash look cleaner?
>
> Possibly, I can try, we'll see.
It is somewhat cleaner, but it is _abysmally_ slow. B
On Fri, Sep 04, 2020 at 11:26:23AM +0200, Daniel Bristot de Oliveira wrote:
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 592467ba3f4d..56d185210a43 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -15363,6 +15363,7 @@ R: Dietmar Eggemann
> (SCHED_NORMAL)
> R: Steven Rostedt (SCHED_
On Thu, Sep 03, 2020 at 03:03:30PM -0700, Sami Tolvanen wrote:
> On Thu, Sep 3, 2020 at 2:51 PM Kees Cook wrote:
> >
> > On Thu, Sep 03, 2020 at 01:30:30PM -0700, Sami Tolvanen wrote:
> > > From: Peter Zijlstra
> > >
> > > Add the --mcount option for generating __mcount_loc sections
> > > needed
Please don't nest series!
Start a new thread for every posting.