Hi Steven,

Does the patch look good? Can it be picked up in the next rc?
Vaibhav

On Fri, Sep 7, 2018 at 3:31 PM Vaibhav Nagarnaik <vnagarn...@google.com> wrote:
>
> When reducing the ring buffer size, pages are removed by scheduling a work
> item on each CPU for the corresponding CPU ring buffer. After the pages
> are removed from the ring buffer linked list, the pages are free()d in a
> tight loop. The loop does not give up the CPU until all pages are removed.
> In the worst case, when a lot of pages are to be freed, this can
> cause a system stall.
>
> After the pages are removed from the list, the free() can happen while
> the work is rescheduled. Call cond_resched() in the loop to prevent the
> system hangup.
>
> Reported-by: Jason Behmer <jbeh...@google.com>
> Signed-off-by: Vaibhav Nagarnaik <vnagarn...@google.com>
> ---
>  kernel/trace/ring_buffer.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 1d92d4a982fd..65bd4616220d 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -1546,6 +1546,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
>  	tmp_iter_page = first_page;
>
>  	do {
> +		cond_resched();
> +
>  		to_remove_page = tmp_iter_page;
>  		rb_inc_page(cpu_buffer, &tmp_iter_page);
>
> --
> 2.19.0.rc2.392.g5ba43deb5a-goog