On Thu, Oct 10, 2013 at 08:52:33AM +0200, Geert Uytterhoeven wrote:
> On Thu, Oct 10, 2013 at 4:46 AM, Fengguang Wu <fengguang...@intel.com> wrote:
> > On Wed, Oct 09, 2013 at 11:47:33PM +0200, Jan Kara wrote:
> >> On Wed 09-10-13 20:43:50, Richard Weinberger wrote:
> >> > On 09.10.2013 19:26, Toralf Förster wrote:
> >> > > On 10/08/2013 10:07 PM, Geert Uytterhoeven wrote:
> >> > >> On Sun, Oct 6, 2013 at 11:01 PM, Toralf Förster
> >> > >> <toralf.foers...@gmx.de> wrote:
> >> > >>>> Hmm, now pages_dirtied is zero, according to the backtrace, but the
> >> > >>>> BUG_ON() asserts it's strictly positive?!?
> >> > >>>>
> >> > >>>> Can you please try the following instead of the BUG_ON():
> >> > >>>>
> >> > >>>>         if (pause < 0) {
> >> > >>>>                 printk("pages_dirtied = %lu\n", pages_dirtied);
> >> > >>>>                 printk("task_ratelimit = %lu\n", task_ratelimit);
> >> > >>>>                 printk("pause = %ld\n", pause);
>
> >> > >>> I tried it in different ways already - I'm completely unsuccessful
> >> > >>> in getting any printk output.
> >> > >>> As soon as the issue happens I do have a
> >> > >>>
> >> > >>>         BUG: soft lockup - CPU#0 stuck for 22s! [trinity-child0:1521]
> >> > >>>
> >> > >>> at stderr of the UML and then no further input is accepted. With
> >> > >>> uml_mconsole I'm however able to run very basic commands like a
> >> > >>> crash dump, sysrq and so on.
> >> > >>
> >> > >> You may get an idea of the magnitude of pages_dirtied by using a
> >> > >> chain of BUG_ON()s, like:
> >> > >>
> >> > >>         BUG_ON(pages_dirtied > 2000000000);
> >> > >>         BUG_ON(pages_dirtied > 1000000000);
> >> > >>         BUG_ON(pages_dirtied > 100000000);
> >> > >>         BUG_ON(pages_dirtied > 10000000);
> >> > >>         BUG_ON(pages_dirtied > 1000000);
> >> > >>
> >> > >> Probably 1 million is already too much for normal operation?
> >> > >>
> >> > >         period = HZ * pages_dirtied / task_ratelimit;
> >> > >         BUG_ON(pages_dirtied > 2000000000);
> >> > >         BUG_ON(pages_dirtied > 1000000000); <-------------- this is line 1467
> >> >
> >> > Summary for mm people:
> >> >
> >> > Toralf runs trinity on UML/i386.
> >> > After some time pages_dirtied becomes very large -
> >> > more than 1000000000 pages in this case.
> >> Huh, this is really strange. pages_dirtied is passed into
> >> balance_dirty_pages() from current->nr_dirtied. So I wonder how a value
> >> over 10^9 can get there.
>
> > I noticed aio_setup_ring() in the call trace and found that it recently
> > added a SetPageDirty() call in a loop, in commit 36bc08cc01 ("fs/aio:
> > Add support to aio ring pages migration"). So I added a CC for its authors.
>
> >> After all, that is over 4TB, so I somewhat doubt the
> >> task was ever able to dirty that much during its lifetime (but correct me
> >> if I'm wrong here; with UML and memory-backed disks it is not totally
> >> impossible)... I went through the logic of handling ->nr_dirtied but
> >> I didn't find any obvious problem there. Hum, maybe one thing - what
> >> 'task_ratelimit' values do you see in balance_dirty_pages()? If that one
> >> were huge, we could possibly accumulate a huge current->nr_dirtied.
> >>
> >> > Thus, period = HZ * pages_dirtied / task_ratelimit overflows
> >> > and period/pause becomes extremely large.
>
> period/pause are signed long, so they become negative instead of
> extremely large when overflowing.
Yeah. For that we have underflow detection as well:

	if (pause < min_pause) {
		...
		break;
	}

So we'll break out of the loop -- but yeah, whether breaking is the
right behavior on underflow is still questionable.

> >> > It looks like io_schedule_timeout() gets called with a very large
> >> > timeout.
> >> > I don't know why "if (unlikely(pause > max_pause)) {" does not help.
>
> Because pause is now negative.

So here io_schedule_timeout() won't be called with a negative pause.
And if io_schedule_timeout() ever calls schedule_timeout() with a
negative timeout, the latter will emit a warning and break out, too:

	if (timeout < 0) {
		printk(KERN_ERR "schedule_timeout: wrong timeout "
			"value %lx\n", timeout);
		dump_stack();
		current->state = TASK_RUNNING;
		goto out;
	}

Thanks,
Fengguang