On Sun, 2012-07-22 at 20:43 +0200, Mike Galbraith wrote:
> On Sat, 2012-07-21 at 09:47 +0200, Mike Galbraith wrote:
> > On Wed, 2012-07-18 at 07:30 +0200, Mike Galbraith wrote:
> > > On Wed, 2012-07-18 at 06:44 +0200, Mike Galbraith wrote:
> > >
> > > > The patch in question for missing Cc. Maybe should be only mutex, but I
> > > > see no reason why IO dependency can only possibly exist for mutexes...
On Sat, 2012-07-21 at 09:47 +0200, Mike Galbraith wrote:
> On Wed, 2012-07-18 at 07:30 +0200, Mike Galbraith wrote:
> > On Wed, 2012-07-18 at 06:44 +0200, Mike Galbraith wrote:
> >
> > > The patch in question for missing Cc. Maybe should be only mutex, but I
> > > see no reason why IO dependency can only possibly exist for mutexes...
On Wed, 2012-07-18 at 07:30 +0200, Mike Galbraith wrote:
> On Wed, 2012-07-18 at 06:44 +0200, Mike Galbraith wrote:
>
> > The patch in question for missing Cc. Maybe should be only mutex, but I
> > see no reason why IO dependency can only possibly exist for mutexes...
>
> Well that was easy, box quickly said "nope, mutex only does NOT cut it".
On Wed, 2012-07-18 at 06:44 +0200, Mike Galbraith wrote:
> The patch in question for missing Cc. Maybe should be only mutex, but I
> see no reason why IO dependency can only possibly exist for mutexes...
Well that was easy, box quickly said "nope, mutex only does NOT cut it".
-Mike
(adds rather important missing Cc)
On Tue, 2012-07-17 at 15:10 +0200, Mike Galbraith wrote:
> On Mon, 2012-07-16 at 12:19 +0200, Thomas Gleixner wrote:
>
> > > @@ -647,8 +648,11 @@ static inline void rt_spin_lock_fastlock
> > >
> > > if (likely(rt_mutex_cmpxchg(lock, NULL, current)))
> > >
On Mon, 2012-07-16 at 12:19 +0200, Thomas Gleixner wrote:
> > @@ -647,8 +648,11 @@ static inline void rt_spin_lock_fastlock
> >
> > if (likely(rt_mutex_cmpxchg(lock, NULL, current)))
> > rt_mutex_deadlock_account_lock(lock, current);
> > - else
> > + else {
> > + if
On Mon, 2012-07-16 at 13:24 +0200, Mike Galbraith wrote:
> Box disagrees.
>
> Waiting for device /dev/cciss/c0d0p6 to appear: ok
> fsck from util-linux 2.19.1
> [/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/cciss/c0d0p6
> SLES11: clean, 141001/305824 files, 1053733/1222940 blocks
> fsck succeeded
On Mon, 2012-07-16 at 12:19 +0200, Thomas Gleixner wrote:
> On Mon, 16 Jul 2012, Mike Galbraith wrote:
> > ---
> > block/blk-core.c |1 +
> > kernel/rtmutex.c | 11 +--
> > 2 files changed, 10 insertions(+), 2 deletions(-)
> >
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
(hohum, gmx server went down, back to random address mode;)
On Mon, 2012-07-16 at 12:19 +0200, Thomas Gleixner wrote:
>
> That should do the trick.
I'll put it to work on 64 core box, and see if it survives. It better.
-Mike
On Mon, 16 Jul 2012, Mike Galbraith wrote:
> Hm, wonder how bad this sucks.. and if I should go hide under a big
> sturdy rock after I poke xmit :)
>
> ---
> block/blk-core.c |1 +
> kernel/rtmutex.c | 11 +--
> 2 files changed, 10 insertions(+), 2 deletions(-)
>
> > --- a/block/blk-core.c
On Mon, 2012-07-16 at 11:59 +0200, Thomas Gleixner wrote:
> On Mon, 16 Jul 2012, Mike Galbraith wrote:
> > On Mon, 2012-07-16 at 10:59 +0200, Thomas Gleixner wrote:
> > > On Mon, 16 Jul 2012, Mike Galbraith wrote:
> > > > On Sun, 2012-07-15 at 11:14 +0200, Mike Galbraith wrote:
> > > > > On Sun,
Hm, wonder how bad this sucks.. and if I should go hide under a big
sturdy rock after I poke xmit :)
---
block/blk-core.c |1 +
kernel/rtmutex.c | 11 +--
2 files changed, 10 insertions(+), 2 deletions(-)
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2782,6 +2782,7 @@ void blk_
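The hunk above is cut off, but the shape of the change under discussion can be modeled in userspace: take the uncontended fastpath as before, and on the contended path flush the task's plugged I/O before it would go to sleep on the lock. Everything below (types, names, the trivial lock) is my illustration of that shape, not the actual patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace model of the fix's shape (illustrative, not the patch):
 * the fastpath leaves the plug alone; the contended slowpath flushes
 * the task's plugged I/O before sleeping on the lock, so no queued
 * request can be held hostage by a blocked task. */

struct task { int plugged_io; };         /* requests parked on the plug */
struct tlock { struct task *owner; };

static void flush_task_plug(struct task *t)
{
	t->plugged_io = 0;               /* submit everything queued */
}

static void lock_acquire(struct tlock *l, struct task *t)
{
	if (l->owner == NULL) {          /* models the rt_mutex_cmpxchg fastpath */
		l->owner = t;
		return;
	}
	flush_task_plug(t);              /* flush before blocking in the slowpath */
	/* ... sleep until the owner releases, then acquire (elided) ... */
}
```

The point of the ordering is visible in the model: the flush happens only when the task is actually about to block, so the uncontended path pays nothing.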
On Mon, 16 Jul 2012, Mike Galbraith wrote:
> On Mon, 2012-07-16 at 10:59 +0200, Thomas Gleixner wrote:
> > On Mon, 16 Jul 2012, Mike Galbraith wrote:
> > > On Sun, 2012-07-15 at 11:14 +0200, Mike Galbraith wrote:
> > > > On Sun, 2012-07-15 at 10:59 +0200, Thomas Gleixner wrote:
> > >
> > > > >
On Mon, 2012-07-16 at 10:59 +0200, Thomas Gleixner wrote:
> On Mon, 16 Jul 2012, Mike Galbraith wrote:
> > On Sun, 2012-07-15 at 11:14 +0200, Mike Galbraith wrote:
> > > On Sun, 2012-07-15 at 10:59 +0200, Thomas Gleixner wrote:
> >
> > > > Can you figure out on which lock the stuck thread which did not unplug
> > > > due to tsk_is_pi_blocked was blocked?
On Mon, 16 Jul 2012, Mike Galbraith wrote:
> On Sun, 2012-07-15 at 11:14 +0200, Mike Galbraith wrote:
> > On Sun, 2012-07-15 at 10:59 +0200, Thomas Gleixner wrote:
>
> > > Can you figure out on which lock the stuck thread which did not unplug
> > > due to tsk_is_pi_blocked was blocked?
> >
> >
On Sun, 2012-07-15 at 11:14 +0200, Mike Galbraith wrote:
> On Sun, 2012-07-15 at 10:59 +0200, Thomas Gleixner wrote:
> > Can you figure out on which lock the stuck thread which did not unplug
> > due to tsk_is_pi_blocked was blocked?
>
> I'll take a peek.
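The question above turns on a schedule-time gate: on -rt the scheduler skips the plug flush for a task that is blocked on a PI lock, which is how plugged requests can sit unsubmitted while their owner sleeps. A minimal sketch of that decision (function and parameter names are mine, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the gate discussed above: at schedule time the plug is
 * flushed only when the task has pending plugged I/O AND is not
 * blocked on a PI lock (the tsk_is_pi_blocked case).  A task that
 * blocks on an rtmutex therefore keeps its plugged requests queued. */
static bool should_flush_plug_on_schedule(bool pi_blocked, int plugged_requests)
{
	return !pi_blocked && plugged_requests > 0;
}
```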
Sorry for late reply, took a half day
On Sun, 15 Jul 2012, Mike Galbraith wrote:
> On Sun, 2012-07-15 at 10:59 +0200, Thomas Gleixner wrote:
> > On Fri, 13 Jul 2012, Jan Kara wrote:
> > > On Fri 13-07-12 16:25:05, Thomas Gleixner wrote:
> > > > So the patch below should allow the unplug to take place when blocked
> > > > on mutexes etc.
On Sun, 2012-07-15 at 10:59 +0200, Thomas Gleixner wrote:
> On Fri, 13 Jul 2012, Jan Kara wrote:
> > On Fri 13-07-12 16:25:05, Thomas Gleixner wrote:
> > > So the patch below should allow the unplug to take place when blocked
> > > on mutexes etc.
> > Thanks for the patch! Mike will give it some testing.
On Fri, 13 Jul 2012, Jan Kara wrote:
> On Fri 13-07-12 16:25:05, Thomas Gleixner wrote:
> > So the patch below should allow the unplug to take place when blocked
> > on mutexes etc.
> Thanks for the patch! Mike will give it some testing.
I just found out that this patch will explode nicely when
On Sat, 2012-07-14 at 13:00 +0200, Mike Galbraith wrote:
> I have your patch burning on my 64 core rt box. If it survives the
> weekend, you should be able to replace my jbd hack with your fix.
As expected, box is still going strong. It would have died by now if
the problem were still lurking.
On Sat, 2012-07-14 at 13:00 +0200, Mike Galbraith wrote:
> I have your patch burning on my 64 core rt box. If it survives the
> weekend, you should be able to replace my jbd hack with your fix..
>
> Tested-by: Mike Galbraith
>
> ..so here, one each chop in advance. It wouldn't dare work ;-)
I have your patch burning on my 64 core rt box. If it survives the
weekend, you should be able to replace my jbd hack with your fix..
Tested-by: Mike Galbraith
..so here, one each chop in advance. It wouldn't dare work ;-)
On Fri, 2012-07-13 at 16:25 +0200, Thomas Gleixner wrote:
> On Fri, 13 Jul 2012, Jan Kara wrote:
On Fri 13-07-12 16:25:05, Thomas Gleixner wrote:
> On Fri, 13 Jul 2012, Jan Kara wrote:
> > On Thu 12-07-12 16:15:29, Thomas Gleixner wrote:
> > > Ah, I didn't know this. Thanks for the hint. So in the kdump I have I can
> > > see requests queued in tsk->plug despite the process is sleeping in
> > > TASK_UNINTERRUPTIBLE state.
On Fri, 13 Jul 2012, Jan Kara wrote:
> On Thu 12-07-12 16:15:29, Thomas Gleixner wrote:
> > > Ah, I didn't know this. Thanks for the hint. So in the kdump I have I can
> > > see requests queued in tsk->plug despite the process is sleeping in
> > > TASK_UNINTERRUPTIBLE state. So the only w
On Thu 12-07-12 00:12:44, Thomas Gleixner wrote:
> On Wed, 11 Jul 2012, Jan Kara wrote:
> > On Wed 11-07-12 12:05:51, Jeff Moyer wrote:
> > > This eventually ends in a call to blk_run_queue_async(q) after
> > > submitting the I/O from the plug list. Right? So is the question
> > > really why doesn't the kblockd workqueue get scheduled?
On Thu 12-07-12 16:15:29, Thomas Gleixner wrote:
> On Wed, 11 Jul 2012, Jan Kara wrote:
> > On Wed 11-07-12 12:05:51, Jeff Moyer wrote:
> > > Jan Kara writes:
> > >
> > > > Hello,
> > > >
> > > > we've recently hit a deadlock in our QA runs which is caused by the
> > > > per-process plugging code. The problem is as follows:
On Wed, 11 Jul 2012, Jan Kara wrote:
> On Wed 11-07-12 12:05:51, Jeff Moyer wrote:
> > Jan Kara writes:
> >
> > > Hello,
> > >
> > > we've recently hit a deadlock in our QA runs which is caused by the
> > > per-process plugging code. The problem is as follows:
> > > process A
On Thu, 2012-07-12 at 00:12 +0200, Thomas Gleixner wrote:
> On Wed, 11 Jul 2012, Jan Kara wrote:
> > On Wed 11-07-12 12:05:51, Jeff Moyer wrote:
> > > This eventually ends in a call to blk_run_queue_async(q) after
> > > submitting the I/O from the plug list. Right? So is the question
> > > really why doesn't the kblockd workqueue get scheduled?
On Wed, 2012-07-11 at 22:16 +0200, Jan Kara wrote:
> On Wed 11-07-12 12:05:51, Jeff Moyer wrote:
> > Jan Kara writes:
> >
> > > Hello,
> > >
> > > we've recently hit a deadlock in our QA runs which is caused by the
> > > per-process plugging code. The problem is as follows:
> > > process A
On Wed, 11 Jul 2012, Jan Kara wrote:
> On Wed 11-07-12 12:05:51, Jeff Moyer wrote:
> > This eventually ends in a call to blk_run_queue_async(q) after
> > submitting the I/O from the plug list. Right? So is the question
> > really why doesn't the kblockd workqueue get scheduled?
> Ah, I didn't know this. Thanks for the hint.
On Wed 11-07-12 12:05:51, Jeff Moyer wrote:
> Jan Kara writes:
>
> > Hello,
> >
> > we've recently hit a deadlock in our QA runs which is caused by the
> > per-process plugging code. The problem is as follows:
> > process A process B (kjournald)
> > generic_file_aio_write()
Jan Kara writes:
> Hello,
>
> we've recently hit a deadlock in our QA runs which is caused by the
> per-process plugging code. The problem is as follows:
> process A process B (kjournald)
> generic_file_aio_write()
> blk_start_plug(&plug);
> ...
>
Hello,
we've recently hit a deadlock in our QA runs which is caused by the
per-process plugging code. The problem is as follows:
process A process B (kjournald)
generic_file_aio_write()
blk_start_plug(&plug);
...
somewhere in here we allocate
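The dependency cycle this opening report describes can be sketched as a tiny userspace model (all names here are illustrative, not the kernel's): process A parks requests on its per-task plug list, then blocks on a lock owned by kjournald, while kjournald in turn needs one of those parked requests to complete before it can release the lock.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the reported deadlock (illustrative names only):
 * process A queues I/O on its per-task plug list, then blocks on a
 * lock held by kjournald (B); B cannot release the lock until a
 * request still parked in A's plug completes.  If A sleeps without
 * flushing its plug, that request is never submitted and nothing
 * makes progress. */

struct plug { int pending; };            /* queued, not yet submitted */

static void flush_plug(struct plug *p)
{
	p->pending = 0;                  /* requests reach the device */
}

/* Returns true if the system makes progress, false on deadlock:
 * B only releases the lock once A's plugged I/O has completed. */
static bool block_on_journal_lock(struct plug *p, bool flush_before_sleep)
{
	if (flush_before_sleep)
		flush_plug(p);
	return p->pending == 0;
}
```

Flushing before sleeping is what the rest of the thread converges on; without it the model, like the real box, hangs.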