On 2013/12/2 18:31, William Dauchy wrote:
> Hi Li,
>
> On Mon, Nov 25, 2013 at 2:20 AM, Li Zefan wrote:
>> I'll do this after the patch hits mainline, if Tejun doesn't plan to.
>
> Do you have some news about it?
>
Tejun has already done the backport. :)
It has been included in 3.10.22.
On 2013/11/23 6:54, William Dauchy wrote:
> Hi Tejun,
>
> On Fri, Nov 22, 2013 at 11:18 PM, Tejun Heo wrote:
>> Just applied to cgroup/for-3.13-fixes w/ stable cc'd. Will push to
>> Linus next week.
>
> Thank you for your quick reply. Do you also have a backport for
> v3.10.x already available?
On Fri, Nov 22, 2013 at 09:59:37PM +0100, William Dauchy wrote:
> Hugh, Tejun,
>
> Do we have some news about this patch? I'm also hitting this bug on a 3.10.x kernel.
Just applied to cgroup/for-3.13-fixes w/ stable cc'd. Will push to
Linus next week.
Thanks.
--
tejun
On Mon, Nov 18, 2013 at 3:17 AM, Hugh Dickins wrote:
> Sorry for the delay: I was on the point of reporting success last
> night, when I tried a debug kernel: and that didn't work so well
> (got spinlock bad magic report in pwq_adjust_max_active(), and
> tests wouldn't run at all).
>
> Even the no
On Tue, Nov 19, 2013 at 10:55:18AM +0800, Li Zefan wrote:
> Thanks Tejun and Hugh. Sorry for my late entry in getting around to
> testing this fix. On the surface it sounds correct however I'd like to
> test this on top of 3.10.* since that is what we'll likely be running.
> I've tried to apply Hugh's patch above on top of 3.10.19 but it
> appears there
On Fri, 15 Nov 2013, Tejun Heo wrote:
> Hello,
>
> Shawn, Hugh, can you please verify whether the attached patch makes
> the deadlock go away?
Thanks a lot, Tejun: report below.
>
> Thanks.
>
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index e0839bc..dc9dc06 100644
> --- a/kernel/cgroup.c
Hello,
Shawn, Hugh, can you please verify whether the attached patch makes
the deadlock go away?
Thanks.
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index e0839bc..dc9dc06 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -90,6 +90,14 @@ static DEFINE_MUTEX(cgroup_mutex);
static DEFINE_M
Hello,
On Thu, Nov 14, 2013 at 04:56:49PM -0600, Shawn Bohrer wrote:
> After running both concurrently on 40 machines for about 12 hours I've
> managed to reproduce the issue at least once, possibly more. One
> machine looked identical to this reported issue. It has a bunch of
> stuck cgroup_fre
Cc more people
On 2013/11/12 6:06, Shawn Bohrer wrote:
Hello,
This morning I had a machine running 3.10.16 go unresponsive but
before we killed it we were able to get the information below. I'm
not an expert here but it looks like most of the tasks below are
blocking waiting on the cgroup_mutex. You can see that the
resource_alloca:16502 task is holding the cgroup_mutex.