On 01/22/2015 10:59 AM, Michael Ellerman wrote:
> On Fri, 2015-01-16 at 14:40 +0530, Preeti U Murthy wrote:
>> On 01/16/2015 02:26 PM, Preeti U Murthy wrote:
>>> On 01/16/2015 08:34 AM, Michael Ellerman wrote:
On 01/17/2015 07:09 PM, Preeti U Murthy wrote:
> On 01/16/2015 08:34 AM, Michael Ellerman wrote:
>> On Fri, 2015-01-16 at 13:28 +1300, Alexey Kardashevskiy wrote:
>>> On 01/16/2015 02:22 AM, Preeti U Murthy wrote:

Hi Alexey,

Can you let me know if the following patch fixes the issue for you?
It did for us on one of our machines that we were investigating on.
On Fri, 2015-01-16 at 13:28 +1300, Alexey Kardashevskiy wrote:
> On 01/16/2015 02:22 AM, Preeti U Murthy wrote:
> > Hi Alexey,
> >
> > Can you let me know if the following patch fixes the issue for you ?
> > It did for us on one of our machines that we were investigating on.
>
> This fixes the is
[...] that we have today in the broadcast code on an offline operation. If this
logic fails to move the broadcast hrtimer due to a race condition we have the
following patch to handle it right.

[1] http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html

There is no issue in programming the decrementer as was presumed and stated in
this link.

Signed-off-by: Preeti U Murthy
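For readers outside the tick-broadcast code: the failure mode under discussion is that one CPU arms a broadcast hrtimer on behalf of CPUs whose local timer is stopped in deep idle; if that CPU is taken offline without handing the timer over to another online CPU, the sleepers are never woken, which matches the soft lockups and RCU stalls reported below. The following is a minimal userspace toy model of the handoff idea only — it is not the actual kernel patch, and every name in it (`cpu_online`, `broadcast_owner`, `any_online_but`, `offline_cpu`) is made up for illustration:

```c
#define NR_CPUS 8

/* 1 = online, 0 = offline; all CPUs start online in this toy model. */
static int cpu_online[NR_CPUS] = { 1, 1, 1, 1, 1, 1, 1, 1 };

/* The CPU whose timer currently wakes the deep-idle sleepers. */
static int broadcast_owner = 0;

/* Return any online CPU other than 'except', or -1 if none is left. */
static int any_online_but(int except)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (cpu_online[cpu] && cpu != except)
            return cpu;
    return -1;
}

/*
 * Offline a CPU. If it owns the broadcast timer, move ownership to
 * another online CPU *before* the owner disappears -- a hole in this
 * step is exactly the kind of bug that leaves sleepers unwoken.
 */
static void offline_cpu(int cpu)
{
    if (broadcast_owner == cpu)
        broadcast_owner = any_online_but(cpu);
    cpu_online[cpu] = 0;
}
```

The real code additionally has to close the race Preeti mentions (the owner expiring or re-arming the timer concurrently with the offline path), which this sketch deliberately ignores.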
On Wednesday 07 January 2015 03:07 PM, Alexey Kardashevskiy wrote:
Hi!
"ppc64_cpu --smt=off" produces multiple error on the latest upstream kernel
(sha1 bdec419):
NMI watchdog: BUG: soft lockup - CPU#20 stuck for 23s! [swapper/20:0]
or
INFO: rcu_sched detected stalls on CPUs/tasks: { 2 7 8 9 10 11 12 13 14 15
16 17 18 19 20 21 22 23 2
4 25 26 27 28 29 30 31} (