On Fri, Jun 25, 2021 at 12:16:52PM +0200, Peter Zijlstra wrote:
> You mean: CONFIG_PREEMPTION=n, what about CONFIG_PREEMPT_COUNT?
>
> Because if both are =n, then I don't see how that warning could trigger.
> in_atomic_preempt_off() would then result in preempt_count() == 0, and
> per the print above
> > > A PowerPC KVM guest gets the following BUG message when booting
> > > linux-next-20210623:
> > >
> > > smp: Bringing up secondary CPUs ...
> > > BUG: scheduling while atomic: swapper/1/0/0x
>
> 'funny', your preempt_count is actually too low.
On 25/06/21 09:28, Peter Zijlstra wrote:
> On Fri, Jun 25, 2021 at 11:16:08AM +0530, Srikar Dronamraju wrote:
>> Bharata,
>>
>> I think the regression is due to Commit f1a0a376ca0c ("sched/core:
>> Initialize the idle task with preemption disabled")
>
> So that extra preempt_disable() that got removed
> > > linux-next-20210623:
> > >
> > > smp: Bringing up secondary CPUs ...
> > > BUG: scheduling while atomic: swapper/1/0/0x
>
> 'funny', your preempt_count is actually too low. The check here is for
> preempt_count() == DISABLE_OFFSET (aka. 1 when PREEMPT_COUNT=y)
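For context on the check being discussed: schedule() is expected to run with
exactly the one preempt-disable level that __schedule() itself takes, and with
CONFIG_PREEMPT_COUNT=n the count reads as 0, so the comparison changes meaning.
A minimal standalone sketch of that logic; the names mirror kernel/sched/core.c
and <linux/preempt.h>, but this is an illustration, not the kernel source:

    /* Standalone sketch of the "scheduling while atomic" check. */
    #include <stdio.h>
    #include <stdbool.h>

    static unsigned int preempt_count_val;   /* per-CPU in the real kernel */

    /* DISABLE_OFFSET is 1 when CONFIG_PREEMPT_COUNT=y and 0 when the
     * count is compiled out. */
    #define PREEMPT_DISABLE_OFFSET 1U

    static unsigned int preempt_count(void)
    {
            return preempt_count_val;
    }

    /* schedule() must be entered with exactly one preempt-disable level
     * (the one __schedule() itself takes); any other value means we are
     * scheduling from atomic context. */
    static bool in_atomic_preempt_off(void)
    {
            return preempt_count() != PREEMPT_DISABLE_OFFSET;
    }

    int main(void)
    {
            preempt_count_val = 0;   /* the case in the report: too low */
            if (in_atomic_preempt_off())
                    printf("BUG: scheduling while atomic: preempt_count=%u\n",
                           preempt_count_val);
            return 0;
    }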
On Fri, Jun 25, 2021 at 11:16:08AM +0530, Srikar Dronamraju wrote:
> * Bharata B Rao [2021-06-24 21:25:09]:
>
> > A PowerPC KVM guest gets the following BUG message when booting
> > linux-next-20210623:
> >
> > smp: Bringing up secondary CPUs ...
> > BUG: scheduling while atomic: swapper/1/0/0x
* Bharata B Rao [2021-06-24 21:25:09]:
> A PowerPC KVM guest gets the following BUG message when booting
> linux-next-20210623:
>
> smp: Bringing up secondary CPUs ...
> BUG: scheduling while atomic: swapper/1/0/0x
> no locks held by swapper/1/0.
> Modules linked in:
Hi,
A PowerPC KVM guest gets the following BUG message when booting
linux-next-20210623:
smp: Bringing up secondary CPUs ...
BUG: scheduling while atomic: swapper/1/0/0x
no locks held by swapper/1/0.
Modules linked in:
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.13.0-rc7-next-20210623
Hi Scott,
Below are the logs with kallsyms enabled.
Any suggestions?
sh-3.1# I am in cpld_irq_handler
end cpld_irq_handler
BUG: scheduling while atomic: IRQ-20/0x0fff/108
Call Trace:
[C3B13EC0] [C0007DD0] show_stack+0x3c/0x194 (unreliable)
[C3B13EF0] [C0017F70] __schedule_bug+0x34
On 10/05/2011 06:24 AM, smitha.va...@wipro.com wrote:
> Hi Scott,
>
> When my ISR gets executed I get the BUG below. Could you let me know
> what I am doing wrong in the ISR?
>
>
> BUG: scheduling while atomic: IRQ-20/0x0fff/108
> Call Trace:
> [C3AEFEC0] [C0007CCC
Hi Scott,
When my ISR gets executed I get the BUG below. Could you let me know what
I am doing wrong in the ISR?
BUG: scheduling while atomic: IRQ-20/0x0fff/108
Call Trace:
[C3AEFEC0] [C0007CCC] (unreliable)
[C3AEFEF0] [C0017F10]
[C3AEFF00] [C0268818]
[C3AEFF50] [C0017F44]
[C3AEFF60] [C0018044
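The usual cause of this BUG inside an interrupt handler is calling something
that can sleep (mutex_lock(), msleep(), kmalloc(GFP_KERNEL), ...) from
hard-IRQ context, which is atomic. A hypothetical handler sketch illustrating
the pattern; cpld_irq_handler is just a stand-in name taken from the log
above, not the actual driver code from this report:

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(cpld_lock);

    static irqreturn_t cpld_irq_handler(int irq, void *dev_id)
    {
            /* WRONG in hard-IRQ context: anything that may sleep, e.g.
             * mutex_lock(), msleep(), kmalloc(GFP_KERNEL). The scheduler
             * notices the atomic context and prints the BUG above. */

            /* OK: spinlocks never sleep. */
            spin_lock(&cpld_lock);
            /* ... acknowledge the device, read status registers ... */
            spin_unlock(&cpld_lock);

            /* Work that needs to sleep belongs in a threaded handler
             * (request_threaded_irq()) or a workqueue, not here. */
            return IRQ_HANDLED;
    }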
I would say it is a general problem with CONFIG_PREEMPT_VOLUNTARY,
not only with Freescale...
On Thursday, 12.05.2011 at 11:30 -0400, Matthew L. Creech wrote:
> On Thu, May 12, 2011 at 4:37 AM, wrote:
> > Hi Matthew,
> >
> > You can get such an oops with SPI as well.
> > For such a problem it helps to
On Thu, May 12, 2011 at 4:37 AM, wrote:
> Hi Mattheew,
>
> You can get such an oops with SPI as well.
> For such a problem it helps to compile your kernel with a different
> preemption model:
> - preempt
> - standard
> - !!! but not voluntary preemption !!!
Thanks Sergej, indeed I'm currently using CONFIG_PRE
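Sergej's list maps onto the kernel's preemption-model choice in
kernel/Kconfig.preempt. As a sketch, switching a 2.6-era .config away from
voluntary preemption looks like this; the option names are from mainline of
that era, and which model is right depends on the workload:

    # Preemption model (from kernel/Kconfig.preempt)
    # CONFIG_PREEMPT_NONE is not set          <- the "standard" server model
    # CONFIG_PREEMPT_VOLUNTARY is not set     <- the model being warned about
    CONFIG_PREEMPT=y                          # full kernel preemption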
On 11.05.2011 at 17:37 -0400, Matthew L. Creech wrote:
> Hi,
>
> My MPC8313-based board, running a 2.6.37 kernel, is occasionally
> hitting this bug while doing RNDIS-based communication:
>
> BUG: scheduling while atomic: lighttpd/1145/0x1200
> Call Trace:
> [c6a8b91
Hi,
My MPC8313-based board, running a 2.6.37 kernel, is occasionally
hitting this bug while doing RNDIS-based communication:
BUG: scheduling while atomic: lighttpd/1145/0x1200
Call Trace:
[c6a8b910] [c00086c0] show_stack+0x7c/0x194 (unreliable)
[c6a8b950] [c0019e28] __schedule_bug+0x54/0x68
On Mon, 2010-06-28 at 09:49 -0700, Paul E. McKenney wrote:
> And it does appear to be reproducible perhaps 50% of boots running
> CONFIG_PREEMPT on kernel-ml8. I have not yet seen it on any other
> system.
> Do you have a patch to instrument the interrupts so as to get the info
> you need, or shou
On Wed, Jun 09, 2010 at 06:08:54PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 10, 2010 at 09:20:08AM +1000, Benjamin Herrenschmidt wrote:
> > On Wed, 2010-06-09 at 14:52 -0700, Paul E. McKenney wrote:
> > > Hello!
> > >
> > > I get the following during boot on a 16 CPU Power box. Thoughts?
> >
> > UDP-Lite hash table entries: 2048 (order: 6, 262144 bytes)
> > NET: Registered protocol family 1
> > RPC: Registered udp transport module.
> > RPC: Registered tcp transport module.
> > RPC: Registered tcp NFSv4.1 backchannel transport module.
> > Trying to unpack rootfs image as initramfs...
> NET: Registered protocol family 1
> RPC: Registered udp transport module.
> RPC: Registered tcp transport module.
> RPC: Registered tcp NFSv4.1 backchannel transport module.
> Trying to unpack rootfs image as initramfs...
> Freeing initrd memory: 2455k freed
> BUG: scheduling while atomic: swapper/0/0x0002
>
NET: Registered protocol family 1
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 2455k freed
BUG: scheduling while atomic: swapper/0/0x0002
no locks held by swapper/0.
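The trailing hex number in these reports is the raw preempt_count. Its low
bits encode nesting levels, so 0x0002 above decodes to a preempt-disable
depth of two with no hardirq or softirq nesting. A small standalone decoder,
assuming the 2.6-era field layout from include/linux/hardirq.h (the hardirq
width changed in later kernels, so treat the masks as illustrative):

    #include <stdio.h>

    /* Field layout mirrors 2.6-era include/linux/hardirq.h. */
    #define PREEMPT_MASK  0x000000ffU   /* preempt_disable() nesting */
    #define SOFTIRQ_MASK  0x0000ff00U   /* softirq nesting           */
    #define HARDIRQ_MASK  0x0fff0000U   /* hardirq nesting           */

    int main(void)
    {
            unsigned int pc = 0x0002;   /* value from the report above */

            printf("preempt=%u softirq=%u hardirq=%u\n",
                   pc & PREEMPT_MASK,
                   (pc & SOFTIRQ_MASK) >> 8,
                   (pc & HARDIRQ_MASK) >> 16);
            return 0;
    }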
> LR = 0xff51fcc
> BUG: scheduling while atomic: stress/0x0002/739, CPU#0
> Call Trace:
> [c04fbb80] [c0008650] show_stack+0x50/0x190 (unreliable)
> [c04fbbb0] [c0016fc0] __schedule_bug+0x38/0x48
> [c04fbbc0] [c01d4f7c] __schedule+0x3dc/0x450
> [c04fbbf0] [c01d56c8]
Hi,
I am running linux-2.6.22.1-rt8 on mpc5200, and when running stress
# ./stress --cpu 8 --io 4 --vm 2 --vm-bytes 2M --timeout 300s
I get the following stack. But this does not happen with a non-rt kernel.
LR = 0xff51fcc
BUG: scheduling while atomic: stress/0x0002/739, CPU#0
Call Trace