On Monday 05 March 2012 05:03:44 Duc Dang wrote:
> This patch includes:
>
> Configure EMAC PHY clock source (clock from PHY or internal clock).
>
> Do not advertise PHY half duplex capability as APM821XX EMAC does not
> support half duplex mode.
>
> Add changes to support configuring jumbo frame for APM821XX EMAC.
This patch includes:
Configure EMAC PHY clock source (clock from PHY or internal clock).
Do not advertise PHY half duplex capability as APM821XX EMAC does not
support half duplex mode.
Add changes to support configuring jumbo frame for APM821XX EMAC.
Signed-off-by: Duc Dang
---
v2:
Fix
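As a rough illustration of the half duplex part of the patch above: with
phylib-style feature bits, masking the modes could look like the sketch
below. The helper name and hook point are assumptions for illustration,
not taken from the actual ibm_newemac driver.

#include <linux/ethtool.h>
#include <linux/phy.h>

/* Hypothetical helper: strip the half duplex modes so they are never
 * advertised, since the APM821XX EMAC cannot operate in half duplex. */
static void emac_mask_half_duplex(struct phy_device *phydev)
{
        phydev->supported &= ~(SUPPORTED_10baseT_Half |
                               SUPPORTED_100baseT_Half |
                               SUPPORTED_1000baseT_Half);
        phydev->advertising &= ~(ADVERTISED_10baseT_Half |
                                 ADVERTISED_100baseT_Half |
                                 ADVERTISED_1000baseT_Half);
}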
This compatible value will be used to distinguish some special features of
APM821XX EMAC: no half duplex mode support and jumbo frame configuration.
Signed-off-by: Duc Dang
---
v2:
No change since v1 patch set. Added for completeness.
arch/powerpc/boot/dts/bluestone.dts | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
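On the driver side, the new compatible value would typically be tested
with the usual OF helper; a sketch, where the compatible string
"apm,emac-apm821xx" is an illustrative assumption:

#include <linux/of.h>

/* Sketch: key the APM821XX quirks (no half duplex, configurable
 * jumbo frames) off the device tree compatible value. */
static bool emac_is_apm821xx(struct device_node *np)
{
        return of_device_is_compatible(np, "apm,emac-apm821xx");
}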
On Sun, Mar 4, 2012 at 7:54 PM, Olof Johansson wrote:
> On Thu, Feb 23, 2012 at 08:41:39AM +1100, Benjamin Herrenschmidt wrote:
>> On Wed, 2012-02-22 at 11:19 -0700, Bjorn Helgaas wrote:
>> > int maple_pci_get_legacy_ide_irq(struct pci_dev *pdev, int channel)
> > diff --git a/arch/powerpc/platforms/pasemi/pci.c
On Thu, Feb 23, 2012 at 08:41:39AM +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2012-02-22 at 11:19 -0700, Bjorn Helgaas wrote:
> > int maple_pci_get_legacy_ide_irq(struct pci_dev *pdev, int channel)
> > diff --git a/arch/powerpc/platforms/pasemi/pci.c
> > b/arch/powerpc/platforms/pasemi/pci.c
>
On Sat, 2012-03-03 at 08:52 +1100, Benjamin Herrenschmidt wrote:
> Or give me a chance to dig :-) I'll have a look next week.
This is indeed what Bjorn suspected on IRC, and this patch fixes it
(Bjorn, please fold it into the original offending patch):
Cheers,
Ben.
diff --git a/arch/powerpc/kernel/p
Hi, Paul,
On Fri, 2012-03-02 at 09:30 -0500, Paul Gortmaker wrote:
> > Signed-off-by: Liu Gang
> > Signed-off-by: Shaohui Xie
> > Signed-off-by: Paul Gortmaker
>
> Hi Liu,
>
> You can't just go adding a "Signed-off-by:" line for me to a patch that
> I've never seen before. Perhaps you meant
Also use local_paca instead of get_paca() to avoid getting into
the smp_processor_id() debugging code from the debugger
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/xmon/xmon.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
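For reference, the distinction the changelog alludes to: on
CONFIG_DEBUG_PREEMPT kernels get_paca() goes through
debug_smp_processor_id(), while local_paca is the bare r13-backed
pointer with no debug hooks. A sketch of the safe form from debugger
context (the function name is made up):

#include <asm/paca.h>

/* Sketch: xmon may run with preemption state that trips the
 * smp_processor_id() debug code, so read the PACA directly. */
static int xmon_this_cpu(void)
{
        return local_paca->paca_index;  /* no debug_smp_processor_id() */
}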
We were using CR0.EQ after EXCEPTION_COMMON, hoping it still
contained whether we came from userspace or kernel space.
However, under some circumstances, EXCEPTION_COMMON will
call C code and clobber non-volatile registers, so we really
need to re-load the previous MSR from the stack frame and
re-test it.
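In C terms the fix boils down to deciding user-vs-kernel from the saved
MSR rather than from a condition register bit that C code may have
clobbered. The real change is in assembly, so this is only a sketch:

#include <linux/types.h>
#include <asm/ptrace.h>
#include <asm/reg.h>

/* Sketch: CR0 may be clobbered once EXCEPTION_COMMON has called C
 * code, but the MSR saved in the exception frame is stable, so test
 * MSR_PR there to tell userspace from kernel. */
static bool trap_from_user(struct pt_regs *regs)
{
        return (regs->msr & MSR_PR) != 0;
}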
On 64-bit, the mfmsr instruction can be quite slow, slower
than loading a field from the cache-hot PACA, which happens
to already contain the value we want in most cases.
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/include/asm/exception-64s.h | 2 +-
arch/powerpc/include/asm/hw_irq.h
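The idea in C, assuming the PACA's kernel_msr field (as added in this
series) holds the standard kernel MSR value; the actual patch touches
the assembly macros, so this is only a sketch:

#include <asm/paca.h>

/* Sketch: mfmsr can be slow on 64-bit parts, while the PACA is
 * almost always cache hot and already holds the MSR value we want
 * on kernel entry, so prefer the load. */
static unsigned long kernel_entry_msr(void)
{
        return local_paca->kernel_msr;  /* instead of mfmsr() */
}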
When running under a hypervisor that supports stolen time accounting,
we may call C code from the macro EXCEPTION_PROLOG_COMMON in the
exception entry path, which clobbers CR0.
However, the FPU and vector traps rely on CR0 indicating whether we
are coming from userspace or kernel to decide what to
Other architectures such as x86 and ARM have been growing
new support for features like retrying page faults after
dropping the mm semaphore to break contention, or being
able to return from a stuck page fault when a SIGKILL is
pending.
This refactors our implementation of do_page_fault() to
move
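The retry/SIGKILL pattern being brought over from those architectures
reads roughly as below. This is a condensed sketch of the generic mm
interface of that era (mmap_sem, handle_mm_fault() with flags), not the
powerpc diff itself:

#include <linux/mm.h>
#include <linux/sched.h>

/* Sketch: allow handle_mm_fault() to drop mmap_sem and retry, and
 * give up early if the task was SIGKILLed while blocked. */
static int fault_with_retry(struct mm_struct *mm,
                            struct vm_area_struct *vma,
                            unsigned long address)
{
        unsigned int flags = FAULT_FLAG_ALLOW_RETRY;
        int fault;

retry:
        fault = handle_mm_fault(mm, vma, address, flags);

        if (fault & VM_FAULT_RETRY) {
                /* mmap_sem has already been dropped for us */
                if (fatal_signal_pending(current))
                        return -EINTR;  /* SIGKILL: don't bother retrying */
                /* Retake the semaphore and retry exactly once (a real
                 * implementation would re-find the VMA here). */
                flags &= ~FAULT_FLAG_ALLOW_RETRY;
                down_read(&mm->mmap_sem);
                goto retry;
        }
        return 0;
}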
This removes the various bits of assembly in the kernel entry,
exception handling and SLB management code that were specific
to running under the legacy iSeries hypervisor which is no
longer supported.
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/include/asm/exception-64s.h | 15
The perfmon interrupt is the sole user of a special variant of the
interrupt prolog which differs from the one used by external and timer
interrupts in that it saves the non-volatile GPRs and doesn't turn the
runlatch on.
The former is unnecessary and the latter is arguably incorrect, so
let's clean that up.
The current implementation of lazy interrupts handling has some
issues that this tries to address.
We don't do the various workarounds we need when re-enabling
interrupts in some cases, such as when returning from an interrupt,
and thus we may still lose, or get delayed, decrementer or doorbell
interrupts.
We unconditionally hard enable interrupts. This is unnecessary as
syscalls are expected to always be called with interrupts enabled.
While at it, we add a WARN_ON if that is not the case and
CONFIG_TRACE_IRQFLAGS is enabled (we don't want to add overhead
to the fast path when this is not set though).
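The guard described above might look like this at syscall entry (a
sketch; the helper name is made up):

#include <linux/kernel.h>
#include <asm/ptrace.h>
#include <asm/reg.h>

/* Sketch: syscalls must arrive with MSR_EE set. Check it only when
 * CONFIG_TRACE_IRQFLAGS is on, keeping the fast path untouched. */
static inline void syscall_irq_check(struct pt_regs *regs)
{
#ifdef CONFIG_TRACE_IRQFLAGS
        WARN_ON(!(regs->msr & MSR_EE));
#endif
}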
This moves the inlines into system.h and changes the runlatch
code to use the thread local flags (non-atomic) rather than
the TIF flags (atomic) to keep track of the latch state.
The code to turn it back on in an asynchronous interrupt is
now simplified and partially inlined.
Signed-off-by: Benjamin Herrenschmidt
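The point of the local flags: TIF_* bits share a word that other CPUs
may update, so they need atomic ops, while the thread's local_flags are
only ever touched by the owning task. A sketch along the lines of the
new runlatch tracking (_TLF_RUNLATCH and the CTRL SPR write follow the
patch; the helper shape is an assumption):

#include <asm/reg.h>
#include <asm/thread_info.h>

/* Sketch: track the runlatch in non-atomic, task-private local
 * flags instead of the atomic TIF flags. */
static inline void runlatch_on_sketch(void)
{
        struct thread_info *ti = current_thread_info();

        if (!(ti->local_flags & _TLF_RUNLATCH)) {
                ti->local_flags |= _TLF_RUNLATCH;  /* plain RMW is fine */
                mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) | CTRL_RUNLATCH);
        }
}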
We currently turn interrupts back to their previous state before
calling do_page_fault(). This can be annoying when debugging as
a bad fault will potentially have lost some processor state before
getting into the debugger.
We also end up calling some generic code with interrupts enabled
such as no
Some exceptions would unconditionally disable interrupts on entry,
which is fine, but calling lockdep every time not only adds more
overhead than strictly needed, but also means we get quite a few
"redudant" disable logged, which makes it hard to spot the really
bad ones.
So instead, split the macro
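Shape of such a split, sketched in illustrative C rather than the
actual entry macros: one raw variant for paths where lockdep already
knows interrupts are off, and one traced variant for the genuine
transitions.

#include <linux/irqflags.h>
#include <asm/hw_irq.h>

/* Sketch: keep lockdep out of the paths where the disable is already
 * known, so the trace only shows the transitions that matter. */
static inline void irq_disable_raw(void)
{
        __hard_irq_disable();           /* no lockdep call */
}

static inline void irq_disable_traced(void)
{
        __hard_irq_disable();
        trace_hardirqs_off();           /* record the real transition */
}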
This series goes through various bits and pieces of our exception
and interrupt handling, fixing bugs, cleaning up code, adding
missing functionality, etc... The last patch of the series is
a reworked variant of my lazy irq rework.
This needs a good review as it touches pretty nasty bits of code
a
If we get a floating point, altivec or vsx unavailable interrupt in
the kernel, we trigger a kernel error. There is no point preserving
the interrupt state, in fact, that can even make debugging harder
as the processor state might change (we may even preempt) between
taking the exception and landing in a
From: Paul Gortmaker
Date: Mon, 27 Feb 2012 07:36:29 -0500
> Factor out the existing allocation and free operations
> so that they can be used individually.
>
> This is to improve code readability, and also to prepare for
> possible future changes like better error recovery and more
> dynami
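Generic shape of that refactoring, with hypothetical names (the quoted
patch applies it to its own driver structures):

#include <linux/slab.h>

struct widget {
        int id;
};

/* Hypothetical example: separate alloc and free helpers so callers
 * and future error-recovery paths can use them independently. */
static struct widget *widget_alloc(int id)
{
        struct widget *w = kzalloc(sizeof(*w), GFP_KERNEL);

        if (w)
                w->id = id;
        return w;
}

static void widget_free(struct widget *w)
{
        kfree(w);
}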
On Fri, 2012-03-02 at 20:35 +1100, Benjamin Herrenschmidt wrote:
>
> +static inline may_hard_irq_enable(void) { }
> +
And that works much better with a void:
+static inline void may_hard_irq_enable(void) { }
Or 32-bit won't build ;-) With that fix it builds on both 32 and 64-bit,
now time to