On 23.12.2013, at 07:38, Anton Blanchard wrote:
>
> Hi Alex,
>
>> The ibmveth driver is memcpy()'ing the mac address between a variable
>> (register) and memory. This assumes a certain endianness of the
>> system, so let's make that implicit assumption work again.
>
> Nice catch! I don't like
From: Michael Ellerman
If we enter with xmon_speaker != 0 we skip the first cmpxchg(). We also
skip the while loop, because xmon_speaker != last_speaker (0), meaning we
skip the second cmpxchg() too.
Following that code path the compiler sees no memory barriers and so is
within its rights to ne
As far as I can tell, our 70s era timeout loop in get_output_lock() is
generating no code.
This leads to the hostile takeover happening more or less simultaneously
on all cpus. The result is "interesting", some example output that is
more readable than most:
cpu 0x1: Vector: 100 (Scypsut e0mx
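The failure mode described above can be sketched in plain C. This is a hypothetical illustration, not the real get_output_lock(): the names and the non-atomic acquisition are simplifications, and the kernel uses cmpxchg() plus proper barriers. The point is the volatile re-read, without which the compiler may load the lock word once and spin on a register, so the timeout loop compiles to nothing.

```c
#include <stddef.h>

static unsigned long xmon_speaker;  /* 0 means the output lock is free */

/* Force a fresh load on every iteration; without this the compiler sees
 * no barrier and may hoist the load out of the loop entirely. */
#define READ_ONCE_UL(x) (*(volatile unsigned long *)&(x))

/* Spin until the lock looks free or the timeout expires. Simplified:
 * the real code acquires with cmpxchg(), not a plain store. */
static int try_get_output_lock(unsigned long me, long timeout)
{
    while (timeout-- > 0) {
        if (READ_ONCE_UL(xmon_speaker) == 0) {
            xmon_speaker = me;      /* not atomic; illustration only */
            return 1;
        }
    }
    return 0;                       /* caller falls back to takeover */
}
```

With the volatile read in place the timeout path actually executes, so a cpu that cannot get the lock gives up after the bounded spin instead of all cpus taking over at once.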
Currently we set our cpu's bit in cpus_in_xmon, and then we take the
output lock and print the exception information.
This can race with the master cpu entering the command loop and printing
the backtrace. The result is that the backtrace gets garbled with
another cpu's exception print out.
Fix i
On Mon, 2013-12-23 at 17:38 +1100, Anton Blanchard wrote:
> The hypervisor expects MAC addresses passed in registers to be big
> endian u64.
So maybe use __be64 declarations?
> +static unsigned long ibmveth_encode_mac_addr(char *mac)
static __be64 ibmveth_encode_mac_addr(const char *mac)
?
etc
Hi Michael,
> > To try and catch any screw ups in our ppc64 memcpy and
> > copy_tofrom_user loops, I wrote a quick test:
> >
> > http://ozlabs.org/~anton/junkcode/validate_kernel_copyloops.tar.gz
>
> Nice! How's this look?
Love it!
At the moment my other copy_to/from_user tests run against the
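One plausible shape for such a copy-loop test (a guess at the approach, not Anton's actual harness) is a redzone check: poison guard bytes around the destination, run the copy, and verify nothing outside the requested range was touched.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define REDZONE 16
#define POISON  0xAA

/* Run a copy routine through a guarded destination buffer and verify it
 * touched exactly the bytes it was asked to. len must be <= 256. */
static int copy_is_exact(void *(*copy)(void *, const void *, size_t),
                         const uint8_t *src, size_t len)
{
    uint8_t dst[REDZONE + 256 + REDZONE];
    size_t i;

    memset(dst, POISON, sizeof(dst));
    copy(dst + REDZONE, src, len);

    for (i = 0; i < REDZONE; i++)
        if (dst[i] != POISON || dst[REDZONE + len + i] != POISON)
            return 0;   /* the copy wrote outside the range */
    return memcmp(dst + REDZONE, src, len) == 0;
}
```

Sweeping len across every value and alignment is what catches the edge cases in hand-unrolled assembler loops.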
The hypervisor expects MAC addresses passed in registers to be big
endian u64. Create a helper function called ibmveth_encode_mac_addr
which does the right thing in both big and little endian.
We were storing the MAC address in a long in struct ibmveth_adapter.
It's never used so remove it - we d
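A minimal userspace sketch of what such a helper can do. The function name matches the mail; the driver's exact return type and any final shift into register position may differ. Packing the bytes most-significant-first makes the value independent of host endianness.

```c
#include <stdint.h>

#define ETH_ALEN 6

/* Pack the 6 MAC bytes most-significant-first into an integer, so the
 * value handed to the hypervisor is the same on big and little endian. */
static uint64_t ibmveth_encode_mac_addr(const uint8_t *mac)
{
    uint64_t encoded = 0;
    int i;

    for (i = 0; i < ETH_ALEN; i++)
        encoded = (encoded << 8) | mac[i];
    return encoded;
}
```

Contrast this with memcpy()'ing the address into a u64, which silently flips byte order between big- and little-endian hosts.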
On Fri, 2013-12-20 at 16:31 +0530, Anshuman Khandual wrote:
> On 12/09/2013 11:51 AM, Michael Ellerman wrote:
> > On Wed, 2013-04-12 at 10:32:40 UTC, Anshuman Khandual wrote:
> >> +
> >> + if (bhrb_sw_filter & PERF_SAMPLE_BRANCH_IND_CALL) {
> >> + /* XL-form instruction */
> >> +
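For context on the XL-form comment above: XL-form branches carry primary opcode 19 in the top 6 bits, with the extended opcode in bits 21-30 (528 for bcctr) and LK in bit 31. A hypothetical userspace check for the bcctr family, the kind of instruction an indirect-call filter would match:

```c
#include <stdint.h>

/* XL-form: primary opcode in bits 0-5 (19 for bcctr/bclr), extended
 * opcode in bits 21-30 (528 for bcctr), LK (link) in bit 31. */
static int is_bcctr(uint32_t insn)
{
    return (insn >> 26) == 19 && ((insn >> 1) & 0x3ff) == 528;
}
```

Masking out LK means both bctr and bctrl match; a call filter would additionally require LK = 1.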
On Tue, 2013-12-24 at 12:02 +1100, Anton Blanchard wrote:
> Hi Michael,
>
> > > To try and catch any screw ups in our ppc64 memcpy and
> > > copy_tofrom_user loops, I wrote a quick test:
> > >
> > > http://ozlabs.org/~anton/junkcode/validate_kernel_copyloops.tar.gz
> >
> > Nice! How's this look?
On 12/24/2013 08:59 AM, Michael Ellerman wrote:
> On Fri, 2013-12-20 at 16:31 +0530, Anshuman Khandual wrote:
>> On 12/09/2013 11:51 AM, Michael Ellerman wrote:
>>> On Wed, 2013-04-12 at 10:32:40 UTC, Anshuman Khandual wrote:
+
+ if (bhrb_sw_filter & PERF_SAMPLE_BRANCH_IND_CALL) {
+
On Tue, 2013-12-24 at 09:20 +0530, Anshuman Khandual wrote:
> On 12/24/2013 08:59 AM, Michael Ellerman wrote:
> > On Fri, 2013-12-20 at 16:31 +0530, Anshuman Khandual wrote:
> >> On 12/09/2013 11:51 AM, Michael Ellerman wrote:
> >>> On Wed, 2013-04-12 at 10:32:40 UTC, Anshuman Khandual wrote:
> >>>
On Mon, 2013-12-23 at 06:52 -0800, Joe Perches wrote:
> On Mon, 2013-12-23 at 17:38 +1100, Anton Blanchard wrote:
> > The hypervisor expects MAC addresses passed in registers to be big
> > endian u64.
>
> So maybe use __be64 declarations?
>
> > +static unsigned long ibmveth_encode_mac_addr(char *
On 11/21/2013 05:41 PM, Alexey Kardashevskiy wrote:
> Almost every function in include/linux/iommu.h has an empty stub,
> but iommu_group_get_by_id() did not get one by mistake.
>
> This adds an empty stub for iommu_group_get_by_id() for IOMMU_API
> disabled config.
Ping?
> Signed-off-by: Al
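The usual shape of such a stub looks like the sketch below. The NULL return is the common convention for these stubs and an assumption here, not quoted from the patch.

```c
#include <stddef.h>

struct iommu_group;  /* opaque in this sketch */

#ifdef CONFIG_IOMMU_API
extern struct iommu_group *iommu_group_get_by_id(int id);
#else
/* Empty stub so callers still compile when the IOMMU API is disabled. */
static inline struct iommu_group *iommu_group_get_by_id(int id)
{
    return NULL;
}
#endif
```

Callers can then use the function unconditionally and only pay for it when CONFIG_IOMMU_API is enabled.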
The e500v1 doesn't implement MAS7, so we should avoid accessing this
register on those implementations. In the current kernel, accesses
to MAS7 are protected by either CONFIG_PHYS_64BIT or
MMU_FTR_BIG_PHYS. Since some code is executed before the code
patching, we have to use CONFIG_PHYS_64BIT
v4:
- Fix the bug when booting above 64M.
- Rebase onto v3.13-rc5
- Pass the following test on a p5020ds board:
boot kernel at 0x500 and 0x900
kdump test with kernel option "crashkernel=64M@80M"
v3:
The main changes include:
* Drop the patch 5 in v2 (memblock: introdu
Move the code which translates an effective address to a physical
address into a separate function, so it can be reused by other code.
Signed-off-by: Kevin Hao
---
v4: No change.
v3: Use ifdef CONFIG_PHYS_64BIT to protect the access to MAS7
v2: A new patch in v2.
arch/powerpc/kernel/head_fsl_booke
This is used to get the address of a variable when the kernel is not
running at the linked or relocated address.
Signed-off-by: Kevin Hao
---
v4: A new patch in v4.
arch/powerpc/include/asm/ppc_asm.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc
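The arithmetic behind such a macro can be sketched in C (names hypothetical; the real implementation is assembler): the run-time address of a symbol is its link-time address plus the delta between where the kernel is actually running and where it was linked.

```c
#include <stdint.h>

/* Run-time address of a symbol = link-time address + (run-time base -
 * link-time base). Unsigned wraparound makes a negative delta work too. */
static uintptr_t pic_addr(uintptr_t linked_sym,
                          uintptr_t linked_base,
                          uintptr_t running_base)
{
    return linked_sym + (running_base - linked_base);
}
```

In the assembler version the delta is typically obtained with a bcl/mflr sequence that reveals the current instruction's real address.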
We use tlb1 entries to map low memory into the kernel space. The
current code assumes that the first tlb entry covers the
kernel image. But this is not true in some special cases, such as
when we run a relocatable kernel above 64M or set
CONFIG_KERNEL_START above 64M. So we choose t
This is based on the code in head_44x.S. The difference is that
the init tlb size we use is 64M. With this patch we can only load the
kernel at an address between memstart_addr and memstart_addr + 64M. We
will fix this restriction in the following patches.
Signed-off-by: Kevin Hao
---
v4: Use ma
For a relocatable kernel, since it can be loaded at any place, there
is no relation between the kernel start addr and the memstart_addr.
So we can't calculate the memstart_addr from the kernel start addr.
We also can't wait to do the relocation until after we get the real
memstart_addr from device tre
Introduce this function so we can set both the physical and virtual
addresses for the map in cams. This will be used by the relocation code.
Signed-off-by: Kevin Hao
---
v4: A new patch in v4.
arch/powerpc/mm/fsl_booke_mmu.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
di
When booting above 64M for a secondary cpu, we also face the
same issue as the boot cpu: the PAGE_OFFSET maps to two different
physical addresses for the init tlb and the final map. So we have to use
switch_to_as1/restore_to_as0 for the conversion between these two
maps. When restoring to as0 for
This is always true for a non-relocatable kernel; otherwise the kernel
would get stuck. But for a relocatable kernel it is a little more
complicated. When booting a relocatable kernel, we just align the
kernel start addr to 64M and map the PAGE_OFFSET from there. The
relocation will be based on this vir
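The alignment step described above is simple round-down arithmetic; a sketch (the name is hypothetical):

```c
#include <stdint.h>

#define SZ_64M (64UL * 1024 * 1024)

/* Align the kernel's run-time start address down to a 64M boundary;
 * PAGE_OFFSET is then mapped from that aligned base. */
static uintptr_t kernstart_align_64m(uintptr_t addr)
{
    return addr & ~(SZ_64M - 1);
}
```

Clearing the low bits works because 64M is a power of two, so the mask ~(SZ_64M - 1) keeps exactly the 64M-aligned part of the address.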
RELOCATABLE is more flexible and has no alignment restriction, and it
is a superset of DYNAMIC_MEMSTART. So use it by default for
a kdump kernel.
Signed-off-by: Kevin Hao
---
v4: No change.
v3: No change.
v2: A new patch in v2.
arch/powerpc/Kconfig | 3 +--
1 file changed, 1 insertio