On the 8xx, no-execute is set via PPP bits in the PTE. Therefore
a no-exec fault generates DSISR_PROTFAULT error bits,
not DSISR_NOEXEC_OR_G.
This patch adds DSISR_PROTFAULT to the test mask.
Fixes: d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults")
Signed-off-by: Christophe Leroy
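A minimal sketch of the kind of check this describes, assuming a small helper in the fault path (the helper name is illustrative; only the two DSISR bits are taken from the text above):

/* Hedged sketch: on most book3s parts a kernel no-exec fault sets
 * DSISR_NOEXEC_OR_G, but on the 8xx it is reported as DSISR_PROTFAULT,
 * so the test mask has to include both bits. */
static bool kernel_exec_fault(bool is_exec, unsigned long error_code)
{
	return is_exec &&
	       (error_code & (DSISR_NOEXEC_OR_G | DSISR_PROTFAULT));
}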
This patch adds a skeleton for Kernel Userspace Protection
functionalities like Kernel Userspace Access Protection and
Kernel Userspace Execution Prevention.
The subsequent implementation of KUAP for radix makes use of an MMU
feature in order to patch out assembly when KUAP is disabled or
unsupported.
This patch adds a skeleton for Kernel Userspace Execution Prevention.
Then subarches implementing it have to define CONFIG_PPC_HAVE_KUEP
and provide a setup_kuep() function.
Signed-off-by: Christophe Leroy
---
Documentation/admin-guide/kernel-parameters.txt | 2 +-
arch/powerpc/include/asm/kup.h
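As a rough illustration of such a skeleton, a sketch only: the config symbol used in the #ifdef, the boot-parameter name and the printed message are assumptions based on the description above.

#ifdef CONFIG_PPC_KUEP
void setup_kuep(bool disabled);
#else
static inline void setup_kuep(bool disabled) { }
#endif

/* A "nosmep" early parameter could then disable it at boot time. */
static bool disable_kuep;

static int __init parse_nosmep(char *p)
{
	disable_kuep = true;
	pr_warn("Disabling Kernel Userspace Execution Prevention\n");
	return 0;
}
early_param("nosmep", parse_nosmep);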
This patch implements a framework for Kernel Userspace Access
Protection.
Then subarches will have the possibility to provide their own
implementation by providing setup_kuap() and lock/unlock_user_access().
Some platforms will need to know the area accessed and whether it is
accessed for read, write or execute.
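A sketch of what such framework hooks might look like, using the names quoted above (setup_kuap() and lock/unlock_user_access()); the prototypes are assumptions for illustration:

#ifdef CONFIG_PPC_KUAP
void setup_kuap(bool disabled);
#else
static inline void setup_kuap(bool disabled) { }

/* No-op fallbacks so generic code can call the hooks unconditionally. */
static inline void unlock_user_access(void __user *to, const void __user *from,
				      unsigned long size) { }
static inline void lock_user_access(void __user *to, const void __user *from,
				    unsigned long size) { }
#endif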
This patch adds Kernel Userspace Execution Prevention on the 8xx.
When a page is Executable, it is set Executable for Key 0 and NX
for Key 1.
Up to now, the User group is defined with Key 0 for both User and
Supervisor.
By changing the group to Key 0 for User and Key 1 for Supervisor,
this patch prevents the kernel from executing user code.
This patch adds Kernel Userspace Access Protection on the 8xx.
When a page is RO or RW, it is set RO or RW for Key 0 and NA
for Key 1.
Up to now, the User group is defined with Key 0 for both User and
Supervisor.
By changing the group to Key 0 for User and Key 1 for Supervisor,
this patch prevents the kernel from accessing user data.
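A rough sketch of how the kernel-side toggle could look on the 8xx, switching the data access-protection group register between the two key definitions described above; the group-value names (and their exact encodings) are assumptions for illustration:

static inline void unlock_user_access(void)
{
	mtspr(SPRN_MD_AP, MD_APG_INIT);	/* Key 0: supervisor gets the page rights */
}

static inline void lock_user_access(void)
{
	mtspr(SPRN_MD_AP, MD_APG_KUAP);	/* Key 1: no supervisor access to user pages */
}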
Execution protection already exists on radix, this just refactors
the radix init to provide the KUEP setup function instead.
Thus, the only functional change is that it can now be disabled.
Signed-off-by: Russell Currey
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable-radix.c
Kernel Userspace Access Prevention utilises a feature of
the Radix MMU which disallows read and write access to userspace
addresses. By utilising this, the kernel is prevented from accessing
user data from outside of trusted paths that perform proper safety
checks, such as copy_{to/from}_user() and friends.
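On radix this maps naturally onto the AMR register; a minimal sketch under that assumption (the AMR_KUAP_* constants and the argument-free hook prototypes are illustrative, not the actual patch):

/* Block both read and write access to user addresses by default. */
#define AMR_KUAP_BLOCK_READ	UL(0x4000000000000000)
#define AMR_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
#define AMR_KUAP_BLOCKED	(AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)

static inline void allow_user_access(void)
{
	mtspr(SPRN_AMR, 0);		/* open the window for copy_{to/from}_user() */
	isync();
}

static inline void prevent_user_access(void)
{
	mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
	isync();
}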
This patch adds a helper which wraps the 'mtsrin' instruction
to write into segment registers.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/reg.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index d9598e6790d
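The wrapper can be as small as an inline asm stub; a sketch of the sort of helper being added (name taken from the description, exact constraints assumed):

static inline void mtsrin(u32 val, u32 idx)
{
	asm volatile("mtsrin %0, %1" : : "r" (val), "r" (idx) : "memory");
}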
This patch prepares Kernel Userspace Access Protection for
book3s/32.
Due to limitations of the processor page protection capabilities,
the protection is only against writing. Read protection cannot be
achieved using page protection.
In order to provide the protection, Ku and Ks keys are modified
This patch implements Kernel Userspace Access Protection for
book3s/32.
Due to limitations of the processor page protection capabilities,
the protection is only against writing. Read protection cannot be
achieved using page protection.
In order to provide the protection, Ku and Ks keys are modified
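A sketch of how the segment registers covering a user address range could be updated, reusing the mtsrin() wrapper from the earlier patch; the loop constants reflect the usual 256 MB segment layout and are assumptions here:

static inline void kuap_update_sr(u32 sr, u32 addr, u32 end)
{
	addr &= 0xf0000000;		/* align addr to start of segment */
	while (addr < end) {
		mtsrin(sr, addr);	/* write the new Ks/Ku bits */
		sr += 0x111;		/* next VSID */
		addr += 0x10000000;	/* address of next 256 MB segment */
	}
	isync();			/* context sync required after mtsrin() */
}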
On 11/22/2018 02:04 PM, Russell Currey wrote:
Necessary for subsequent patches that enable KUAP support for radix.
Could plausibly be useful for other platforms too, if similar to the
radix case, reading the register that manages these accesses is
costly.
Has the unfortunate downside of another
On 11/22/2018 02:04 PM, Russell Currey wrote:
The subsequent implementation of KUAP for radix makes use of a MMU
feature in order to patch out assembly when KUAP is disabled or
unsupported. This won't work unless there's an entry point for
KUP support before the feature magic happens, so relo
On 11/22/2018 02:04 PM, Russell Currey wrote:
Kernel Userspace Access Prevention utilises a feature of
the Radix MMU which disallows read and write access to userspace
addresses. By utilising this, the kernel is prevented from accessing
user data from outside of trusted paths that perform pro
Sorry, I forgot to reset the author, so the patch appears as coming
from you.
On 28/11/2018 at 10:27, Russell Currey wrote:
Execution protection already exists on radix, this just refactors
the radix init to provide the KUEP setup function instead.
Thus, the only functional change is tha
On 22/11/2018 at 02:25, Russell Currey wrote:
On Wed, 2018-11-21 at 09:32 +0100, Christophe LEROY wrote:
On 21/11/2018 at 03:26, Russell Currey wrote:
On Wed, 2018-11-07 at 16:56, Christophe Leroy wrote:
This patch implements a framework for Kernel Userspace Access
Protection.
T
Hi Radu,
Radu Rendec writes:
> Hi everyone,
>
> It seems there's an unchecked access to a NULL pointer (to a function)
> in the PowerPC MSI teardown code. I found this on kernel 4.9, but the
> code looks identical in the latest 4.20-rc. I don't see any reason why
> this wouldn't happen on recent
Christoph Hellwig writes:
> Any comments? I'd like to at least get the ball moving on the easy
> bits.
Nothing specific yet.
I'm a bit worried it might break one of the many old obscure platforms
we have that aren't well tested.
There's not much we can do about that, but I'll just try and tes
When disassembling InstructionTLBError we get the following messy code:
c000138c: 7d 84 63 78 mr r4,r12
c0001390: 75 25 58 00 andis. r5,r9,22528
c0001394: 75 2a 40 00 andis. r10,r9,16384
c0001398: 41 a2 00 08 beq c00013a0
c000139c: 7c 00 22
The purpose of this series is to implement hardware assistance for TLB table walk
on the 8xx.
First part prepares for using HW assistance in TLB routines:
- Reverts a former patch which broke SWAP on the 8xx
- moves the book3s64 page fragment code into a common part for reuse by the
8xx as 16k page s
BOOK3S/32 cannot be BOOKE, so remove useless code
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 18 --
arch/powerpc/include/asm/book3s/32/pgtable.h | 14 --
2 files changed, 32 deletions(-)
diff --git a/arch/powerpc/include/asm/bo
In preparation for the next patch, which generalises the use of
pte_fragment_alloc() for all, this patch moves the related functions
in a place that is common to all subarches.
The 8xx will need that for supporting 16k pages, as in that mode
page tables still have a size of 4k.
Since pte_fragment with
There is no point in taking the page table lock as pte_frag or
pmd_frag are always NULL when we have only one fragment.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable-book3s64.c | 3 +++
arch/powerpc/mm/pgtable-frag.c | 3 +++
2 files changed, 6 in
commit 1bc54c03117b9 ("powerpc: rework 4xx PTE access and TLB miss")
introduced non atomic PTE updates and started the work of removing
PTE updates in TLB miss handlers, but kept PTE_ATOMIC_UPDATES for the
8xx with the following comment:
/* Until my rework is finished, 8xx still needs atomic PTE up
The purpose of this patch is to move platform specific
mmu-xxx.h files into platform directories, like pte-xxx.h files.
In the meantime this patch creates common nohash and
nohash/32 + nohash/64 mmu.h files for future common parts.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Christophe Leroy
---
This patch moves pgtable_t into platform headers.
It gets rid of the CONFIG_PPC_64K_PAGES case for PPC64
as nohash/64 doesn't support CONFIG_PPC_64K_PAGES.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/mmu-hash.h | 2 ++
arch/powerpc/inclu
In order to handle pte_fragment functions with single fragment
without adding pte_frag in all mm_context_t, this patch creates
two helpers which do nothing on platforms using a single fragment.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/pgtable.h | 31 ++
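A sketch of what such do-nothing helpers might look like on single-fragment platforms (the macro names are assumptions based on the description):

/* Platforms with a real pte_frag field override these; everyone else
 * gets no-ops so common code can call them unconditionally. */
#ifndef pte_frag_get
#define pte_frag_get(mm_ctx)		NULL
#define pte_frag_set(mm_ctx, p)		do { } while (0)
#endif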
In order to simplify the handling of the 8xx-specific SW perf counters
in time-critical exceptions, this patch moves the counters into
the beginning of memory. This is possible because .text is readable
and the counters are never modified outside of the handlers.
By doing this, we avoid having to set a second r
In order to allow the 8xx to handle pte_fragments, this patch
extends the use of pte_fragments to PPC32 platforms.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/mmu-hash.h | 5 -
arch/powerpc/include/asm/book3s/32/pgalloc.h | 18 ++
arch/powerpc/inc
In preparation of making use of hardware assistance in TLB handlers,
this patch temporarily disables 16K pages and hugepages. The reason
is that when using HW assistance in 4K pages mode, the linux model
fits with the HW model for 4K pages and 8M pages.
However for 16K pages and 512K mode some addi
HW assistance naturally supports 8M huge pages without
further modifications.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/tlb_nohash.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/mm/tlb_nohash.c b/arch/powerpc/mm/tlb_nohash.c
index 4f79639e432f..8ad7aab150b7 10064
For using 512k pages with hardware assistance, the PTEs have to be spread
every 128 bytes in the L2 table.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/hugetlb.h | 4 +++-
arch/powerpc/mm/hugetlbpage.c | 13 +
arch/powerpc/mm/tlb_nohash.c | 3 +++
3 files
Today, on the 8xx the TLB handlers do SW tablewalk by doing all
the calculation in ASM, in order to match with the Linux page
table structure.
The 8xx offers hardware assistance which allows significant size
reduction of the TLB handlers, hence also reduces the time spent
in the handlers.
However
Using this HW assistance implies some constraints on the
page table structure:
- Regardless of the main page size used (4k or 16k), the
level 1 table (PGD) contains 1024 entries and each PGD entry covers
a 4 Mbytes area which is managed by a level 2 table (PTE) also
containing 1024 entries, each desc
This patch reworks the TLB Miss handler in order to not use r12
register, hence avoiding having to save it into SPRN_SPRG_SCRATCH2.
In the DAR Fixup code we can now use SPRN_M_TW, freeing
SPRN_SPRG_SCRATCH2.
Then SPRN_SPRG_SCRATCH2 may be used for something else in the future.
Signed-off-by: Chr
As this is running with MMU off, the CPU only does speculative
fetch for code in the same page.
Following the significant size reduction of TLB handler routines,
the side handlers can be brought back close to the main part,
i.e. in the same page.
Signed-off-by: Christophe Leroy
---
arch/powerpc/k
The purpose of this series is to activate CONFIG_THREAD_INFO_IN_TASK which
moves the thread_info into task_struct.
Moving thread_info into task_struct has the following advantages:
- It protects thread_info from corruption in the case of stack
overflows.
- Its address is harder to determine if stac
When activating CONFIG_THREAD_INFO_IN_TASK, linux/sched.h
includes asm/current.h. This generates a circular dependency.
To avoid that, asm/processor.h shall not be included in mmu-hash.h
In order to do that, this patch moves into a new header called
asm/task_size_user64.h the information from asm/
When moving to CONFIG_THREAD_INFO_IN_TASK, the thread_info 'cpu' field
gets moved into task_struct and only defined when CONFIG_SMP is set.
This patch ensures that TI_CPU is only used when CONFIG_SMP is set and
that task_struct 'cpu' field is not used directly out of SMP code.
Signed-off-by: Chri
This patch cleans the powerpc kernel before activating
CONFIG_THREAD_INFO_IN_TASK:
- The purpose of the pointer given to call_do_softirq() and
call_do_irq() is to point to the new stack ==> change it to void* and
rename it 'sp'
- Don't use CURRENT_THREAD_INFO() to locate the stack.
- Fix a few comment
This patch activates CONFIG_THREAD_INFO_IN_TASK which
moves the thread_info into task_struct.
Moving thread_info into task_struct has the following advantages:
- It protects thread_info from corruption in the case of stack
overflows.
- Its address is harder to determine if stack addresses are
leaked
thread_info is no longer on the stack, so the entire stack
can now be used.
There is also no longer a risk of corrupting task_cpu(p) with a
stack overflow, so the patch removes the test.
When doing this, an explicit test for a NULL stack pointer is
needed in validate_sp() as it is no longer impli
The table of pointers 'current_set' has been used for retrieving
the stack and current. They used to be thread_info pointers as
they were pointing to the stack and current was taken from the
'task' field of the thread_info.
Now the pointers of the 'current_set' table are both pointers
to task_str
Now that thread_info is similar to task_struct, its address is in r2,
so the CURRENT_THREAD_INFO() macro is useless. This patch removes it.
At the same time, as the 'cpu' field is no longer in thread_info,
this patch renames it to TASK_CPU.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Makefil
Now that current_thread_info is located at the beginning of 'current'
task struct, CURRENT_THREAD_INFO macro is not really needed any more.
This patch replaces it by loads of the value at PACACURRENT(r13).
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/exception-64s.h | 4 +
Some stack pointers used to also be thread_info pointers
and were called tp. Now that they are only stack pointers,
rename them sp.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/irq.c | 17 +++--
arch/powerpc/kernel/setup_64.c | 20 ++--
2 files changed
A new self test that forces MSR[TS] to be set without calling any TM
instruction. This test also tries to cause a page fault at a signal
handler, exactly between MSR[TS] set and tm_recheckpoint(), forcing
thread->texasr to be rewritten with TEXASR[FS] = 0, which will cause a BUG
when tm_recheckpoin
We can upgrade pte access (R -> RW transition) via mprotect. We need
to make sure we follow the recommended pte update sequence as outlined in
commit: bd5050e38aec ("powerpc/mm/radix: Change pte relax sequence to handle
nest MMU hang")
for such updates. This patch series does that.
Changes from V
Some architectures may want to call flush_tlb_range from these helpers.
Signed-off-by: Aneesh Kumar K.V
---
arch/s390/include/asm/pgtable.h | 4 ++--
arch/s390/mm/pgtable.c | 6 --
arch/x86/include/asm/paravirt.h | 7 +--
fs/proc/task_mmu.c | 4 ++--
include/asm-gene
Architectures like ppc64 require a conditional tlb flush based on the old
and new values of the pte. Enable that by passing the old pte value as an arg.
Signed-off-by: Aneesh Kumar K.V
---
arch/s390/include/asm/pgtable.h | 3 ++-
arch/s390/mm/pgtable.c | 2 +-
arch/x86/include/asm/paravi
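The shape of the change is roughly the following, shown on the generic fallback; the exact prototype details are assumptions:

static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
					   unsigned long addr, pte_t *ptep,
					   pte_t old_pte, pte_t pte)
{
	/* The generic fallback ignores old_pte; ppc64 can use it to decide
	 * whether a tlb flush is needed for an R -> RW upgrade. */
	__ptep_modify_prot_commit(vma, addr, ptep, pte);
}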
NestMMU requires us to mark the pte invalid and flush the tlb when we do an
RW upgrade of a pte. We fixed a variant of this in the fault path in commit
bd5050e38aec ("powerpc/mm/radix: Change pte relax sequence to handle
nest MMU hang").
Do the same for mprotect upgrades.
Hugetlb is handled i
Signed-off-by: Aneesh Kumar K.V
---
include/linux/hugetlb.h | 18 ++
mm/hugetlb.c| 8 +---
2 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 087fd5f48c91..e2a3b0c854eb 100644
--- a/include/linux
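Presumably this adds generic fallbacks mirroring the non-hugetlb modify_prot helpers; a sketch of what they might look like (the helper names follow the pte variants and are assumptions here):

#ifndef huge_ptep_modify_prot_start
static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
						unsigned long addr, pte_t *ptep)
{
	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
}
#endif

#ifndef huge_ptep_modify_prot_commit
static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
						unsigned long addr, pte_t *ptep,
						pte_t old_pte, pte_t pte)
{
	set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
}
#endif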
NestMMU requires us to mark the pte invalid and flush the tlb when we do an
RW upgrade of a pte. We fixed a variant of this in the fault path in commit
bd5050e38aec ("powerpc/mm/radix: Change pte relax sequence to handle
nest MMU hang").
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/inclu
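A sketch of the sequence being described for the radix mprotect path; the helper names and flag handling are simplified assumptions, not the actual patch:

pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
			     pte_t *ptep)
{
	/* Mark the PTE invalid so the nest MMU cannot load a stale copy
	 * while the access bits are being relaxed. */
	unsigned long old = pte_update(vma->vm_mm, addr, ptep,
				       _PAGE_PRESENT, _PAGE_INVALID, 0);
	return __pte(old);
}

void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
			     pte_t *ptep, pte_t old_pte, pte_t pte)
{
	/* For an R -> RW upgrade, flush before installing the new value so
	 * the nest MMU never caches the intermediate state. */
	if (pte_write(pte) && !pte_write(old_pte))
		flush_tlb_page(vma, addr);
	set_pte_at(vma->vm_mm, addr, ptep, pte);
}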
Hi Michael,
On Wed, Nov 28, 2018 at 6:00 AM Michael Ellerman wrote:
>
> Radu Rendec writes:
> >
> > The assumption in arch_teardown_msi_irqs() is wrong and results in a
> > function call on a NULL pointer. An example of how this can happen is
> > included in the actual patch header. In my case,
On 28 November 2018 at 12:05PM, Michael Ellerman wrote:
Nothing specific yet.
I'm a bit worried it might break one of the many old obscure platforms
we have that aren't well tested.
Please don't apply the new DMA mapping code if you aren't sure it
works on all supported PowerPC machines.
LDBAR holds the memory address allocated for each cpu. For thread-imc,
the mode bit (i.e. bit 1) of LDBAR is set to accumulation.
Currently, LDBAR is loaded with the per-cpu memory address and the mode
set to accumulation at boot time.
To enable trace-imc, the mode bit of LDBAR should be set to 'trace'. So
Add the macros needed for IMC (In-Memory Collection Counters) trace-mode
and data structure to hold the trace-imc record data.
Also, add the new type "OPAL_IMC_COUNTERS_TRACE" in 'opal-api.h', since
there is a new switch case added in the opal-calls for IMC.
Signed-off-by: Anju T Sudhakar
---
ar
This patch detects trace-imc events, does memory initializations for each
online cpu, and registers cpu-hotplug callbacks.
Signed-off-by: Anju T Sudhakar
---
arch/powerpc/perf/imc-pmu.c | 91 +++
arch/powerpc/platforms/powernv/opal-imc.c | 3 +
include/linux/cpuhotpl
Add PMU functions to support trace-imc.
Signed-off-by: Anju T Sudhakar
---
arch/powerpc/perf/imc-pmu.c | 172
1 file changed, 172 insertions(+)
diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index d9ffe7f03f1e..18af7c3e2345 100644
---
IMC (In-Memory Collection counters) is a hardware monitoring facility
that collects a large number of hardware performance events.
POWER9 supports two modes for IMC, which are the Accumulation mode and
Trace mode. In Accumulation mode, event counts are accumulated in syste
The 603 doesn't have a HASH table; TLB misses are handled by
software. It is then possible to generate a page fault when
_PAGE_EXEC is not set, like in nohash/32.
There is one "reserved" PTE bit available, this patch uses
it for _PAGE_EXEC.
In order to support it, set_pte_filter() and
set_access_fla
I will compile and test the kernel from the following Git on my PowerPC
machines.
http://git.infradead.org/users/hch/misc.git
On 28 November 2018 at 12:05PM, Michael Ellerman wrote:
Nothing specific yet.
I'm a bit worried it might break one of the many old obscure platforms
we have that aren't
On Wed, 28 Nov 2018 16:55:30 +0100
Christian Zigotzky wrote:
> On 28 November 2018 at 12:05PM, Michael Ellerman wrote:
> > Nothing specific yet.
> >
> > I'm a bit worried it might break one of the many old obscure platforms
> > we have that aren't well tested.
> >
> Please don't apply the new D
Hi Daniel,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.20-rc4 next-20181128]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux
On Mon, 2018-11-26 at 01:59:16 UTC, Michael Ellerman wrote:
> For some configs the build fails with:
>
> arch/powerpc/mm/dump_linuxpagetables.c: In function 'populate_markers':
> arch/powerpc/mm/dump_linuxpagetables.c:306:39: error: 'PKMAP_BASE'
> undeclared (first use in this function)
> a
On Mon, 2018-11-26 at 22:01:54 UTC, Paul Mackerras wrote:
> Commit 6975a783d7b4 ("powerpc/boot: Allow building the zImage wrapper
> as a relocatable ET_DYN", 2011-04-12) changed the procedure descriptor
> at the start of crt0.S to have a hard-coded start address of 0x50
> rather than a referenc
On Wed, 28 Nov 2018 20:04:37 +0530 "Aneesh Kumar K.V"
wrote:
> Signed-off-by: Aneesh Kumar K.V
Some explanation of the motivation would be useful.
> include/linux/hugetlb.h | 18 ++
> mm/hugetlb.c| 8 +---
> 2 files changed, 23 insertions(+), 3 deletions(-)
>
On Wed, 28 Nov 2018 20:04:33 +0530 "Aneesh Kumar K.V"
wrote:
> We can upgrade pte access (R -> RW transition) via mprotect. We need
> to make sure we follow the recommended pte update sequence as outlined in
> commit: bd5050e38aec ("powerpc/mm/radix: Change pte relax sequence to handle
> nest M
Madhavan Srinivasan writes:
> On 28/11/18 9:04 AM, Michael Ellerman wrote:
>> Arnaldo Carvalho de Melo writes:
>>
>>> Em Mon, Nov 26, 2018 at 11:34:08PM +0530, Madhavan Srinivasan escreveu:
On each sample, Sample Instruction Event Register (SIER) content
is saved in pt_regs. SIER does n
There is a plan to build the kernel with -Wimplicit-fallthrough and these
places in the code produced warnings, but because we build arch/powerpc
with -Werror, they became errors. Fix them up.
This patch produces no change in behaviour, but should be reviewed in
case these are actually bugs not i
Ping?
On 28/09/2018 16:48, Alexey Kardashevskiy wrote:
> The macro and few headers are not used so remove them.
>
> Signed-off-by: Alexey Kardashevskiy
> ---
> arch/powerpc/platforms/powernv/npu-dma.c | 14 --
> 1 file changed, 14 deletions(-)
>
> diff --git a/arch/powerpc/platfor
On Wed, 2018-11-28 at 11:23 -0200, Breno Leitao wrote:
> A new self test that forces MSR[TS] to be set without calling any TM
> instruction. This test also tries to cause a page fault at a signal
> handler, exactly between MSR[TS] set and tm_recheckpoint(), forcing
> thread->texasr to be rewritten
Thanks! Looks like a reasonable cleanup. Assuming it compiles I can't see any
reason not to add:
Acked-by: Alistair Popple
On Thursday, 29 November 2018 1:11:34 PM AEDT Alexey Kardashevskiy wrote:
> Ping?
>
> On 28/09/2018 16:48, Alexey Kardashevskiy wrote:
> > The macro and few headers are no
The 'clear_sw_state' parameter for eeh_pe_clear_frozen_state() is
redundant because it has no effect (except in the rare case of a
hardware error part way through unfreezing a tree of PEs, where it
would dangerously allow partial de-isolation before returning
failure).
It is passed down to __eeh_p
Add a parameter to eeh_clear_pe_frozen_state() that allows
passed-through PEs to be excluded. Update callers to always pass true
so that there is no change in behaviour.
This is to prepare for follow-up work for passed-through devices.
Signed-off-by: Sam Bobroff
---
arch/powerpc/kernel/eeh_driv
Hello,
Here are changes that allow EEH to successfully recover after a failure that
affects both host and guest devices. This happens, for example, when a PHB
containing passed-through devices is fenced. (Failures that include only
passed-through devices are ignored by the host.)
Currently, wh
Currently, eeh_pe_reset_full() will only attempt to reset a PE more
than once if activating the reset state and deactivating it both
succeed, but later polling shows that it hasn't become active.
Change this so that it will try up to three times for any reason other
than an unrecoverable slot erro
Add a parameter to eeh_pe_state_clear() that allows passed-through PEs
to be excluded. Update callers to always pass true so that there is no
change in behaviour.
Also refactor to use direct traversal, to allow the removal of some
boilerplate.
This is to prepare for follow-up work for passed-thro
eeh_unfreeze_pe() performs two operations: unfreezing a PE (which may
cause firmware to unfreeze child PEs as well) and de-isolating the PE
and its children.
To simplify this and support future work, separate out the
de-isolation and perform it at the call sites (when necessary).
There should be
Currently, the EEH recovery process considers passed-through devices
as if they were not EEH-aware, which can cause them to be removed as
part of recovery. Because device removal requires cooperation from
the guest, this may lead to the process stalling or deadlocking.
Also, if devices are removed
The purpose of this patch series is that we can easily
add/modify/delete system call table support by changing
an entry in the syscall.tbl file instead of manually
changing many files. The other goal is to unify the
system call table generation support implementation
across all the architectures.
The sy
The NR_syscalls macro holds the number of system calls that exist
in the powerpc architecture. We have to change the value of
NR_syscalls if we add or delete a system call.
One of the patches in this patch series has a script which
will generate a uapi header based on the syscall.tbl file.
The syscall.tbl file conta
Move the macro definition for compat_sys_sigsuspend from
asm/systbl.h to the file where it is included.
One of the patches in this patch series generates the uapi
header and syscall table files. In order to come up with
a common implementation across all architectures, we need
to do this chan
The system call tables are in different formats across
architectures and it is difficult to manually add or
modify the system calls in the respective files. Make
it easy by keeping a script which will generate the
uapi header and syscall table file. This change will also
help to unify the
The system call table generation script must be run to generate
the unistd_32/64.h and syscall_table_32/64/c32/spu.h files.
This patch has the changes which invoke the script.
This patch will generate the unistd_32/64.h and
syscall_table_32/64/c32/spu.h files by the syscall table generation
script
Right, so as both 0-day and snowpatch tell me, this patch is wrong.
It turns out that this:
> $(obj)/serial.c: $(obj)/autoconf.h
> + $(Q)cp $< $@
is identical to:
cp arch/powerpc/boot/autoconf.h arch/powerpc/boot/serial.c
(Clearly my make mastery is inadequate.)
Amusingly this which works f
From: Juliet Kim
[ Upstream commit a5681e20b541a507c7d4fd48ae4a4040d32ee1ef ]
This patch changes to use rtnl_lock only during a reset to avoid a
deadlock that could occur when a thread performing a close is holding
rtnl_lock and waiting for reset_lock acquired by another thread,
which is waiting for
From: Thomas Falcon
[ Upstream commit b7cdec3d699db2e5985ad39de0f25d3b6111928e ]
The wrong index is used when cleaning up RX buffer objects during release
of RX queues. Update to use the correct index counter.
Signed-off-by: Thomas Falcon
Signed-off-by: David S. Miller
Signed-off-by: Sasha Le
From: Thomas Falcon
[ Upstream commit 5bf032ef08e6a110edc1e3bfb3c66a208fb55125 ]
During device reset, queue memory is not being updated to accommodate
changes in ring buffer sizes supported by backing hardware. Track
any differences in ring buffer sizes following the reset and update
queue memor
From: Thomas Falcon
[ Upstream commit b7cdec3d699db2e5985ad39de0f25d3b6111928e ]
The wrong index is used when cleaning up RX buffer objects during release
of RX queues. Update to use the correct index counter.
Signed-off-by: Thomas Falcon
Signed-off-by: David S. Miller
Signed-off-by: Sasha Le
On 11/29/18 3:40 AM, Andrew Morton wrote:
On Wed, 28 Nov 2018 20:04:37 +0530 "Aneesh Kumar K.V"
wrote:
Signed-off-by: Aneesh Kumar K.V
Some explanation of the motivation would be useful.
I will update the commit message.
include/linux/hugetlb.h | 18 ++
mm/hugetlb