Haren Myneni writes:
> On power9, userspace can send GZIP compression requests directly to NX
> once the kernel establishes an NX channel / window with VAS. This patch
> provides a user space API which allows user space to establish a channel
> using the VAS_TX_WIN_OPEN ioctl, mmap and close operations.
>
Changelog v1 -> v2:
- Handled comments from Vlastimil Babka and Bharata B Rao
- Changes only in patch 2 and 4.
Sachin recently reported that linux-next was no longer bootable on a few
powerpc systems.
https://lore.kernel.org/linux-next/3381cd91-ab3d-4773-ba04-e7a072a63...@linux.vnet.ibm.com/
Calling kmalloc_node() on a possible node which is not yet online can
lead to a panic. Currently node_present_pages() doesn't verify that the
node is online before accessing its pgdat. However, the pgdat struct may
not be available, resulting in a crash.
NIP [c03d55f4] ___slab_alloc+0x1f4
Currently, while allocating a slab for an offline node, we use its
associated node_numa_mem to search for a partial slab. If we don't find
a partial slab, we try allocating a slab from the offline node using
__alloc_pages_node. However, this is bound to fail.
NIP [c039a300] __alloc_pages_node
For memoryless or offline nodes, node_numa_mem refers to an N_MEMORY
fallback node. Currently the kernel has an API, set_numa_mem, that sets
node_numa_mem for memoryless nodes. However, this API cannot be used for
offline nodes. Hence all offline nodes will have their node_numa_mem set
to 0. However syste
Currently, fallback nodes for offline nodes aren't set. Hence node 0
ends up being the default fallback node. However, node 0 might be offline.
Fix this by explicitly setting the fallback node. Ensure first_memory_node
is set before the kernel explicitly sets the fallback node.
Cc: Andrew Morton
Cc
* Vlastimil Babka [2020-03-17 16:29:21]:
> > If we pass this node 0 (which is memoryless/cpuless) to
> > alloc_pages_node. Please note I am setting node_numa_mem only
> > for offline nodes. However we could change this to set it for all
> > offline and memoryless nodes.
>
> That would indeed m
On 3/17/20 4:05 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:57, Ravi Bangoria a écrit :
Instead of disabling only the first watchpoint, disable all available
watchpoints while clearing dawr_force_enable.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/dawr.c | 10 +++---
1 file
* Michal Hocko [2020-03-16 09:54:25]:
> On Sun 15-03-20 14:20:05, Christoph Lameter wrote:
> > On Wed, 11 Mar 2020, Srikar Dronamraju wrote:
> >
> > > Currently, a Linux kernel with CONFIG_NUMA on a system with multiple
> > > possible nodes marks node 0 as online at boot. However in practice,
>
On 3/17/20 4:07 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
So far powerpc hw has supported only one watchpoint. But the future Power
architecture is introducing a 2nd DAWR. Convert thread_struct->hw_brk
into an array.
Looks like you are doing a lot more than that in
Le 18/03/2020 à 09:36, Ravi Bangoria a écrit :
On 3/17/20 4:07 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
So far powerpc hw has supported only one watchpoint. But the future Power
architecture is introducing a 2nd DAWR. Convert thread_struct->hw_brk
into an array.
On 3/18/20 2:26 PM, Christophe Leroy wrote:
Le 18/03/2020 à 09:36, Ravi Bangoria a écrit :
On 3/17/20 4:07 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
So far powerpc hw has supported only one watchpoint. But the future Power
architecture is introducing a 2nd DAWR.
@@ -1628,6 +1628,9 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long usp,
void (*f)(void);
unsigned long sp = (unsigned long)task_stack_page(p) + THREAD_SIZE;
struct thread_info *ti = task_thread_info(p);
+#ifdef CONFIG_HAVE_HW_BREAKPOINT
+ int i;
+#endif
Coul
From: Philippe Bergheaud
Some opencapi FPGA images allow controlling whether the FPGA should be
reloaded on the next adapter reset. If it is supported, the image specifies
it through a Vendor Specific DVSEC in the config space of function 0.
Signed-off-by: Philippe Bergheaud
---
Changelog:
v2:
- re
This patch series is against next-20200317. It fixes all references to files
under Documentation/* that were moved, renamed or removed.
After this patch series, this script:
./scripts/documentation-file-ref-check
doesn't complain about any broken references.
Mauro Carvalho Chehab (12):
On 3/17/20 3:31 PM, Nicholas Piggin wrote:
Ganesh's on March 16, 2020 9:47 pm:
On 3/14/20 9:18 AM, Nicholas Piggin wrote:
Ganesh Goudar's on March 14, 2020 12:04 am:
MCE handling on pSeries platform fails as recent rework to use common
code for pSeries and PowerNV in machine check error han
On Wed 18-03-20 12:58:07, Srikar Dronamraju wrote:
> Calling kmalloc_node() on a possible node which is not yet online can
> lead to panic. Currently node_present_pages() doesn't verify the node is
> online before accessing the pgdat for the node. However pgdat struct may
> not be available result
On 3/18/20 4:20 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 17:45:15]:
>>
>> Yes, that Kirill's patch was about the memcg shrinker map allocation. But the
>> patch hunk that Bharata posted as a "hack" that fixes the problem, it follows
>> that there has to be something else that
* Michal Hocko [2020-03-18 11:02:56]:
> On Wed 18-03-20 12:58:07, Srikar Dronamraju wrote:
> > Calling kmalloc_node() on a possible node which is not yet online can
> > lead to panic. Currently node_present_pages() doesn't verify the node is
> > online before accessing the pgdat for the node. Ho
On Wed 18-03-20 16:32:15, Srikar Dronamraju wrote:
> * Michal Hocko [2020-03-18 11:02:56]:
>
> > On Wed 18-03-20 12:58:07, Srikar Dronamraju wrote:
[...]
> > > -#define node_present_pages(nid) (NODE_DATA(nid)->node_present_pages)
> > > -#define node_spanned_pages(nid) (NODE_DATA(nid)->node_span
Christophe Leroy writes:
> Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
>> Currently we assume that we have only one watchpoint supported by hw.
>> Get rid of that assumption and use a dynamic loop instead. This should
>> make supporting more watchpoints very easy.
>
> I think using 'we' is to be
On 3/17/20 8:39 PM, Jiri Olsa wrote:
> On Tue, Mar 17, 2020 at 11:53:30AM +0530, Kajol Jain wrote:
>> This patch refactors the metricgroup__add_metric function, moving
>> part of it to a new function, metricgroup__add_metric_param.
>> No logic change.
>
> can't compile this change:
>
> [jolsa@krav
On 3/17/20 8:37 PM, Jiri Olsa wrote:
> On Tue, Mar 17, 2020 at 11:53:31AM +0530, Kajol Jain wrote:
>
> SNIP
>
>> diff --git a/tools/perf/arch/powerpc/util/header.c
>> b/tools/perf/arch/powerpc/util/header.c
>> index 3b4cdfc5efd6..dcc3c6ab2e67 100644
>> --- a/tools/perf/arch/powerpc/util/heade
Le 18/03/2020 à 12:35, Michael Ellerman a écrit :
Christophe Leroy writes:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
Currently we assume that we have only one watchpoint supported by hw.
Get rid of that assumption and use a dynamic loop instead. This should
make supporting more watchpoin
On 3/18/20 11:02 AM, Michal Hocko wrote:
> On Wed 18-03-20 12:58:07, Srikar Dronamraju wrote:
>> Calling kmalloc_node() on a possible node which is not yet online can
>> lead to panic. Currently node_present_pages() doesn't verify the node is
>> online before accessing the pgdat for the node. Howe
> On 13-Mar-2020, at 11:36 PM, Segher Boessenkool
> wrote:
>
> On Fri, Mar 13, 2020 at 01:49:07PM -0400, Athira Rajeev wrote:
>> Sampled instruction address register (SIER), is a PMU register,
>
> SIER stands for "Sampled Instruction Event Register", instead. With that
> change, your patch
The Sampled Instruction Event Register (SIER) is a PMU register that
captures architecture state for a given sample. And sier_user_mask,
defined in commit 330a1eb7775b ("powerpc/perf: Core EBB support for 64-bit
book3s"), defines the architected bits that need to be saved from the SPR.
Currently all of the
On 3/17/20 4:29 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
Currently we assume that we have only one watchpoint supported by hw.
Get rid of that assumption and use a dynamic loop instead. This should
make supporting more watchpoints very easy.
I think using '
On 3/17/20 8:40 PM, Jiri Olsa wrote:
> On Tue, Mar 17, 2020 at 11:53:30AM +0530, Kajol Jain wrote:
>> This patch refactors the metricgroup__add_metric function, moving
>> part of it to a new function, metricgroup__add_metric_param.
>> No logic change.
>>
>> Signed-off-by: Kajol Jain
>> ---
>> tool
On 3/17/20 8:40 PM, Jiri Olsa wrote:
> On Tue, Mar 17, 2020 at 11:53:31AM +0530, Kajol Jain wrote:
>
> SNIP
>
>> +static int metricgroup__add_metric_runtime_param(struct strbuf *events,
>> +struct list_head *group_list, struct pmu_event *pe)
>> +{
>> +int i, count;
>> +
On 3/17/20 4:38 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
ptrace and perf watchpoints on powerpc behave differently. Ptrace
On the 8xx, ptrace generates signal after executing the instruction.
8xx logic is unchanged. I should have mentioned "Book3s DAWR
On 3/17/20 4:40 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
Xmon allows overwriting breakpoints because it supports only
one DAWR. But with multiple DAWRs, overwriting becomes ambiguous
or unnecessarily complicated. So let's not allow it.
Could we drop t
+static int free_data_bpt(void)
This name suggests the function frees a breakpoint.
I guess it should be find_free_data_bpt()
Sure.
Thanks,
Ravi
On 3/17/20 3:44 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:57, Ravi Bangoria a écrit :
The future Power architecture is introducing a second DAWR. Rename the
current DAWR macros as:
s/SPRN_DAWR/SPRN_DAWR0/
s/SPRN_DAWRX/SPRN_DAWRX0/
I think you should tell that DAWR0 and DAWRX0 is the real n
Michael Ellerman writes:
> The previous commit reduced the amount of code that is run before we
> setup a paca. However there are still a few remaining functions that
> run with no paca, or worse, with an arbitrary value in r13 that will
> be used as a paca pointer.
>
> In particular the stack pr
On 3/17/20 8:36 PM, Jiri Olsa wrote:
> On Tue, Mar 17, 2020 at 11:53:31AM +0530, Kajol Jain wrote:
>
> SNIP
>
>> diff --git a/tools/perf/util/expr.h b/tools/perf/util/expr.h
>> index 0938ad166ece..7786829b048b 100644
>> --- a/tools/perf/util/expr.h
>> +++ b/tools/perf/util/expr.h
>> @@ -17,12
On Wed 18-03-20 12:53:32, Vlastimil Babka wrote:
[...]
> Yes. So here's an alternative proposal for fixing the current situation in
> SLUB,
> before the long-term solution of having all possible nodes provide valid pgdat
> with zonelists:
>
> - fix SLUB with the hunk at the end of this mail - the
On 3/16/20 8:35 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:57, Ravi Bangoria a écrit :
So far, powerpc Book3S code has been written with an assumption of only
one watchpoint. But the future Power architecture is introducing a second
watchpoint register (DAWR). Even though this patchset does n
On 03/17/20 at 11:49am, David Hildenbrand wrote:
> Distributions nowadays use udev rules ([1] [2]) to specify if and
> how to online hotplugged memory. The rules seem to get more complex with
> many special cases. Due to the various special cases,
> CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE cannot be us
Thanks for the reviews Daniel, I'll use your testcases and address the
issues you found. I still have some questions below:
On 18/03/2020 03:18, Daniel Axtens wrote:
Raphael Moreira Zinsly writes:
Include a decompression testcase for the powerpc NX-GZIP
engine.
I compiled gzip with the AF
Le 18/03/2020 à 13:37, Ravi Bangoria a écrit :
On 3/17/20 4:40 PM, Christophe Leroy wrote:
Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
Xmon allows overwriting breakpoints because it supports only
one DAWR. But with multiple DAWRs, overwriting becomes ambiguous
or unnecessary com
On 18.03.20 14:05, Baoquan He wrote:
> On 03/17/20 at 11:49am, David Hildenbrand wrote:
>> Distributions nowadays use udev rules ([1] [2]) to specify if and
>> how to online hotplugged memory. The rules seem to get more complex with
>> many special cases. Due to the various special cases,
>> CONFIG
On Wed 18-03-20 21:05:17, Baoquan He wrote:
> On 03/17/20 at 11:49am, David Hildenbrand wrote:
> > Distributions nowadays use udev rules ([1] [2]) to specify if and
> > how to online hotplugged memory. The rules seem to get more complex with
> > many special cases. Due to the various special cases,
Baoquan He writes:
> On 03/17/20 at 11:49am, David Hildenbrand wrote:
>> Distributions nowadays use udev rules ([1] [2]) to specify if and
>> how to online hotplugged memory. The rules seem to get more complex with
>> many special cases. Due to the various special cases,
>> CONFIG_MEMORY_HOTPLUG_
On 03/18/20 at 02:58pm, Vitaly Kuznetsov wrote:
> Baoquan He writes:
>
> > On 03/17/20 at 11:49am, David Hildenbrand wrote:
> >> Distributions nowadays use udev rules ([1] [2]) to specify if and
> >> how to online hotplugged memory. The rules seem to get more complex with
> >> many special cases.
On 03/18/20 at 02:54pm, Michal Hocko wrote:
> On Wed 18-03-20 21:05:17, Baoquan He wrote:
> > On 03/17/20 at 11:49am, David Hildenbrand wrote:
> > > Distributions nowadays use udev rules ([1] [2]) to specify if and
> > > how to online hotplugged memory. The rules seem to get more complex with
> > >
Sachin reports [1] a crash in SLUB __slab_alloc():
BUG: Kernel NULL pointer dereference on read at 0x73b0
Faulting instruction address: 0xc03d55f4
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in:
CPU: 19 PID: 1 Com
On 03/18/20 at 02:50pm, David Hildenbrand wrote:
> On 18.03.20 14:05, Baoquan He wrote:
> > On 03/17/20 at 11:49am, David Hildenbrand wrote:
> >> Distributions nowadays use udev rules ([1] [2]) to specify if and
> >> how to online hotplugged memory. The rules seem to get more complex with
> >> many
Baoquan He writes:
> Is there a reason Hyper-V needs to boot with small memory and then
> enlarge it with huge memory? Since it's a real case in Hyper-V, I guess
> there must be a reason; I am just curious.
>
It doesn't really *need* to but this can be utilized in e.g. 'hot
standby' schemes I believe. Also
* Pratik Rajesh Sampat [2020-03-17 19:40:16]:
> Define a bitmask interface to determine support for the Self Restore,
> Self Save or both.
>
> Also define an interface to determine the preference of that SPR to
> be strictly saved or restored or encapsulated with an order of preference.
>
> The
* Pratik Rajesh Sampat [2020-03-17 19:40:17]:
> This commit introduces and leverages the Self save API which OPAL now
> supports.
>
> Add the new Self Save OPAL API call in the list of OPAL calls.
> Implement the self saving of the SPRs based on the support populated
> while respecting its pref
* Pratik Rajesh Sampat [2020-03-17 19:40:18]:
> Parse the device tree for nodes self-save, self-restore and populate
> support for the preferred SPRs based on what was advertised by the device
> tree.
>
> Signed-off-by: Pratik Rajesh Sampat
> Reviewed-by: Ram Pai
Reviewed-by: Vaidyanathan Sriniv
On Wed, Mar 18, 2020 at 03:42:19PM +0100, Vlastimil Babka wrote:
> This is a PowerPC platform with following NUMA topology:
>
> available: 2 nodes (0-1)
> node 0 cpus:
> node 0 size: 0 MB
> node 0 free: 0 MB
> node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> 25 26 2
On 3/18/20 5:06 PM, Bharata B Rao wrote:
> On Wed, Mar 18, 2020 at 03:42:19PM +0100, Vlastimil Babka wrote:
>> This is a PowerPC platform with following NUMA topology:
>>
>> available: 2 nodes (0-1)
>> node 0 cpus:
>> node 0 size: 0 MB
>> node 0 free: 0 MB
>> node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11
On 3/18/20 5:06 PM, Bharata B Rao wrote:
>> @@ -2568,12 +2566,15 @@ static void *___slab_alloc(struct kmem_cache *s,
>> gfp_t gfpflags, int node,
>> redo:
>>
>> if (unlikely(!node_match(page, node))) {
>> -int searchnode = node;
>> -
>> -if (node != NUMA_NO_NODE &
el.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-randconfig-a001-20200318 (attached as .config)
compiler: powerpc-linux-gcc (GCC) 9.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/
Hi Maddy,
On 3/17/20 1:50 AM, maddy wrote:
> On 3/13/20 4:08 AM, Kim Phillips wrote:
>> On 3/11/20 11:00 AM, Ravi Bangoria wrote:
>>> On 3/6/20 3:36 AM, Kim Phillips wrote:
> On 3/3/20 3:55 AM, Kim Phillips wrote:
>> On 3/2/20 2:21 PM, Stephane Eranian wrote:
>>> On Mon, Mar 2, 2020 at
Recent cleanup from Sean Christopherson introduced a use-after-free
condition that crashes the kernel when shutting down the VM with
PR KVM. It went unnoticed so far because PR isn't tested/used much
these days (mostly used for nested on POWER8, not supported on POWER9
where HV should be used for n
With PR KVM, shutting down a VM causes the host kernel to crash:
[ 314.219284] BUG: Unable to handle kernel data access on read at
0xc0080176c638
[ 314.219299] Faulting instruction address: 0xc00800d4ddb0
cpu 0x0: Vector: 300 (Data Access) at [c0036da077a0]
pc: c00800d4ddb0:
This is only relevant to PR KVM. Make that obvious by moving the
function declaration to the Book3s header and renaming it with
a _pr suffix.
Signed-off-by: Greg Kurz
---
arch/powerpc/include/asm/kvm_ppc.h | 1 -
arch/powerpc/kvm/book3s.h          | 1 +
arch/powerpc/kvm/book3s_32_mmu_ho
These are only used by HV KVM and BookE, and in both cases they are
nops.
Signed-off-by: Greg Kurz
---
arch/powerpc/include/asm/kvm_ppc.h | 2 --
arch/powerpc/kvm/book3s.c          | 5 -
arch/powerpc/kvm/book3s_hv.c       | 6 --
arch/powerpc/kvm/book3s_pr.c       | 1 -
arc
On Wed, Mar 18, 2020 at 06:43:30PM +0100, Greg Kurz wrote:
> It turns out that this is only relevant to PR KVM actually. And both
> 32 and 64 backends need vcpu->arch.book3s to be valid when calling
> kvmppc_mmu_destroy_pr(). So instead of calling kvmppc_mmu_destroy()
> from kvm_arch_vcpu_destroy()
On Tue, Mar 17, 2020 at 11:16:34AM +0100, Christophe Leroy wrote:
>
>
> Le 09/03/2020 à 09:57, Ravi Bangoria a écrit :
> >Future Power architecture is introducing second DAWR. Add SPRN_ macros
> >for the same.
>
> I'm not sure this is called 'macros'. For me a macro is something more
> complex.
On Mon, 16 Mar 2020, Michal Hocko wrote:
> > We can dynamically number the nodes right? So just make sure that the
> > firmware properly creates memory on node 0?
>
> Are you suggesting that the OS would renumber NUMA nodes coming
> from FW just to satisfy node 0 existence? If yes then I believe t
On Wed, 18 Mar 2020, Srikar Dronamraju wrote:
> For a memoryless or offline nodes, node_numa_mem refers to a N_MEMORY
> fallback node. Currently kernel has an API set_numa_mem that sets
> node_numa_mem for memoryless node. However this API cannot be used for
> offline nodes. Hence all offline node
On Tue, Mar 17, 2020 at 09:08:19AM -0400, Stefan Berger wrote:
> From: Stefan Berger
>
> This patch fixes the following problem when the ibmvtpm driver
> is built as a module:
>
> ERROR: modpost: "tpm2_get_cc_attrs_tbl" [drivers/char/tpm/tpm_ibmvtpm.ko]
> undefined!
> make[1]: *** [scripts/Make
On 3/18/20 3:42 PM, Jarkko Sakkinen wrote:
On Tue, Mar 17, 2020 at 09:08:19AM -0400, Stefan Berger wrote:
From: Stefan Berger
This patch fixes the following problem when the ibmvtpm driver
is built as a module:
ERROR: modpost: "tpm2_get_cc_attrs_tbl" [drivers/char/tpm/tpm_ibmvtpm.ko]
undefin
From: Logan Gunthorpe
The call to init_completion() in mrpc_queue_cmd() can theoretically
race with the call to poll_wait() in switchtec_dev_poll().
  poll()                          write()
    switchtec_dev_poll()            switchtec_dev_write()
      poll_wait(&s->comp.wait);       mrpc_queue_cmd()
The PS3 notification interrupt and kthread use a hacked-up completion to
communicate. Since we want to change the completion implementation and
this is abuse anyway, replace it with a simple rcuwait, since there is only
ever the one waiter.
AFAICT the kthread uses TASK_INTERRUPTIBLE to not in
This is the second version of this work. The first one can be found here:
https://lore.kernel.org/r/20200313174701.148376-1-bige...@linutronix.de
Changes since V1:
- Split the PCI/switchtec patch (picked up the fix from Logan) and
reworked the change log.
- Addressed Linus feedback v
Extend rcuwait_wait_event() with a state variable so that it is not
restricted to UNINTERRUPTIBLE waits.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
Cc: Oleg Nesterov
Cc: Davidlohr Bueso
---
include/linux/rcuwait.h | 12 ++--
kernel/locking/percpu-rwse
From: Thomas Gleixner
seqlock consists of a sequence counter and a spinlock_t which is used to
serialize the writers. spinlock_t is substituted by a "sleeping" spinlock
on PREEMPT_RT enabled kernels which breaks the usage in the timekeeping
code as the writers are executed in hard interrupt and t
From: Thomas Gleixner
The completion usage in this driver is interesting:
- it uses a magic complete function which according to the comment was
implemented by invoking complete() four times in a row because
complete_all() was not exported at that time.
- it uses an open coded wait/
From: Sebastian Andrzej Siewior
The poll callback is using the completion wait queue and sticks it into
poll_wait() to wake up pollers after a command has completed.
This works to some extent, but cannot provide EPOLLEXCLUSIVE support
because the waker side uses complete_all() which unconditiona
In order to avoid future header hell, remove the inclusion of
proc_fs.h from acpi_bus.h. All it needs is a forward declaration of a
struct.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
---
drivers/platform/x86/dell-smo8800.c |1 +
drivers/platfor
ep_io() uses a completion on the stack and open codes the waiting with:
wait_event_interruptible (done.wait, done.done);
and
wait_event (done.wait, done.done);
This waits in non-exclusive mode for complete(), but there is no reason to
do so because the completion can only be waited for by the tas
From: Thomas Gleixner
The kernel provides a variety of locking primitives. The nesting of these
lock types and the implications of them on RT enabled kernels are nowhere
documented.
Add initial documentation.
Signed-off-by: Thomas Gleixner
---
V2: Addressed review comments from Randy
---
Docum
From: Thomas Gleixner
As a preparation to use simple wait queues for completions:
- Provide swake_up_all_locked() to support complete_all()
- Make __prepare_to_swait() publicly available
This is done to enable the usage of complete() within truly atomic contexts
on a PREEMPT_RT enabled kernel
From: Thomas Gleixner
completion uses a wait_queue_head_t to enqueue waiters.
wait_queue_head_t contains a spinlock_t to protect the list of waiters
which excludes it from being used in truly atomic context on a PREEMPT_RT
enabled kernel.
The spinlock in the wait queue head cannot be replaced b
From: Madalin Bucur
[ Upstream commit 26d5bb9e4c4b541c475751e015072eb2cbf70d15 ]
FMAN DMA read or writes under heavy traffic load may cause FMAN
internal resource leak; thus stopping further packet processing.
The FMAN internal queue can overflow when FMAN splits single
read or write transactio
On Wed, Mar 18, 2020 at 09:43:03PM +0100, Thomas Gleixner wrote:
> From: Logan Gunthorpe
>
> The call to init_completion() in mrpc_queue_cmd() can theoretically
> race with the call to poll_wait() in switchtec_dev_poll().
>
> poll() write()
> switchtec_dev_poll()
On Wed, Mar 18, 2020 at 09:43:04PM +0100, Thomas Gleixner wrote:
> From: Sebastian Andrzej Siewior
>
> The poll callback is using the completion wait queue and sticks it into
> poll_wait() to wake up pollers after a command has completed.
>
> This works to some extent, but cannot provide EPOLLEX
On Wed, Mar 18, 2020 at 12:44:52PM +0100, Christophe Leroy wrote:
> Le 18/03/2020 à 12:35, Michael Ellerman a écrit :
> >Christophe Leroy writes:
> >>Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
> >>>Currently we assume that we have only one watchpoint supported by hw.
> >>>Get rid of that assum
The routine hugetlb_add_hstate prints a warning if the hstate already
exists. This was originally done as part of kernel command line
parsing. If 'hugepagesz=' was specified more than once, the warning
pr_warn("hugepagesz= specified twice, ignoring\n");
would be printed.
Some architectur
Now that architectures provide arch_hugetlb_valid_size(), parsing
of "hugepagesz=" can be done in architecture independent code.
Create a single routine to handle hugepagesz= parsing and remove
all arch specific routines. We can also remove the interface
hugetlb_bad_size() as this is no longer use
The architecture independent routine hugetlb_default_setup sets up
the default huge page size. It has no way to verify if the passed
value is valid, so it accepts it and attempts to validate at a later
time. This requires undocumented cooperation between the arch specific
and arch independent co
Longpeng(Mike) reported a weird message from hugetlb command line processing
and proposed a solution [1]. While the proposed patch does address the
specific issue, there are other related issues in command line processing.
As hugetlbfs evolved, updates to command line processing have been made to
With all hugetlb page processing done in a single file, clean up the code:
- Make code match desired semantics
- Update documentation with semantics
- Make all warning and error messages start with 'HugeTLB:'.
- Consistently name command line parsing routines.
- Add comments to code
- Describe som
On Wed, Mar 18, 2020 at 03:06:31PM -0700, Mike Kravetz wrote:
> The architecture independent routine hugetlb_default_setup sets up
> the default huge pages size. It has no way to verify if the passed
> value is valid, so it accepts it and attempts to validate at a later
> time. This requires undo
On 2020-03-18 2:43 p.m., Thomas Gleixner wrote:
> From: Sebastian Andrzej Siewior
>
> The poll callback is using the completion wait queue and sticks it into
> poll_wait() to wake up pollers after a command has completed.
>
> This works to some extent, but cannot provide EPOLLEXCLUSIVE suppor
Hi Mike,
The series looks like a great idea to me. One nit on the x86 bits,
though...
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index 5bfd5aef5378..51e6208fdeec 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -181,16 +181,25 @@ hugetlb
Raphael M Zinsly writes:
> Thanks for the reviews Daniel, I'll use your testcases and address the
> issues you found. I still have some questions below:
>
> On 18/03/2020 03:18, Daniel Axtens wrote:
>> Raphael Moreira Zinsly writes:
>>
>>> Include a decompression testcase for the powerpc NX-G
On 2020-03-18 2:43 p.m., Thomas Gleixner wrote:
> There is no semantical or functional change:
>
> - completions use the exclusive wait mode which is what swait provides
>
> - complete() wakes one exclusive waiter
>
> - complete_all() wakes all waiters while holding the lock which prote
Raphael Moreira Zinsly writes:
> Add files to be able to compress and decompress files using the
> powerpc NX-GZIP engine.
>
> Signed-off-by: Bulent Abali
> Signed-off-by: Raphael Moreira Zinsly
> ---
> .../powerpc/nx-gzip/inc/copy-paste.h | 54 ++
> .../selftests/powerpc/nx-gzip/inc
On Wed, Mar 18, 2020 at 09:43:10PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> The kernel provides a variety of locking primitives. The nesting of these
> lock types and the implications of them on RT enabled kernels is nowhere
> documented.
>
> Add initial documentation.
>
> Sign
On 3/18/20 3:09 PM, Will Deacon wrote:
> On Wed, Mar 18, 2020 at 03:06:31PM -0700, Mike Kravetz wrote:
>> The architecture independent routine hugetlb_default_setup sets up
>> the default huge pages size. It has no way to verify if the passed
>> value is valid, so it accepts it and attempts to val
On 3/18/20 3:15 PM, Dave Hansen wrote:
> Hi Mike,
>
> The series looks like a great idea to me. One nit on the x86 bits,
> though...
>
>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>> index 5bfd5aef5378..51e6208fdeec 100644
>> --- a/arch/x86/mm/hugetlbpage.c
>> +++ b/arch
Segher Boessenkool writes:
> On Wed, Mar 18, 2020 at 12:44:52PM +0100, Christophe Leroy wrote:
>> Le 18/03/2020 à 12:35, Michael Ellerman a écrit :
>> >Christophe Leroy writes:
>> >>Le 09/03/2020 à 09:58, Ravi Bangoria a écrit :
>> >>>Currently we assume that we have only one watchpoint supported
On 3/18/20 3:52 PM, Mike Kravetz wrote:
> Sounds good. I'll incorporate those changes into a v2, unless someone
> else has a different opinion.
>
> BTW, this patch should not really change the way the code works today.
> It is mostly a movement of code. Unless I am missing something, the
>