On Wed, Aug 12, 2015 at 03:40:56PM +0200, Christophe Leroy wrote:
> /* Insert level 1 index */
> rlwimi	r11, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29
> lwz	r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11)	/* Get the level 1 entry */
> +	mtcr	r11
From: Anshuman Khandual
This patch defines an enum for the three bolted SLB indexes we use.
Switch the functions that take the indexes as an argument to use the
enum.
Signed-off-by: Anshuman Khandual
Signed-off-by: Michael Ellerman
---
v2: Use index rather than slot as that's what the ISA docs use.
For no reason other than it looks ugly.
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/slb.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 0c7115fd314b..515730e499fe 100644
--- a/arch/powerpc/mm/slb.c
+++ b
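For reference, the enum being introduced would look roughly like this (a sketch reconstructed from the description above; the exact names are assumptions):

    /* Sketch: the three bolted SLB indexes as an enum. */
    enum slb_index {
            LINEAR_INDEX    = 0,    /* Kernel linear mapping */
            VMALLOC_INDEX   = 1,    /* Kernel virtual mapping */
            KSTACK_INDEX    = 2,    /* Kernel stack mapping */
    };

Functions that previously took a raw integer index would then take an enum slb_index argument instead.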
Since we moved the "lock" to be the first element of
struct tlb_core_data in commit 82d86de25b9c ("powerpc/e6500: Make TLB
lock recursive"), this macro is not used by any code. Just delete it.
Signed-off-by: Kevin Hao
---
arch/powerpc/kernel/asm-offsets.c | 1 -
1 file changed, 1 deletion(-)
dif
It makes no sense to put the instructions for calculating the lock
value (cpu number + 1) and the clearing of the eq bit of cr1 inside the
lbarx/stbcx loop. And when the lock is held by the other thread, the
current lock value never equals the lock value used by the current
cpu. So we can skip t
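As a rough illustration of the point, here is a minimal C sketch (the lbarx/stbcx reservation pair is modeled with a GCC compare-and-swap builtin; my_cpu-style numbering and the tcd_lock variable are stand-ins, and only the "cpu number + 1" token scheme comes from the mail):

    #include <stdbool.h>

    static unsigned char tcd_lock;          /* stand-in for the TCD lock byte */

    static void tcd_lock_acquire(unsigned int cpu)
    {
            /* The token is loop-invariant, so compute it once here
             * rather than on every pass through the retry loop. */
            unsigned char token = cpu + 1;
            unsigned char expected;

            do {
                    expected = 0;   /* only take the lock when it is free */
            } while (!__atomic_compare_exchange_n(&tcd_lock, &expected,
                                                  token, false,
                                                  __ATOMIC_ACQUIRE,
                                                  __ATOMIC_RELAXED));
    }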
I didn't find anything unusual. But I think we do need to order the
load/store of esel_next when acquiring/releasing the tcd lock. For
acquire, add a data dependency to order the loads of lock and esel_next.
For release, even though there is already an "isync" here, it doesn't
guarantee any memory access ord
In the original design, VFs are grouped to enable a larger number of VFs
in the system when the VF BAR is bigger than 64MB. This design has a flaw:
an error on one VF will interfere with the other VFs in the same group.
This patch series changes this design by using the M64 BAR in Single PE
mode to cover only
The alignment of an IOV BAR on the PowerNV platform is the total size of
the IOV BAR. No matter whether the IOV BAR is expanded by
roundup_pow_of_two(total_vfs) or by the maximum PE number (256), the total
size can be calculated as (vfs_expanded * VF_BAR_size).
This patch simplifies the pnv_
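The simplified helper would then reduce to a single multiplication, roughly (a sketch; pci_get_pdn() and the vfs_expanded field are assumed from context):

    /* Sketch: IOV BAR alignment on PowerNV is just the total IOV BAR
     * size, i.e. the per-VF BAR size times the expanded VF count. */
    static resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev,
                                                          int resno)
    {
            struct pci_dn *pdn = pci_get_pdn(pdev);

            return pdn->vfs_expanded * pci_iov_resource_size(pdev, resno);
    }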
On PHB_IODA2, we enable SRIOV devices by mapping the IOV BAR with M64
BARs. If an SRIOV device's IOV BAR is not 64bit-prefetchable, it is not
assigned from the 64bit prefetchable window, which means an M64 BAR can't
work on it.
This patch makes this check explicit.
Signed-off-by: Wei Yang
---
arch/powerpc/plat
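The explicit check presumably boils down to testing the resource flags, along these lines (a sketch; the helper name is made up):

    /* Sketch: an IOV BAR can only be backed by an M64 BAR if it is
     * both 64-bit and prefetchable, since only such BARs are assigned
     * from the 64bit prefetchable window. */
    static bool pnv_iov_bar_is_m64_capable(struct resource *res)
    {
            return (res->flags & IORESOURCE_MEM_64) &&
                   (res->flags & IORESOURCE_PREFETCH);
    }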
In the current implementation, when the VF BAR is bigger than 64MB, we use
4 M64 BARs in Single PE mode to cover the number of VFs that need to be
enabled. Doing so puts several VFs into one VF group, which leads to
interference between VFs in the same group.
This patch changes the design by using one
Each VF can have at most 6 BARs. When the total BAR size exceeds the
gate, expanding it will also exhaust the M64 window.
This patch limits the boundary by checking the total VF BAR size instead
of each individual BAR.
Signed-off-by: Wei Yang
---
arch/powerpc/platforms/powernv/pci-ioda.c
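The boundary check therefore moves from each BAR to the sum over all six possible VF BARs, something like (a sketch; the gate value and helper name are assumptions):

    /* Sketch: compare the total VF BAR size, not each individual BAR,
     * against the gate. */
    static bool pnv_vf_bars_exceed_gate(struct pci_dev *pdev,
                                        resource_size_t gate)
    {
            resource_size_t total = 0;
            int i;

            for (i = 0; i < PCI_SRIOV_NUM_BARS; i++)
                    total += resource_size(&pdev->resource[PCI_IOV_RESOURCES + i]);

            return total > gate;
    }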
At the moment the 64bit-prefetchable window can be at most 64GB, a value
currently read from the device tree. This means that in shared mode the
maximum supported VF BAR size is 64GB/256 = 256MB, and a VF BAR of that
size could exhaust the whole 64bit-prefetchable window. This is a design
decision to set a boundary to
When the M64 BAR is set to Single PE mode, the PE# assigned to a VF can be
sparse.
This patch restructures the code to allocate sparse PE#s for VFs when the
M64 BAR is set to Single PE mode.
Signed-off-by: Wei Yang
---
arch/powerpc/include/asm/pci-bridge.h |2 +-
arch/powerpc/platforms/powernv/
On Wed, Aug 12, 2015 at 03:42:47PM +0300, Boaz Harrosh wrote:
> The support I have suggested and submitted for zone-less sections.
> (In my add_persistent_memory() patchset)
>
> Would work perfectly well and transparent for all such multimedia cases.
> (All hacks removed). In fact I have loaded pme
On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
> I'm assuming that anybody who wants to use the page-less
> scatter-gather lists always does so on memory that isn't actually
> virtually mapped at all, or only does so on sane architectures that
> are cache coherent at a physical lev
On Wed, Aug 12, 2015 at 09:05:15AM -0700, Linus Torvalds wrote:
> [ Again, I'm responding to one random patch - this pattern was in
> other patches too. ]
>
> A question: do we actually expect to mix page-less and pageful SG
> entries in the same SG list?
>
> How does that happen?
Both for DAX
On Thu, Aug 13, 2015 at 09:37:37AM +1000, Julian Calaby wrote:
> I.e. ~90% of this patch set seems to be just mechanically dropping
> BUG_ON()s and converting open coded stuff to use accessor functions
> (which should be macros or get inlined, right?) - and the remaining
> bit is not flushing if we
Most architectures just call into ->dma_supported, but some also return 1
if the method is not present, or 0 if no dma ops are present (although
that should never happen). Consolidate this broader version into
common code.
Also fix h8300, which incorrectly always returned 0, which would have been
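The consolidated helper would then read roughly (a sketch of the broader variant described above):

    /* Sketch: common dma_supported() covering all three behaviours --
     * no dma ops means unsupported, no method means supported,
     * otherwise ask the method. */
    int dma_supported(struct device *dev, u64 mask)
    {
            struct dma_map_ops *ops = get_dma_ops(dev);

            if (!ops)
                    return 0;
            if (!ops->dma_supported)
                    return 1;
            return ops->dma_supported(dev, mask);
    }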
Since 2009 we have a nice asm-generic header implementing lots of DMA API
functions for architectures using struct dma_map_ops, but unfortunately
it's still missing a lot of APIs that all architectures still have to
duplicate.
This series consolidates the remaining functions, although we still
nee
The coherent DMA allocator works the same across all architectures
supporting dma_map operations.
This patch consolidates them and converges the minor differences:
- the debug_dma helpers are now called from all architectures, including
those that were previously missing them
- dma_alloc_from_
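The consolidated allocator presumably has roughly this shape (a sketch; error handling and the dma_alloc_from_coherent() fast path are elided):

    /* Sketch: one common dma_alloc_attrs() that dispatches to the
     * architecture's dma_map_ops and always calls the debug_dma helper. */
    void *dma_alloc_attrs(struct device *dev, size_t size,
                          dma_addr_t *dma_handle, gfp_t flag,
                          struct dma_attrs *attrs)
    {
            struct dma_map_ops *ops = get_dma_ops(dev);
            void *cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);

            debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
            return cpu_addr;
    }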
Most architectures do not support non-coherent allocations and either
define dma_{alloc,free}_noncoherent to their coherent versions or stub
them out.
Openrisc uses dma_{alloc,free}_attrs to implement them, and only MIPS
implements them directly.
This patch moves the Openrisc version to common co
Currently there are three valid implementations of dma_mapping_error:
(1) call ->mapping_error
(2) check for a hardcoded error code
(3) always return 0
This patch provides a common implementation that calls ->mapping_error
if present, then checks for DMA_ERROR_CODE if defined, or otherwise
returns 0.
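Putting the three cases together gives approximately (a sketch):

    /* Sketch: common dma_mapping_error() folding the three existing
     * variants into one implementation. */
    static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
    {
            struct dma_map_ops *ops = get_dma_ops(dev);

            if (ops->mapping_error)
                    return ops->mapping_error(dev, dma_addr);  /* case (1) */
    #ifdef DMA_ERROR_CODE
            return dma_addr == DMA_ERROR_CODE;                 /* case (2) */
    #else
            return 0;                                          /* case (3) */
    #endif
    }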
Almost everyone implements dma_set_mask the same way, although sometimes
that's hidden in a ->set_dma_mask method.
Move this implementation to common code, including a callout to override
the post-check action, and remove duplicate instances in methods as well.
Unfortunately some architectures over
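The common version, with the post-check callout, might look like this (a sketch; arch_dma_set_mask_post() is a hypothetical name for the override hook):

    /* Sketch: common dma_set_mask() with an arch override point for
     * anything that must happen after the mask is validated. */
    int dma_set_mask(struct device *dev, u64 mask)
    {
            if (!dev->dma_mask || !dma_supported(dev, mask))
                    return -EIO;

            arch_dma_set_mask_post(dev, mask);  /* hypothetical callout */
            *dev->dma_mask = mask;
            return 0;
    }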
On Thu, Aug 13, 2015 at 05:04:05PM +0200, Christoph Hellwig wrote:
> diff --git a/arch/arm/include/asm/dma-mapping.h
> b/arch/arm/include/asm/dma-mapping.h
> index 2ae3424..ab521d5 100644
> --- a/arch/arm/include/asm/dma-mapping.h
> +++ b/arch/arm/include/asm/dma-mapping.h
> @@ -175,21 +175,6 @@ s
On Thu, Aug 13, 2015 at 05:04:08PM +0200, Christoph Hellwig wrote:
> diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
> index 1143c4d..260f52a 100644
> --- a/arch/arm/common/dmabounce.c
> +++ b/arch/arm/common/dmabounce.c
> @@ -440,14 +440,6 @@ static void dmabounce_sync_for_d
On 08/13/2015 05:40 PM, Christoph Hellwig wrote:
> On Wed, Aug 12, 2015 at 03:42:47PM +0300, Boaz Harrosh wrote:
>> The support I have suggested and submitted for zone-less sections.
>> (In my add_persistent_memory() patchset)
>>
>> Would work perfectly well and transparent for all such multimedia
On Thu, Aug 13, 2015 at 04:20:40PM +0100, Russell King - ARM Linux wrote:
> > -/*
> > - * Dummy noncoherent implementation. We don't provide a dma_cache_sync
> > - * function so drivers using this API are highlighted with build warnings.
> > - */
>
> I'd like a similar comment to remain after thi
On Thu, Aug 13, 2015 at 04:25:05PM +0100, Russell King - ARM Linux wrote:
> On Thu, Aug 13, 2015 at 05:04:08PM +0200, Christoph Hellwig wrote:
> > diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
> > index 1143c4d..260f52a 100644
> > --- a/arch/arm/common/dmabounce.c
> > +++ b
On Thu, 2015-08-13 at 19:51 +0800, Kevin Hao wrote:
> It makes no sense to put the instructions for calculating the lock
> value (cpu number + 1) and the clearing of eq bit of cr1 in lbarx/stbcx
> loop. And when the lock is acquired by the other thread, the current
> lock value has no chance to equ
Peter Zijlstra [pet...@infradead.org] wrote:
| On Tue, Aug 11, 2015 at 09:14:00PM -0700, Sukadev Bhattiprolu wrote:
| > | +static void __perf_read_group_add(struct perf_event *leader, u64
read_format, u64 *values)
| > | {
| > | + struct perf_event *sub;
| > | + int n = 1; /* skip @nr */
| >
| >
On Thu, Aug 13, 2015 at 01:04:28PM -0700, Sukadev Bhattiprolu wrote:
> | > | +static int perf_read_group(struct perf_event *event,
> | > | + u64 read_format, char __user *buf)
> | > | +{
> | > | + struct perf_event *leader = event->group_leader, *child;
> | >
Hi,
Here is another instruction trace from a kernel context switch trace.
Quite a lot of register and CR save/restore code.
Regards,
Anton
c02943d8 mfcr    r12
c02943dc std     r20,-96(r1)
c02943e0 std     r21,-88(r1)
c02943e4 rldicl. r9,r4,63,63
c0294
The goal of this series is to enhance the DAX I/O path so that all operations
that store data (I/O writes, zeroing blocks, punching holes, etc.) properly
synchronize the stores to media using the PMEM API. This ensures that the data
DAX is writing is durable on media before the operation completes
Update the annotation for the kaddr pointer returned by direct_access()
so that it is a __pmem pointer. This is consistent with the PMEM driver
and with how this direct_access() pointer is used in the DAX code.
Signed-off-by: Ross Zwisler
---
Documentation/filesystems/Locking | 3 ++-
arch/pow
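The annotated prototype presumably ends up looking like this (a sketch based on the description; the argument names are assumptions):

    /* block_device_operations: the kaddr out-parameter now carries
     * the __pmem annotation. */
    long (*direct_access)(struct block_device *bdev, sector_t sector,
                          void __pmem **kaddr, unsigned long *pfn);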
On Thu, Aug 13, 2015 at 9:51 AM, Ross Zwisler
wrote:
> Update the annotation for the kaddr pointer returned by direct_access()
> so that it is a __pmem pointer. This is consistent with the PMEM driver
> and with how this direct_access() pointer is used in the DAX code.
>
> Signed-off-by: Ross Zwi
Hi Christoph,
On Fri, Aug 14, 2015 at 12:35 AM, Christoph Hellwig wrote:
> On Thu, Aug 13, 2015 at 09:37:37AM +1000, Julian Calaby wrote:
>> I.e. ~90% of this patch set seems to be just mechanically dropping
>> BUG_ON()s and converting open coded stuff to use accessor functions
>> (which should b
On Thu, Aug 13, 2015 at 10:11:06PM +0800, Wei Yang wrote:
>On PHB_IODA2, we enable SRIOV devices by mapping IOV BAR with M64 BARs. If
>a SRIOV device's IOV BAR is not 64bit-prefetchable, this is not assigned
>from 64bit prefetchable window, which means M64 BAR can't work on it.
>
>This patch makes
On Thu, Aug 13, 2015 at 10:11:08PM +0800, Wei Yang wrote:
>In current implementation, when VF BAR is bigger than 64MB, it uses 4 M64
>BARs in Single PE mode to cover the number of VFs required to be enabled.
>By doing so, several VFs would be in one VF Group and leads to interference
>between VFs i
On Thu, Aug 13, 2015 at 10:11:09PM +0800, Wei Yang wrote:
>At the moment 64bit-prefetchable window can be maximum 64GB, which is
>currently got from device tree. This means that in shared mode the maximum
>supported VF BAR size is 64GB/256=256MB. While this size could exhaust the
>whole 64bit-prefe
On Thu, Aug 13, 2015 at 10:11:10PM +0800, Wei Yang wrote:
>Each VF could have 6 BARs at most. When the total BAR size exceeds the
>gate, after expanding it will also exhaust the M64 Window.
>
>This patch limits the boundary by checking the total VF BAR size instead of
>the individual BAR.
>
>Signed
On Thu, Aug 13, 2015 at 10:11:11PM +0800, Wei Yang wrote:
>When M64 BAR is set to Single PE mode, the PE# assigned to VF could be
>sparse.
>
>This patch restructures the patch to allocate sparse PE# for VFs when M64
>BAR is set to Single PE mode.
>
>Signed-off-by: Wei Yang
>---
> arch/powerpc/incl
On Thu, Aug 13, 2015 at 10:11:07PM +0800, Wei Yang wrote:
>The alignment of IOV BAR on PowerNV platform is the total size of the IOV
>BAR. No matter whether the IOV BAR is extended with number of
>roundup_pow_of_two(total_vfs) or number of max PE number (256), the total
>size could be calculated by
On Wed, Aug 12, 2015 at 09:55:25PM +1000, Michael Ellerman wrote:
> The paca display is already more than 24 lines, which can be problematic
> if you have an old school 80x24 terminal, or more likely you are on a
> virtual terminal which does not scroll for whatever reason.
>
> We'd like to expand
On Wed, 2015-08-05 at 14:03 +1000, Anton Blanchard wrote:
> Hi,
>
> While looking at traces of kernel workloads, I noticed places where gcc
> used a large number of non volatiles. Some of these functions
> did very little work, and we spent most of our time saving the
> non volatiles to the stack
On Wed, 2015-08-12 at 21:06 +0200, Alexander Graf wrote:
>
> On 10.08.15 17:27, Nicholas Krause wrote:
> > This fixes the wrapper functions kvm_umap_hva_hv and the function
> > kvm_unmap_hav_range_hv to return the return value of the function
> > kvm_handle_hva or kvm_handle_hva_range that they ar
On Thu, 2015-08-06 at 18:54 +0530, Anshuman Khandual wrote:
> On 08/04/2015 03:27 PM, Michael Ellerman wrote:
> > On Mon, 2015-13-07 at 08:16:06 UTC, Anshuman Khandual wrote:
> >> This patch enables facility unavailable exceptions for generic facility,
> >> FPU, ALTIVEC and VSX in /proc/interrupts
The paca display is already more than 24 lines, which can be problematic
if you have an old school 80x24 terminal, or more likely you are on a
virtual terminal which does not scroll for whatever reason.
This adds an optional letter to the "dp" and "dpa" xmon commands
("dpp" and "dppa"), which will
Hi Eduardo,
In a previous mail I asked questions about including header files in the
device tree. Don't bother; I have already figured out the solution.
Another question is about cpu cooling:
I found out that there is no explicit call to register a cpu cooling
device in the of-thermal style driver
On Thu, Aug 13, 2015 at 7:31 AM, Christoph Hellwig wrote:
> On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
>> I'm assuming that anybody who wants to use the page-less
>> scatter-gather lists always does so on memory that isn't actually
>> virtually mapped at all, or only does so o
On Thu, 2015-08-13 at 19:51 +0800, Kevin Hao wrote:
> I didn't find anything unusual. But I think we do need to order the
> load/store of esel_next when acquire/release tcd lock. For acquire,
> add a data dependency to order the loads of lock and esel_next.
> For release, even there already have a
On Fri, Aug 14, 2015 at 11:04:58AM +1000, Gavin Shan wrote:
>On Thu, Aug 13, 2015 at 10:11:07PM +0800, Wei Yang wrote:
>>The alignment of IOV BAR on PowerNV platform is the total size of the IOV
>>BAR. No matter whether the IOV BAR is extended with number of
>>roundup_pow_of_two(total_vfs) or numbe
From: Wang Dongsheng
Signed-off-by: Wang Dongsheng
---
*V2*
No changes.
diff --git a/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
b/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
index 9e9f7e2..9770d02 100644
--- a/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/t1040si-
On Fri, Aug 14, 2015 at 10:52:21AM +1000, Gavin Shan wrote:
>On Thu, Aug 13, 2015 at 10:11:08PM +0800, Wei Yang wrote:
>>In current implementation, when VF BAR is bigger than 64MB, it uses 4 M64
>>BARs in Single PE mode to cover the number of VFs required to be enabled.
>>By doing so, several VFs w
On Fri, Aug 14, 2015 at 11:03:00AM +1000, Gavin Shan wrote:
>On Thu, Aug 13, 2015 at 10:11:11PM +0800, Wei Yang wrote:
>>When M64 BAR is set to Single PE mode, the PE# assigned to VF could be
>>sparse.
>>
>>This patch restructures the patch to allocate sparse PE# for VFs when M64
>>BAR is set to Si
On Thu, 2015-08-13 at 20:30 -0700, Dan Williams wrote:
> On Thu, Aug 13, 2015 at 7:31 AM, Christoph Hellwig wrote:
> > On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
> >> I'm assuming that anybody who wants to use the page-less
> >> scatter-gather lists always does so on memory th
From: James Bottomley
Date: Thu, 13 Aug 2015 20:59:20 -0700
> On Thu, 2015-08-13 at 20:30 -0700, Dan Williams wrote:
>> On Thu, Aug 13, 2015 at 7:31 AM, Christoph Hellwig wrote:
>> > On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
>> >> I'm assuming that anybody who wants to use
Hello Hongtao,
On Fri, Aug 14, 2015 at 03:15:22AM +0000, Hongtao Jia wrote:
> Hi Eduardo,
>
> In previous mail I asked questions about including header files in device
> tree.
> Don't bother, I have already figured out the solution.
>
> Another questions is about cpu cooling:
> I found out that
From: Wang Dongsheng
SCFG provides SoC-specific configuration and status registers for
the chip. Add this for the powerpc platform.
Signed-off-by: Wang Dongsheng
---
*V2*
- Remove scfg description in board.txt and create scfg.txt for scfg.
- Change "fsl,-scfg" to "fsl,-scfg"
diff --git a/Documenta
On Wed, Aug 05, 2015 at 12:38:31PM +0530, Gautham R. Shenoy wrote:
> Section 3.7 of Version 1.2 of the Power8 Processor User's Manual
> prescribes that updates to HID0 be preceded by a SYNC instruction and
> followed by an ISYNC instruction (Page 91).
>
> Create an inline function name update_powe
On Thu, May 21, 2015 at 01:57:04PM +0530, Gautham R. Shenoy wrote:
> In guest_exit_cont we call kvmhv_commence_exit which expects the trap
> number as the argument. However r3 doesn't contain the trap number at
> this point and as a result we would be calling the function with a
> spurious trap num
Acked-by: Ian Munsie
Acked-by: Ian Munsie
Excerpts from Daniel Axtens's message of 2015-08-13 14:11:20 +1000:
> +/* Only warn if we detached while the link was OK.
Only because mpe is sure to pick this up (I personally don't mind) -
block comments should start with /* on a line by itself.
> +/* If the a
Excerpts from Daniel Axtens's message of 2015-08-13 14:11:21 +1000:
> Previously the SPA was allocated and freed upon entering and leaving
> AFU-directed mode. This causes some issues for error recovery - contexts
> hold a pointer inside the SPA, and they may persist after the AFU has
> been detach
Acked-by: Ian Munsie
Acked-by: Ian Munsie
Hi Tabi,
> -Original Message-
> From: Timur Tabi [mailto:ti...@tabi.org]
> Sent: Tuesday, March 25, 2014 11:55 PM
> To: Wang Dongsheng-B40534
> Cc: Wood Scott-B07421; Jin Zhengxiong-R64188; Li Yang-Leo-R58472; linuxppc-
> d...@lists.ozlabs.org; linux-fb...@vger.kernel.org
> Subject: Re: [P
From: Jason Jin
For deep sleep, the DIU module is powered off; when waking up from deep
sleep, its registers need to be reinitialized.
Signed-off-by: Jason Jin
Signed-off-by: Wang Dongsheng
---
*v2*
Changes:
- int i -> unsigned int i.
Remove:
- struct mfb_info *mfbi;
diff --git a/drivers/vi
This patch plugs the leak of irq_bitmap, which is allocated as part of the
initialization of the cxl_context struct during the call to
afu_allocate_irqs(). The bitmap is now released during the call to
afu_release_irqs().
Reported-by: Matthew R. Ochs
Signed-off-by: Vaibhav Jain
---
drivers/misc/cxl/irq.c |
In the complete hotplug case, EEH PEs are supposed to be released
and set to NULL. Normally, this is done by eeh_remove_device(),
which is called from pcibios_release_device().
However, if something is holding a kref to the device, it will not
be released, and the PE will remain. eeh_add_device_la
Excerpts from Daniel Axtens's message of 2015-08-13 14:11:24 +1000:
> +/* This should contain *only* operations that can safely be done in
> + * both creation and recovery.
> + */
> +static int cxl_configure_adapter(struct cxl *adapter, struct pci_dev *dev)
> {
> -struct cxl *adapter;
> -b
Excerpts from Daniel Axtens's message of 2015-08-13 14:11:25 +1000:
> +rc = cxl_map_slice_regs(afu, adapter, dev);
> +if (rc)
> +return rc;
>
> -if ((rc = cxl_map_slice_regs(afu, adapter, dev)))
Like the previous patch, mixing this coding style change in with this
patch makes
Acked-by: Ian Munsie
Acked-by: Ian Munsie
Acked-by: Ian Munsie
Once cxlflash has been merged we might drop this, but until then:
Acked-by: Ian Munsie
Unlike normal hardware PMCs, the 24x7 counters in Power8 are stored in
memory and accessed via a hypervisor call (HCALL). A major aspect of the
HCALL is that it allows retrieving _several_ counters at once (unlike
regular PMCs, which are read one at a time). By reading several counters
at once, we
Currently, the PMU interface allows reading only one counter at a time.
But some PMUs, like the 24x7 counters in Power, support reading several
counters at once. To leverage this functionality, extend the transaction
interface to support a "transaction type".
The first type, PERF_PMU_TXN_ADD, refers
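Concretely, the type is passed to start_txn as a flags argument; a sketch of the extended interface (the flag values shown are assumptions):

    /* Sketch: transaction types for pmu->start_txn(). */
    #define PERF_PMU_TXN_ADD   0x1  /* txn to add/schedule events on PMU */
    #define PERF_PMU_TXN_READ  0x2  /* txn to read counters on PMU */

    /* The start_txn method grows a flags argument naming the type: */
    void (*start_txn)(struct pmu *pmu, unsigned int txn_flags);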
From: "Peter Zijlstra (Intel)"
In order to free up the perf_event_read_group() name:
s/perf_event_read_\(one\|group\)/perf_read_\1/g
s/perf_read_hw/__perf_read/g
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/events/core.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
When we implement the ability to read several counters at once (using
the PERF_PMU_TXN_READ transaction interface), perf_event_read() can
fail when the 'group' parameter is true (e.g. trying to read too many
events at once).
For now, have perf_event_read() return an integer. Ignore the return
value
perf_event_read() does two things:
- call the PMU to read/update the counter value, and
- compute the total count of the event and its children
Not all callers need both. perf_event_reset() for instance needs the
first piece but doesn't need the second. Similarly, when we impleme
From: Peter Zijlstra
In order to enable the use of perf_event_read(.group = true), we need
to invert the sibling-child loop nesting of perf_read_group().
Currently we iterate the child list for each sibling, this precludes
using group reads. Flip things around so we iterate each group for
each child.
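Schematically, the inverted version walks the child list once and reads the whole group for each child (a sketch in kernel style, using the __perf_read_group_add() helper quoted earlier; locking is elided):

    static void perf_read_group_sketch(struct perf_event *leader,
                                       u64 read_format, u64 *values)
    {
            struct perf_event *child;

            /* Read the leader's group once... */
            __perf_read_group_add(leader, read_format, values);

            /* ...then once per child, instead of once per (sibling, child). */
            list_for_each_entry(child, &leader->child_list, child_list)
                    __perf_read_group_add(child, read_format, values);
    }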
Define a new PERF_PMU_TXN_READ interface to read a group of counters
at once.
    pmu->start_txn()                // Initialize before first event
    for each event in group
            pmu->read(event);       // Queue each event to be read
    rc = pmu->commit_txn()          // Read/update all queued counters