On Mon, Jun 27, 2022 at 09:25:42PM +0800, Yicong Yang wrote:
> On 2022/6/27 21:12, Greg KH wrote:
> > On Mon, Jun 27, 2022 at 07:18:12PM +0800, Yicong Yang wrote:
> >> Hi Greg,
> >>
> >> Since the kernel side of this device has been reviewed for 8 versions with
> >> all comments addressed and no mo
On Fri, Sep 24, 2021 at 04:03:53PM -0700, Andy Lutomirski wrote:
> I think the perfect and the good are a bit confused here. If we go for
> "good", then we have an mm owning a PASID for its entire lifetime. If
> we want "perfect", then we should actually do it right: teach the
> kernel to update a
On Fri, Sep 24, 2021 at 08:39:24AM -0700, Luck, Tony wrote:
> If you have ctags installed then a ctrl-] on that
> __fixup_pasid_exception() will take you to the function with
> the comments. No electron microscope needed.
I too use ctags, but when reading the #GP handler, this is a whole
different
On Thu, Sep 23, 2021 at 10:14:42AM -0700, Luck, Tony wrote:
> On Wed, Sep 22, 2021 at 11:07:22PM +0200, Peter Zijlstra wrote:
> > On Mon, Sep 20, 2021 at 07:23:45PM +, Fenghua Yu wrote:
> > > @@ -538,6 +547,9 @@ DEFINE_IDTENTRY_ERRORCODE(exc_
On Wed, Sep 22, 2021 at 11:44:41PM +, Fenghua Yu wrote:
> > Since you're making it a fatal error, before doing much of anything
> > else, you might as well fail decode and keep it all in the x86/decode.c
> > file, no need to spread this 'knowledge' any further.
> Is the following updated patc
On Wed, Sep 22, 2021 at 02:33:09PM -0700, Dave Hansen wrote:
> On 9/22/21 2:11 PM, Peter Zijlstra wrote:
> >>> +static bool fixup_pasid_exception(void)
> >>> +{
> >>> + if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
> >>> + return fa
On Wed, Sep 22, 2021 at 09:26:10PM +, Luck, Tony wrote:
> >> > +static bool fixup_pasid_exception(void)
> >> > +{
> >> > +if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
> >> > +return false;
> >> > +
> >> > +return __fixup_pasid_exception();
> >> > +}
> >
> > That
On Wed, Sep 22, 2021 at 11:07:22PM +0200, Peter Zijlstra wrote:
> On Mon, Sep 20, 2021 at 07:23:45PM +, Fenghua Yu wrote:
> > diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> > index a58800973aed..a25d738ae839 100644
> > --- a/arch/x86/kernel/traps.c
>
On Mon, Sep 20, 2021 at 07:23:45PM +, Fenghua Yu wrote:
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index a58800973aed..a25d738ae839 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -61,6 +61,7 @@
> #include
> #include
> #include
> +#inclu
On Mon, Sep 20, 2021 at 07:23:48PM +, Fenghua Yu wrote:
> diff --git a/tools/objtool/check.c b/tools/objtool/check.c
> index e5947fbb9e7a..91d13521d9d6 100644
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -3133,6 +3133,21 @@ static int validate_reachable_instructions(struct
On Thu, Aug 05, 2021 at 10:05:17PM +0800, Tianyu Lan wrote:
> static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
> {
> + return static_call(x86_set_memory_enc)(addr, numpages, enc);
> }
Hurpmh... So with a bit of 'luck' you get code-gen like:
__set_memory_enc_dec:
On Thu, Aug 05, 2021 at 10:05:17PM +0800, Tianyu Lan wrote:
> +static int default_set_memory_enc(unsigned long addr, int numpages, bool enc)
> +{
> + return 0;
> +}
> +
> +DEFINE_STATIC_CALL(x86_set_memory_enc, default_set_memory_enc);
That's spelled:
DEFINE_STATIC_CALL_RET0(x86_set_memory_
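DEFINE_STATIC_CALL_RET0() points the call site at the static-call core's generic return-0 trampoline, so no hand-written dummy function is needed. A minimal kernel-style sketch of the idea, not the code that was eventually merged for memory encryption; set_memory_enc_fn is a hypothetical typedef used only to give the call its prototype:

#include <linux/static_call.h>

/* hypothetical function type, only used to type the static call */
typedef int (set_memory_enc_fn)(unsigned long addr, int numpages, bool enc);

/* the "return 0" default is supplied by the static-call core itself */
DEFINE_STATIC_CALL_RET0(x86_set_memory_enc, set_memory_enc_fn);

static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
{
        /* re-pointed where a platform actually needs real behaviour */
        return static_call(x86_set_memory_enc)(addr, numpages, enc);
}

A platform that wants real behaviour would then install its handler during early setup with static_call_update(x86_set_memory_enc, its_handler).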
On Wed, May 05, 2021 at 07:39:14PM +0700, Suthikulpanit, Suravee wrote:
> Peter,
>
> On 5/4/2021 7:13 PM, Peter Zijlstra wrote:
> > On Tue, May 04, 2021 at 06:58:29PM +0700, Suthikulpanit, Suravee wrote:
> > > Peter,
> > >
> > > On 5/4/2021 4:39 PM, Pe
On Tue, May 04, 2021 at 06:58:29PM +0700, Suthikulpanit, Suravee wrote:
> Peter,
>
> On 5/4/2021 4:39 PM, Peter Zijlstra wrote:
> > On Tue, May 04, 2021 at 01:52:36AM -0500, Suravee Suthikulpanit wrote:
> >
> > > 2. Since AMD IOMMU PMU does not support interrupt mo
On Tue, May 04, 2021 at 01:52:36AM -0500, Suravee Suthikulpanit wrote:
> 2. Since AMD IOMMU PMU does not support interrupt mode, the logic
>can be simplified to always start counting with value zero,
>and accumulate the counter value when stopping without the need
>to keep track and re
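The simplification described above amounts to: program the hardware counter to zero on start, and on stop read it back and add the value straight into the event count, so no prev_count bookkeeping or overflow interrupt is required. A tiny stand-alone model of that scheme (not the driver code; hw_counter merely stands in for the IOMMU counter register):

#include <stdio.h>
#include <stdint.h>

static uint64_t hw_counter;   /* stands in for the IOMMU counter register */
static uint64_t event_count;  /* value accumulated for perf */

static void counter_start(void)
{
        hw_counter = 0;               /* no interrupt mode: restart at zero */
}

static void counter_stop(void)
{
        event_count += hw_counter;    /* accumulate; no prev_count tracking */
}

int main(void)
{
        counter_start();
        hw_counter += 100;            /* pretend the hardware counted 100 events */
        counter_stop();

        counter_start();
        hw_counter += 50;
        counter_stop();

        printf("total: %llu\n", (unsigned long long)event_count);  /* 150 */
        return 0;
}

Counting always from zero keeps the stop path to a single read-and-add, which is acceptable precisely because there is no interrupt/overflow mode to service.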
On Fri, Sep 25, 2020 at 11:29:13AM -0400, Qian Cai wrote:
> It looks like the crashes happen in the interrupt remapping code where they
> are only able to generate partial call traces.
> [8.466614][T0] BUG: kernel NULL pointer dereference, address:
>
> [8.47429
On Thu, Jun 25, 2020 at 01:17:22PM -0700, Fenghua Yu wrote:
> +static bool fixup_pasid_exception(void)
> +{
> + if (!IS_ENABLED(CONFIG_INTEL_IOMMU_SVM))
> + return false;
> + if (!static_cpu_has(X86_FEATURE_ENQCMD))
> + return false;
elsewhere you had another varia
On Tue, Jun 16, 2020 at 04:23:46PM -0700, Fenghua Yu wrote:
> Hi, Peter,
>
> On Mon, Jun 15, 2020 at 09:09:28PM +0200, Peter Zijlstra wrote:
> > On Mon, Jun 15, 2020 at 11:55:29AM -0700, Fenghua Yu wrote:
> >
> > > Or do you suggest to add a random new flag in struc
On Mon, Jun 15, 2020 at 01:17:35PM -0700, Fenghua Yu wrote:
> Hi, Peter,
>
> On Mon, Jun 15, 2020 at 09:09:28PM +0200, Peter Zijlstra wrote:
> > On Mon, Jun 15, 2020 at 11:55:29AM -0700, Fenghua Yu wrote:
> >
> > > Or do you suggest to add a random new flag in struc
On Mon, Jun 15, 2020 at 11:55:29AM -0700, Fenghua Yu wrote:
> Or do you suggest to add a random new flag in struct thread_info instead
> of a TIF flag?
Why thread_info? What's wrong with something simple like the below. It
takes a bit from the 'strictly current' flags word.
diff --git a/include
On Mon, Jun 15, 2020 at 11:12:59AM -0700, Fenghua Yu wrote:
> > I don't get why you need a rdmsr here, or why not having one would
> > require a TIF flag. Is that because this MSR is XSAVE/XRSTOR managed?
>
> My concern is TIF flags are precious (only 3 slots available). Defining
> a new TIF flag
On Mon, Jun 15, 2020 at 11:19:21AM -0700, Raj, Ashok wrote:
> On Mon, Jun 15, 2020 at 06:03:57PM +0200, Peter Zijlstra wrote:
> >
> > I don't get why you need a rdmsr here, or why not having one would
> > require a TIF flag. Is that because this M
On Mon, Jun 15, 2020 at 08:48:54AM -0700, Fenghua Yu wrote:
> Hi, Peter,
> On Mon, Jun 15, 2020 at 09:56:49AM +0200, Peter Zijlstra wrote:
> > On Fri, Jun 12, 2020 at 05:41:33PM -0700, Fenghua Yu wrote:
> > > +/*
> > > + * Apply some heuristics to see if the #G
On Fri, Jun 12, 2020 at 05:41:33PM -0700, Fenghua Yu wrote:
> +/*
> + * Apply some heuristics to see if the #GP fault was caused by a thread
> + * that hasn't had the IA32_PASID MSR initialized. If it looks like that
> + * is the problem, try initializing the IA32_PASID MSR. If the heuristic
> + *
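The heuristic sketched in the comment is roughly: consider only #GPs on ENQCMD-capable hardware, only when the mm actually owns a PASID and the current thread has not had its IA32_PASID MSR set up yet; in that case program the MSR and report the fault as fixed. A condensed sketch of the shape this eventually took in mainline as try_fixup_enqcmd_gp() in arch/x86/kernel/traps.c; the field and macro names below belong to that later code, not to the patch quoted here:

static bool try_fixup_enqcmd_gp(void)
{
        u32 pasid;

        /* hardware without ENQCMD cannot generate this kind of #GP */
        if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
                return false;

        /* the mm was never allocated a PASID, so this is not ours to fix */
        pasid = current->mm->pasid;
        if (!pasid_valid(pasid))
                return false;

        /* the MSR is already set up for this thread; something else faulted */
        if (current->pasid_activated)
                return false;

        wrmsrl(MSR_IA32_PASID, pasid | MSR_IA32_PASID_VALID);
        current->pasid_activated = 1;

        return true;
}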
On Fri, Jun 12, 2020 at 05:41:33PM -0700, Fenghua Yu wrote:
> @@ -447,6 +458,18 @@ dotraplinkage void do_general_protection(struct pt_regs *regs, long error_code)
> int ret;
>
> RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
> +
> + /*
> + * Perform th
On Fri, Jun 12, 2020 at 05:41:21PM -0700, Fenghua Yu wrote:
> This series only provides simple and basic support for ENQCMD and the MSR:
> 1. Clean up type definitions (patch 1-3). These patches can be in a
>separate series.
>- Define "pasid" as "unsigned int" consistently (patch 1 and 2).
On Thu, Apr 09, 2020 at 09:08:26AM -0700, Minchan Kim wrote:
> On Wed, Apr 08, 2020 at 01:59:08PM +0200, Christoph Hellwig wrote:
> > This allows to unexport map_vm_area and unmap_kernel_range, which are
> > rather deep internal and should not be available to modules.
>
> Even though I don't know
On Wed, Apr 08, 2020 at 08:01:00AM -0700, Randy Dunlap wrote:
> Hi,
>
> On 4/8/20 4:59 AM, Christoph Hellwig wrote:
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index 36949a9425b8..614cc786b519 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -702,7 +702,7 @@ config ZSMALLOC
> >
> > conf
> more systematic. This also removes any chance to create vmalloc
> mappings outside the designated areas or using executable permissions
> from modules. Besides that it removes more than 300 lines of code.
>
Looks great, thanks for doing this!
A
On Wed, Apr 08, 2020 at 01:59:15PM +0200, Christoph Hellwig wrote:
> This is always GFP_KERNEL - for long term mappings with other properties
> vmap should be used.
PAGE_KERNEL != GFP_KERNEL :-)
> - return vm_map_ram(mock->pages, mock->npages, 0, PAGE_KERNEL);
> + return vm_map_ram(mock-
On Thu, Apr 25, 2019 at 11:45:11AM +0200, Thomas Gleixner wrote:
> There is only one caller which hands in save_trace as function pointer.
>
> Signed-off-by: Thomas Gleixner
Acked-by: Peter Zijlstra (Intel)
On Thu, Apr 18, 2019 at 10:41:38AM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces and storing the information in a small lockdep
> specific data structure.
>
Acked-by: Peter Zij
On Thu, Apr 18, 2019 at 10:41:37AM +0200, Thomas Gleixner wrote:
> There is only one caller of check_prev_add() which hands in a zeroed struct
> stack trace and a function pointer to save_stack(). Inside check_prev_add()
> the stack_trace struct is checked for being empty, which is always
> true. B
On Tue, Apr 23, 2019 at 11:38:16PM +0200, Borislav Petkov wrote:
> If that is all the changes it would need, then I guess that's ok. Btw,
> those rst-conversion patches don't really show what got changed. Dunno
> if git can even show that properly. I diffed the two files by hand to
> see what got c
On Tue, Apr 23, 2019 at 11:53:49AM -0600, Jonathan Corbet wrote:
> > Look at crap like this:
> >
> > "The memory allocations via :c:func:`kmalloc`, :c:func:`vmalloc`,
> > :c:func:`kmem_cache_alloc` and"
> >
> > That should've been written like:
> >
> > "The memory allocations via kmalloc(), vmal
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
On Tue, Apr 23, 2019 at 10:30:53AM -0600, Jonathan Corbet wrote:
> On Tue, 23 Apr 2019 15:01:32 +0200
> Peter Zijlstra wrote:
>
> > But yes, I have 0 motivation to learn or abide by rst. It simply doesn't
> > give me anything in return. There is no upside, only wor
On Tue, Apr 23, 2019 at 08:55:19AM -0400, Mike Snitzer wrote:
> On Tue, Apr 23 2019 at 4:31am -0400,
> Peter Zijlstra wrote:
>
> > On Mon, Apr 22, 2019 at 10:27:45AM -0300, Mauro Carvalho Chehab wrote:
> >
> > > .../{atomic_bitops.txt => atomic_bitops.rst}
On Mon, Apr 22, 2019 at 10:27:45AM -0300, Mauro Carvalho Chehab wrote:
> .../{atomic_bitops.txt => atomic_bitops.rst} | 2 +
What's happened to atomic_t.txt? Also NAK, I still occasionally touch
these files.
On Fri, Apr 19, 2019 at 10:32:30AM +0200, Thomas Gleixner wrote:
> On Fri, 19 Apr 2019, Peter Zijlstra wrote:
> > On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> >
> > > +typedef bool (*stack_trace_consume_fn)(void *co
On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> +typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
> + bool reliable);
> +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> + st
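The interface above is a walker that feeds each recovered return address to a consumer callback along with an opaque cookie, and stops as soon as the consumer returns false, for instance because its storage array is full. A self-contained user-space model of that calling convention, not the kernel API itself; fake_stack_walk and store_addr are made-up names for illustration:

#include <stdbool.h>
#include <stdio.h>

typedef bool (*consume_fn)(void *cookie, unsigned long addr);

struct store {
        unsigned long *entries;
        unsigned int len, cap;
};

static bool store_addr(void *cookie, unsigned long addr)
{
        struct store *s = cookie;

        if (s->len >= s->cap)
                return false;             /* storage full: stop the walk */
        s->entries[s->len++] = addr;
        return true;
}

static void fake_stack_walk(consume_fn consume, void *cookie)
{
        /* stand-in addresses; a real unwinder would produce return addresses */
        unsigned long frames[] = { 0x1000, 0x2000, 0x3000 };
        unsigned int i;

        for (i = 0; i < 3; i++)
                if (!consume(cookie, frames[i]))
                        break;
}

int main(void)
{
        unsigned long buf[2];
        struct store s = { .entries = buf, .len = 0, .cap = 2 };

        fake_stack_walk(store_addr, &s);
        printf("stored %u entries\n", s.len);     /* 2: stopped when full */
        return 0;
}

The cookie lets one consumer implementation serve many different callers without any global state.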
On Thu, Apr 18, 2019 at 05:42:55PM +0200, Thomas Gleixner wrote:
> On Thu, 18 Apr 2019, Josh Poimboeuf wrote:
> > Another idea I had (but never got a chance to work on) was to extend the
> > x86 unwind interface to all arches. So instead of the callbacks, each
> > arch would implement something l
On Fri, Apr 05, 2019 at 10:27:05AM -0600, Andy Lutomirski wrote:
> At the risk of asking stupid questions: we already have a mechanism
> for this: highmem. Can we enable highmem on x86_64, maybe with some
> heuristics to make it work well?
That's what I said; but note that I'm still not convinced
On Thu, Apr 04, 2019 at 09:15:46AM -0600, Khalid Aziz wrote:
> Thanks Peter. I really appreciate your review. Your feedback helps make
> this code better and closer to where I can feel comfortable not calling
> it RFC any more.
>
> The more I look at xpfo_kmap()/xpfo_kunmap() code, the more I get
On Thu, Apr 04, 2019 at 09:21:52AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 03, 2019 at 11:34:04AM -0600, Khalid Aziz wrote:
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 2c471a2c43fa..d17d33f36a01 100644
> > --- a/include/linux/mm_types.h
>
On Wed, Apr 03, 2019 at 11:34:13AM -0600, Khalid Aziz wrote:
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 999d6d8f0bef..cc806a01a0eb 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -37,6 +37,20 @@
> */
> #define LAST_USER_MM_IBPB	0x1UL
>
> +/*
> + * A TLB flu
On Wed, Apr 03, 2019 at 11:34:12AM -0600, Khalid Aziz wrote:
> From: Julian Stecklina
>
> Only the xpfo_kunmap call that needs to actually unmap the page
> needs to be serialized. We need to be careful to handle the case,
> where after the atomic decrement of the mapcount, a xpfo_kmap
> increased
On Wed, Apr 03, 2019 at 11:34:05AM -0600, Khalid Aziz wrote:
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index 2779ace16d23..5c0e1581fa56 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1437,6 +1437,32 @@ static inline
You must be so glad I no longer use kmap_atomic from NMI context :-)
On Wed, Apr 03, 2019 at 11:34:04AM -0600, Khalid Aziz wrote:
> +static inline void xpfo_kmap(void *kaddr, struct page *page)
> +{
> + unsigned long flags;
> +
> + if (!static_branch_unlikely(&xpfo_inited))
> +
On Wed, Apr 03, 2019 at 11:34:04AM -0600, Khalid Aziz wrote:
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2c471a2c43fa..d17d33f36a01 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -204,6 +204,14 @@ struct page {
> #ifdef LAST_CPUPID_NOT_
On Fri, Dec 07, 2018 at 10:22:52AM +0100, Peter Zijlstra wrote:
> On Mon, Nov 19, 2018 at 01:55:17PM -0500, Waiman Long wrote:
> > There are use cases where we want to allow nesting of one terminal lock
> > underneath another terminal-like lock. That new lock type is called
> &
On Mon, Nov 19, 2018 at 01:55:18PM -0500, Waiman Long wrote:
> By making the object hash locks nestable terminal locks, we can avoid
> a bunch of unnecessary lockdep validations as well as saving space
> in the lockdep tables.
So the 'problem'; which you've again not explained; is that debugobject
On Mon, Nov 19, 2018 at 01:55:17PM -0500, Waiman Long wrote:
> There are use cases where we want to allow nesting of one terminal lock
> underneath another terminal-like lock. That new lock type is called
> nestable terminal lock which can optionally allow the acquisition of
> no more than one regu
On Mon, Nov 19, 2018 at 01:55:16PM -0500, Waiman Long wrote:
> The db->lock is a raw spinlock and so the lock hold time is supposed
> to be short. This will not be the case when printk() is being involved
> in some of the critical sections. In order to avoid the long hold time,
> in case some messa
at
wanted to use many of these same holes you took.
I think we can easily fit the lot together in bitfields though, since
you really don't need that many flags.
I refreshed the below patch a number of months ago (no idea if it still
applies, I think it was before Paul munged all of RCU). You ne
On Thu, Nov 22, 2018 at 11:04:22AM +0900, Sergey Senozhatsky wrote:
> Some serial consoles call mod_timer(). So what we could have with the
> debug objects enabled was
>
> mod_timer()
>lock_timer_base()
> debug_activate()
> printk()
> call_console_drivers()
On Tue, Jun 12, 2018 at 05:57:37PM -0700, Ricardo Neri wrote:
> +static bool is_hpet_wdt_interrupt(struct hpet_hld_data *hdata)
> +{
> +	unsigned long this_isr;
> +	unsigned int lvl_trig;
> +
> +	this_isr = hpet_readl(HPET_STATUS) & BIT(hdata->num);
> +
> +	lvl_trig = hpet_readl(HPET_Tn_CF
On Tue, Jun 12, 2018 at 05:57:34PM -0700, Ricardo Neri wrote:
> The current default implementation of the hardlockup detector assumes that
> it is implemented using perf events.
The sparc and powerpc things are very much not using perf.
On Wed, Jun 13, 2018 at 05:41:41PM +1000, Nicholas Piggin wrote:
> On Tue, 12 Jun 2018 17:57:32 -0700
> Ricardo Neri wrote:
>
> > Instead of exposing individual functions for the operations of the NMI
> > watchdog, define a common interface that can be used across multiple
> > implementations.
>
On Tue, Jun 12, 2018 at 05:57:23PM -0700, Ricardo Neri wrote:
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 5426627..dbc5e02 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -61,6 +61,8 @@
> *interrupt handler after sus
On Fri, Feb 24, 2017 at 12:43:19AM +0700, Suravee Suthikulpanit wrote:
> >Also, who cares about the banks, why is this exposed?
>
> The bank and counter values are not exposed to the user-space.
> The amd_iommu PMU only exposes csource, devid, domid, pasid, devid_mask,
> domid_mask, and pasid_mas
On Tue, Feb 07, 2017 at 08:57:52AM +0700, Suravee Suthikulpanit wrote:
> >But instead it looks like you get the counter form:
> >
> > #define _GET_CNTR(ev) ((u8)(ev->hw.extra_reg.reg))
> >
> >Which is absolutely insane.
> >
>
> So, the IOMMU counters are grouped into banks, and there could b
On Mon, Jan 16, 2017 at 01:23:36AM -0600, Suravee Suthikulpanit wrote:
> + pi = container_of(event->pmu, struct perf_amd_iommu, pmu);
> + hwc->idx = pi->idx;
> + hwc->config = event->attr.config;
> + hwc->extra_reg.config = event->attr.config1;
> static voi
On Mon, Jan 16, 2017 at 01:23:30AM -0600, Suravee Suthikulpanit wrote:
> static void perf_iommu_read(struct perf_event *event)
> {
> - u64 count = 0ULL;
> - u64 prev_raw_count = 0ULL;
> - u64 delta = 0ULL;
> + u64 count, prev;
> + s64 delta;
I did send that email where I told
need to check the return value from
> amd_iommu_get_reg() before using the value.
>
> Cc: Peter Zijlstra
> Cc: Borislav Petkov
> Cc: Joerg Roedel
> Signed-off-by: Suravee Suthikulpanit
> ---
> arch/x86/events/amd/iommu.c | 19 +++
> 1 file changed, 11
On Sun, Jan 15, 2017 at 09:36:10AM +0700, Suravee Suthikulpanit wrote:
> Peter,
>
> On 1/11/17 18:57, Peter Zijlstra wrote:
> >On Mon, Jan 09, 2017 at 09:33:41PM -0600, Suravee Suthikulpanit wrote:
> >>This patch contains the following minor fixup:
> >> * Fixed
On Mon, Jan 09, 2017 at 09:33:41PM -0600, Suravee Suthikulpanit wrote:
> This patch contains the following minor fixup:
> * Fixed overflow handling since u64 delta would lose the MSB sign bit.
Please explain.. afaict this actually introduces a bug.
> diff --git a/arch/x86/events/amd/iommu.c b/
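On the overflow question raised here: for a counter narrower than 64 bits, the usual perf pattern is to compute the difference and sign-extend it from the counter width with a shift pair (as the generic x86 event-update path does), so a wrap-around still yields the correct small delta, whereas a plain u64 subtraction across a wrap produces a huge value. A small stand-alone demonstration, assuming a hypothetical 48-bit counter width purely for the example:

#include <stdio.h>
#include <stdint.h>

#define CNT_WIDTH 48                        /* assumed counter width */
#define SHIFT     (64 - CNT_WIDTH)

/* sign-extend the 48-bit difference, mirroring the shift pair used by
 * the x86 perf event-update path; relies on arithmetic right shift */
static int64_t counter_delta(uint64_t prev, uint64_t now)
{
        return ((int64_t)((now - prev) << SHIFT)) >> SHIFT;
}

int main(void)
{
        uint64_t prev = 0xFFFFFFFFFFF0ULL;  /* 16 below the 48-bit wrap */
        uint64_t now  = 0x10ULL;            /* 16 past the wrap */

        printf("raw u64 delta:     %llu\n", (unsigned long long)(now - prev));
        printf("width-aware delta: %lld\n", (long long)counter_delta(prev, now));
        return 0;
}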
On Tue, Mar 15, 2016 at 11:40:17AM +0100, Borislav Petkov wrote:
> On Tue, Mar 15, 2016 at 07:39:31AM +0700, Suravee Suthikulpanit wrote:
> > What if I just merge the newly introduced arch/x86/include/perf/amd/iommu.h
> > into the include/linux/amd-iommu.h? I do not see the point of having to
> > s
On Tue, Mar 15, 2016 at 07:39:31AM +0700, Suravee Suthikulpanit wrote:
> What if I just merge the newly introduced arch/x86/include/perf/amd/iommu.h
> into the include/linux/amd-iommu.h? I do not see the point of having to
> separate things out into two files.
>
Works for me. Thanks!
On Mon, Mar 14, 2016 at 03:19:45PM +0100, Borislav Petkov wrote:
> On Mon, Mar 14, 2016 at 08:37:02PM +0700, Suravee Suthikulpanit wrote:
> > Basically, we are trying to match the current Perf hierarchy for AMD IOMMU
> > (arch/x86/events/amd/iommu.c). I can put it into
> > arch/x86/include/asm/perf
On Mon, Mar 14, 2016 at 12:26:00PM +0700, Suravee Suthikulpanit wrote:
> Hi,
>
> On 03/12/2016 08:22 PM, Peter Zijlstra wrote:
> >On Tue, Feb 23, 2016 at 08:12:36AM -0600, Suravee Suthikulpanit wrote:
> >>From: Suravee Suthikulpanit
> >>
> >>First, this
On Tue, Feb 23, 2016 at 08:12:36AM -0600, Suravee Suthikulpanit wrote:
> From: Suravee Suthikulpanit
>
> First, this patch moves arch/x86/events/amd/iommu.h to
> arch/x86/include/asm/perf/amd/iommu.h so that we can easily include
> it in both perf-amd-iommu and amd-iommu drivers.
>
> Then, we consoli
On Mon, Feb 22, 2016 at 03:00:31PM +0700, Suravee Suthikulpanit wrote:
> >So I really don't have time to review new muck while I'm hunting perf
> >core fail, but Boris made me look at this.
> >
> >This is crazy, if you have multiple IOMMUs then create an event per
> >IOMMU, do _NOT_ fold them all i
On Thu, Feb 11, 2016 at 04:15:26PM +0700, Suravee Suthikulpanit wrote:
> static void perf_iommu_read(struct perf_event *event)
> {
> + int i;
> u64 delta = 0ULL;
> struct hw_perf_event *hwc = &event->hw;
> + struct perf_amd_iommu *perf_iommu = container_of(event->pmu,
> +
On Wed, Feb 04, 2015 at 04:10:22PM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Now that I learned about possible spurious wakeups this
> place needs fixing too. Replace the self-coded sleep variant
> with the generic wait_event() helper.
>
> Signed-off-by: Joerg Roedel
> ---
> drivers/
On Tue, May 28, 2013 at 05:45:12PM -0500, suravee.suthikulpa...@amd.com wrote:
> +static void perf_iommu_start(struct perf_event *event, int flags)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> +
> + pr_debug("perf: amd_iommu:perf_iommu_start\n");
> + if (WARN_ON_ONCE(!(hwc->state
On Tue, May 28, 2013 at 12:17:28PM -0500, Suravee Suthikulanit wrote:
> On 5/28/2013 7:18 AM, Joerg Roedel wrote:
> >That implementation is very basic. Any reason for not using the event
> >reporting mechanism of the IOMMU? You could implement a nice perf
> >iommutop or something to see which devi
On Tue, May 21, 2013 at 04:25:23PM +0200, Joerg Roedel wrote:
> Hi Peter,
>
> On Tue, May 21, 2013 at 03:52:31PM +0200, Peter Zijlstra wrote:
> > OK, I'll take them and will push them to Ingo.
>
> Please wait with that until I had a look at the IOMMU pieces.
Have yo
On Tue, May 21, 2013 at 04:25:23PM +0200, Joerg Roedel wrote:
> Hi Peter,
>
> On Tue, May 21, 2013 at 03:52:31PM +0200, Peter Zijlstra wrote:
> > OK, I'll take them and will push them to Ingo.
>
> Please wait with that until I had a look at the IOMMU pieces.
Sure thi
On Tue, May 21, 2013 at 08:29:47AM -0500, Suravee Suthikulanit wrote:
> On 5/21/2013 4:11 AM, Peter Zijlstra wrote:
> >On Mon, May 20, 2013 at 10:41:29AM -0500, Suravee Suthikulanit wrote:
> >>Peter,
> >>
> >>Please let me know if you have any
On Mon, May 20, 2013 at 10:41:29AM -0500, Suravee Suthikulanit wrote:
> Peter,
>
> Please let me know if you have any questions/concerns regarding the PMU
> implementation.
Looks good, how would you like to go about merging this? Should I push
it through Ingo's tree or do you prefer it goes throu
On Mon, May 13, 2013 at 04:44:13PM -0500, steven.kin...@amd.com wrote:
> From: Steven L Kinney
>
> Implement a perf PMU to handle IOMMU PC perf events. This PMU will handle
> static counter perf events relative to the AMD IOMMU Performance Counters.
>
> To invoke the AMD IOMMU PMU issue a perf
On Mon, May 13, 2013 at 04:43:44PM -0500, steven.kin...@amd.com wrote:
> +static void init_iommu_perf_ctr(struct amd_iommu *iommu)
> +{
> + u32 val = 0xabcd, val2 = 0;
> +
> + if (!iommu_feature(iommu, FEATURE_PC))
> + return;
> +
> + amd_iommu_pc_present = true;
> +
> +