On Thu, Aug 24, 2023 at 10:04:42AM +0100, Ferruh Yigit wrote:
> On 8/23/2023 5:03 PM, Tyler Retzlaff wrote:
> > On Wed, Aug 23, 2023 at 10:19:39AM +0100, Ferruh Yigit wrote:
> >> On 8/22/2023 11:30 PM, Konstantin Ananyev wrote:
> >>> 18/08/2023 14:48, Bruce Richardson пишет:
> >>>> On Fri, Aug 18, 2023 at 02:25:14PM +0100, Ferruh Yigit wrote:
> >>>>> On 8/17/2023 3:18 PM, Konstantin Ananyev wrote:
> >>>>>>
> >>>>>>>>
> >>>>>>>> On Wed, Aug 16, 2023 at 11:59:59AM -0700, Sivaprasad Tummala wrote:
> >>>>>>>>> mwaitx allows EPYC processors to enter an implementation-dependent
> >>>>>>>>> power/performance optimized state (C1 state) for a specific period
> >>>>>>>>> or until a store to the monitored address range.
> >>>>>>>>>
> >>>>>>>>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tumm...@amd.com>
> >>>>>>>>> Acked-by: Anatoly Burakov <anatoly.bura...@intel.com>
> >>>>>>>>> ---
> >>>>>>>>>   lib/eal/x86/rte_power_intrinsics.c | 77 +++++++++++++++++++++++++-----
> >>>>>>>>>   1 file changed, 66 insertions(+), 11 deletions(-)
> >>>>>>>>>
> >>>>>>>>> diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
> >>>>>>>>> index 6eb9e50807..b4754e17da 100644
> >>>>>>>>> --- a/lib/eal/x86/rte_power_intrinsics.c
> >>>>>>>>> +++ b/lib/eal/x86/rte_power_intrinsics.c
> >>>>>>>>> @@ -17,6 +17,60 @@ static struct power_wait_status {
> >>>>>>>>>        volatile void *monitor_addr; /**< NULL if not currently sleeping */
> >>>>>>>>> } __rte_cache_aligned wait_status[RTE_MAX_LCORE];
> >>>>>>>>>
> >>>>>>>>> +/**
> >>>>>>>>> + * These functions use UMONITOR/UMWAIT instructions and will enter C0.2 state.
> >>>>>>>>> + * For more information about usage of these instructions, please refer to
> >>>>>>>>> + * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
> >>>>>>>>> + */
> >>>>>>>>> +static void intel_umonitor(volatile void *addr)
> >>>>>>>>> +{
> >>>>>>>>> +     /* UMONITOR */
> >>>>>>>>> +     asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
> >>>>>>>>> +                     :
> >>>>>>>>> +                     : "D"(addr));
> >>>>>>>>> +}
> >>>>>>>>> +
> >>>>>>>>> +static void intel_umwait(const uint64_t timeout)
> >>>>>>>>> +{
> >>>>>>>>> +     const uint32_t tsc_l = (uint32_t)timeout;
> >>>>>>>>> +     const uint32_t tsc_h = (uint32_t)(timeout >> 32);
> >>>>>>>>> +     /* UMWAIT */
> >>>>>>>>> +     asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
> >>>>>>>>> +                     : /* ignore rflags */
> >>>>>>>>> +                     : "D"(0), /* enter C0.2 */
> >>>>>>>>> +                     "a"(tsc_l), "d"(tsc_h));
> >>>>>>>>> +}
> >>>>>>>>
> >>>>>>>> A question, and perhaps Anatoly Burakov can chime in with expertise.
> >>>>>>>>
> >>>>>>>> gcc/clang have built-in intrinsics for umonitor and umwait, I believe.
> >>>>>>>> As per our other thread of discussion, is there a benefit to also
> >>>>>>>> providing inline assembly over just using the intrinsics? I understand
> >>>>>>>> that the intrinsics may not exist for the monitorx and mwaitx below,
> >>>>>>>> so inline assembly is probably necessary for AMD.
> >>>>>>>>
> >>>>>>>> So the suggestion here is: when the intrinsics are available, just
> >>>>>>>> use them.
> >>>>>>>>
> >>>>>>>> thanks
> >>>>>>>>
> >>>>>>> The gcc built-in functions
> >>>>>>> __builtin_ia32_monitorx()/__builtin_ia32_mwaitx() are available only
> >>>>>>> when -mmwaitx is used, which is specific to AMD platforms. On generic
> >>>>>>> builds these built-ins are not available, hence inline assembly is
> >>>>>>> required here.
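
For illustration only (this is not a hunk from the patch): if a translation
unit is built with -mmwaitx, the AMD side could in principle be written
against the _mm_monitorx()/_mm_mwaitx() wrappers that map to those built-ins.
A minimal sketch; the amd_* names are mine, and the exact wrapper argument
order should be double-checked against the toolchain's mwaitxintrin.h:

    /* sketch only -- assumes this file is compiled with -mmwaitx */
    #include <stdint.h>
    #include <x86intrin.h>

    static void amd_monitorx(volatile void *addr)
    {
            /* arm MONITORX on the cache line containing addr,
             * no extensions (ECX=0), no hints (EDX=0) */
            _mm_monitorx((void *)(uintptr_t)addr, 0, 0);
    }

    static void amd_mwaitx(const uint64_t timeout)
    {
            /* MWAITX: extensions bit 1 enables the TSC-based timeout,
             * hints 0 requests C1; the timeout is truncated to 32 bits here */
            _mm_mwaitx(0x2, 0, (uint32_t)timeout);
    }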
> >>>>>>
> >>>>>> Ok... but we can probably put them into a separate .c file that will
> >>>>>> be compiled with that specific flag?
> >>>>>> The same thing can probably be done for the Intel-specific instructions.
> >>>>>> In general, I think it is much preferable to use built-ins rather than
> >>>>>> inline assembly
> >>>>>> (if possible, of course).
> >>>>>>
> >>>>>
> >>>>> We don't compile a different set of files for AMD and Intel, but there
> >>>>> are runtime checks, so putting this into a separate file is not much
> >>>>> different.
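
To make "runtime checks" concrete, a rough sketch of the kind of init-time
dispatch that could be done. rte_cpu_get_flag_enabled(), RTE_CPUFLAG_WAITPKG
and RTE_INIT are existing DPDK facilities as far as I know; the power_ops
struct, the amd_* helpers and RTE_CPUFLAG_MONITORX are illustrative names
only:

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_cpuflags.h>

    /* helpers assumed to be defined elsewhere in this file */
    static void intel_umonitor(volatile void *addr);
    static void intel_umwait(uint64_t timeout);
    static void amd_monitorx(volatile void *addr);
    static void amd_mwaitx(uint64_t timeout);

    static struct {
            void (*monitor)(volatile void *addr);
            void (*wait)(uint64_t timeout);
    } power_ops;

    RTE_INIT(power_ops_init)
    {
            if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG)) {
                    power_ops.monitor = intel_umonitor;
                    power_ops.wait = intel_umwait;
            } else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_MONITORX)) {
                    /* RTE_CPUFLAG_MONITORX is hypothetical here */
                    power_ops.monitor = amd_monitorx;
                    power_ops.wait = amd_mwaitx;
            }
    }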
> >>>
> >>> Well, we probably don't compile .c files for a particular vendor, but we
> >>> definitely do compile some .c files for particular ISA extensions.
> >>> Let's say there are files in lib/acl that require various '-mavx512*'
> >>> flags; the same goes for other libs and PMDs.
> >>> So it is still not clear to me why the same approach can't be applied to
> >>> rte_power_intrinsics.c?
> >>>
> >>>>>
> >>>>> It may be an option to always enable the compiler flag (-mmwaitx); I
> >>>>> think it won't hurt other platforms, but I am not sure about the
> >>>>> implications of this for other platforms (what was the motivation for
> >>>>> the compiler folks to enable these built-ins only with a specific
> >>>>> flag?).
> >>>>>
> >>>>> Also this requires detecting whether the compiler supports '-mmwaitx'
> >>>>> or not, etc.
> >>>>>
> >>>> This is the biggest reason why we have in the past added support for
> >>>> these
> >>>> instructions via asm bytes rather than intrinsics. It takes a long
> >>>> time for
> >>>> end-user compilers, especially those in LTS releases, to get the
> >>>> necessary
> >>>> intrinsics. 
> >>>
> >>> Yep, understood.
> >>> But why can't we then have both implementations?
> >>> Let's say if WAITPKG is defined we use the builtins for
> >>> umonitor/umwait/tpause, otherwise we fall back to the inline asm
> >>> implementation.
> >>> Same story for MWAITX/monitorx.
> >>>
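
For reference, such a fallback might look roughly like this for the UMONITOR
side. This is an illustrative sketch, assuming the __WAITPKG__ macro and the
_umonitor() intrinsic from <immintrin.h> are what gcc/clang expose when
WAITPKG support is enabled:

    #include <stdint.h>
    #if defined(__WAITPKG__)
    #include <immintrin.h>
    #endif

    static void intel_umonitor(volatile void *addr)
    {
    #if defined(__WAITPKG__)
            /* compiler provides the intrinsic; cast away volatile as the
             * intrinsic takes a plain pointer */
            _umonitor((void *)(uintptr_t)addr);
    #else
            /* fall back to the raw opcode bytes (UMONITOR) */
            asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
                            :
                            : "D"(addr));
    #endif
    }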
> >>
> >> Yes, this can be done,
> >> either as different .c files per implementation or as an #ifdef in the
> >> same file.
> >>
> >> But eventually the asm implementation is required as a fallback, and if
> >> we will rely on the asm implementation anyway, is it worth having the
> >> additional checks to be able to use the built-in intrinsics?
> >>
> >> Does it help to add the name of the built-in function as a comment next
> >> to the inline assembly code, to document the intention and another
> >> possible implementation?
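
For what it's worth, that could be as small as naming the equivalent
intrinsic next to the opcode bytes, e.g. (assuming _umonitor() and
__builtin_ia32_umonitor() are the right names for WAITPKG-enabled builds):

    static void intel_umonitor(volatile void *addr)
    {
            /* UMONITOR -- same operation as the _umonitor() /
             * __builtin_ia32_umonitor() built-in available with -mwaitpkg */
            asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
                            :
                            : "D"(addr));
    }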
> > 
> > The main value of preferring intrinsics is that, when they are available,
> > they also work with msvc/windows. The msvc toolchain does not support
> > inline asm, so some of the targets have to use intrinsics because that's
> > all there is.
> > 
> 
> How does Windows handle the current power APIs without inline asm
> support, like the rte_power_intrinsics.c one?

So this is a Windows vs toolchain entanglement.

> Also, will using both built-ins and inline assembly work for Windows?
> Since there may be compiler versions that don't support the built-in
> functions, those would have to disable the APIs altogether, and this can
> create a scenario where the list of exposed APIs changes based on the
> compiler version.

So I don't intend to disable APIs; there's usually a way to make them
work, and there should not be any API changes when done correctly.

windows/clang/mingw
    * inline asm may be used, but it isn't my preference

windows/msvc
    * intrinsics (when available; see the rough sketch below)
    * non-inline asm in a .s file (when no intrinsics are available)
    * keeping in mind that the compiler version isn't tied to the windows
      OS release, so it's easier for me to simply document that a newer
      compiler is required. The periods where there are no intrinsics
      end up being short-lived.
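
As a rough sketch of the intrinsics path (assuming the _umonitor()/_umwait()
WAITPKG intrinsics are available from the toolchain's immintrin.h; exact
availability still needs to be verified per compiler version):

    #include <stdint.h>
    #include <immintrin.h>

    static void intel_umonitor(volatile void *addr)
    {
            /* arm the monitor on the cache line containing addr */
            _umonitor((void *)(uintptr_t)addr);
    }

    static void intel_umwait(const uint64_t timeout)
    {
            /* state 0 selects C0.2; wakes on a store to the monitored
             * line or when the TSC deadline passes */
            _umwait(0, timeout);
    }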
   
I'm on the hook for windows/msvc; any stickiness dealing with it ends up
being my problem.

> 
> >>
> >>>> Consider a user running e.g. RHEL 8, who wants to take
> >>>> advantage of the latest DPDK features; they should not be required to
> >>>> upgrade their compiler - and possibly binutils/assembler - to do so.
> >>>>
> >>>> /Bruce
> >>>
