On Sat, Jun 11, 2022 at 1:46 AM H.J. Lu wrote:
>
> On Fri, Jun 10, 2022 at 7:44 AM H.J. Lu wrote:
> >
> > On Fri, Jun 10, 2022 at 2:38 AM Florian Weimer wrote:
> > >
> > > * liuhongt via Libc-alpha:
> > >
> > > > +\subsubsection{Special Types}
> > > > +
> > > > +The \code{__Bfloat16} type uses an 8-bit exponent and a 7-bit mantissa.
On Fri, Jun 10, 2022 at 7:44 AM H.J. Lu wrote:
>
> On Fri, Jun 10, 2022 at 2:38 AM Florian Weimer wrote:
> >
> > * liuhongt via Libc-alpha:
> >
> > > +\subsubsection{Special Types}
> > > +
> > > +The \code{__Bfloat16} type uses an 8-bit exponent and a 7-bit mantissa.
> > > +It is used for \code{BF16}-related intrinsics; it cannot be
> > > +used with standard C operators.
On Fri, Jun 10, 2022 at 2:38 AM Florian Weimer wrote:
>
> * liuhongt via Libc-alpha:
>
> > +\subsubsection{Special Types}
> > +
> > +The \code{__Bfloat16} type uses an 8-bit exponent and a 7-bit mantissa.
> > +It is used for \code{BF16}-related intrinsics; it cannot be
> > +used with standard C operators.
Please mention that this is an
* liuhongt via Libc-alpha:
> +\subsubsection{Special Types}
> +
> +The \code{__Bfloat16} type uses an 8-bit exponent and a 7-bit mantissa.
> +It is used for \code{BF16}-related intrinsics; it cannot be
> +used with standard C operators.
I think it's not necessary to specify whether the type supports
On Fri, Jun 10, 2022 at 3:47 PM liuhongt via Libc-alpha wrote:
>
> Pass and return __Bfloat16 values in XMM registers.
>
> Background:
> __Bfloat16 (BF16) is a new floating-point format that can accelerate machine
> learning (deep learning training, in particular) algorithms.
> It's first introdu