On 01/02/2024 19:16, Paul Eggert wrote:
Oh, and another thought: suppose someone wants to use od on bfloat16_t
values? They're popular in machine learning applications, and likely
will be more popular than float16_t overall. See:

https://sourceware.org/pipermail/libc-alpha/2024-February/154382.html

True. I suppose we would select between these like:

 -t f2, -t fh = IEEE half precision
        -t fb = brain floating point
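So hypothetically something like "od -t fb file" (neither option exists yet,
this is just the proposal above) would decode the data as brain floating
point, and "od -t fh file" as IEEE half precision.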

bfloat16 looks like a truncated IEEE single precision value,
so we should be able to just pad the low 16 bits with zeros
when converting to single precision internally for processing.
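
For illustration, a minimal sketch of that conversion (not actual od code;
the bf16_to_float name and the test value are made up for this example):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Convert a bfloat16 bit pattern to float by placing it in the
     high 16 bits of an IEEE binary32 and zero-filling the low 16 bits.  */
  static float
  bf16_to_float (uint16_t b)
  {
    uint32_t bits = (uint32_t) b << 16;  /* pad low 16 bits with zeros */
    float f;
    memcpy (&f, &bits, sizeof f);        /* reinterpret as binary32 */
    return f;
  }

  int
  main (void)
  {
    /* 0x3FC0 is 1.5 in bfloat16 (top half of the binary32 0x3FC00000).  */
    printf ("%g\n", bf16_to_float (0x3FC0));
    return 0;
  }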

cheers,
Pádraig
