On 27/11/2018 at 02:05, Segher Boessenkool wrote:
> Cool stuff :-)
> gcc110 (a power7, 2 packages, 16 cores, 64 threads):
> $ hwloc-ls -s --no-io
> depth 0:          1 Machine (type #1)
>  depth 1:         2 NUMANode (type #2)
>   depth 2:        16 Package (type #3)
>    depth 3:       16 L3Cache (type #4)
>     depth 4:      16 L2Cache (type #4)
>      depth 5:     16 L1dCache (type #4)
>       depth 6:    16 L1iCache (type #4)
>        depth 7:   16 Core (type #5)
>         depth 8:  64 PU (type #6)
>
> gcc112 (a power8, 2 DCMs (i.e. 2 packages, 4 dies), 20 cores, 160 threads):
> $ hwloc-ls -s --no-io
> depth 0:           1 Machine (type #1)
>  depth 1:          2 Group0 (type #7)
>   depth 2:         4 NUMANode (type #2)
>    depth 3:        4 Package (type #3)
>     depth 4:       20 L3Cache (type #4)
>      depth 5:      20 L2Cache (type #4)
>       depth 6:     20 L1dCache (type #4)
>        depth 7:    20 L1iCache (type #4)
>         depth 8:   20 Core (type #5)
>          depth 9:  160 PU (type #6)
>
> gcc135 (a power9, 2 packages, 32 cores, 128 threads):
> $ hwloc-ls -s --no-io
> depth 0:          1 Machine (type #1)
>  depth 1:         2 NUMANode (type #2)
>   depth 2:        2 Package (type #3)
>    depth 3:       16 L3Cache (type #4)
>     depth 4:      16 L2Cache (type #4)
>      depth 5:     32 L1dCache (type #4)
>       depth 6:    32 L1iCache (type #4)
>        depth 7:   32 Core (type #5)
>         depth 8:  128 PU (type #6)
>
> so it gets p7 packages wrong, and it doesn't understand p8 DCMs.  The rest
> is fine, and pretty etc. :-)


According to IBM, the POWER7 info is exactly what they want to report :)
I tried to convince them to fix their firmware/kernel, but they considered
it better to report this strange topology because of an obscure POWER
feature ("LPAR", if I remember correctly).

User-space tools such as hwloc or lscpu often require recent kernels to
report correct information on some non-x86 hardware. On x86, things
usually work fine (but hwloc can still directly use the CPUID
instruction whenever the kernel reports something wrong).
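
For what it's worth, the same counts can also be queried programmatically.
Below is a minimal sketch (not from this thread) using the hwloc C API; it
assumes the hwloc headers are installed and builds with something like
"cc topo.c -lhwloc". If I remember correctly, setting HWLOC_COMPONENTS=x86
in the environment asks hwloc to prefer its CPUID-based x86 backend over
the information exported by the Linux kernel.

#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;

    /* Discover the topology of the local machine. */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* Count objects of a few interesting types, similar to "hwloc-ls -s". */
    printf("packages: %d\n",
           hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PACKAGE));
    printf("cores:    %d\n",
           hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE));
    printf("PUs:      %d\n",
           hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU));

    hwloc_topology_destroy(topology);
    return 0;
}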

Brice


