Boris,

On 7/24/17 18:14, Borislav Petkov wrote:
On Mon, Jul 24, 2017 at 04:22:45AM -0500, Suravee Suthikulpanit wrote:
For family 17h, the current cpu_core_id is taken directly from
CPUID_Fn8000001E_EBX[7:0] (CoreId), which is the physical ID of the
core within a die. However, on systems with a downcore configuration
(where not all physical cores within a die are available), this could
result in cases where cpu_core_id > (cores_per_node - 1).

And that is a problem because...?

Actually, this is not totally accurate. My apologies. This patch mainly fixes the incorrect core ID reported in /proc/cpuinfo. I'll update the commit message accordingly for V5.

Currently, with a 6-core downcore configuration (out of the normal 8 cores per die), /proc/cpuinfo shows "core id" as:

NODE: 0
cpu 0 core id : 0
cpu 1 core id : 1
cpu 2 core id : 2
cpu 3 core id : 4
cpu 4 core id : 5
cpu 5 core id : 0

NODE: 1
cpu 6 core id : 2
cpu 7 core id : 3
cpu 8 core id : 4
cpu 9 core id : 0
cpu 10 core id : 1
cpu 11 core id : 2

NODE: ....

This is due to the cpu_core_id fixup in amd_get_topology() below:

        /* fixup multi-node processor information */
        if (nodes_per_socket > 1) {
                u32 cus_per_node;

                set_cpu_cap(c, X86_FEATURE_AMD_DCM);
                cus_per_node = c->x86_max_cores / nodes_per_socket;

                /* core id has to be in the [0 .. cores_per_node - 1] range */
                c->cpu_core_id %= cus_per_node;
        }

In this case, the cpu_core_id values are {0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 14, ...}. Here, x86_max_cores is 24, nodes_per_socket is 4, and cus_per_node is 6. When the fixup is applied, the cpu_core_id values end up incorrect as shown above, because this logic assumes that cpu_core_id values are contiguous across the socket.
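To make the arithmetic concrete, here is a minimal user-space sketch (not part of the patch; the inputs are simply the example values above) that reproduces the bogus "core id" output by applying the old modulo fixup to the raw, non-contiguous CoreId values:

#include <stdio.h>

int main(void)
{
        /* Raw CoreId values as reported by CPUID_Fn8000001E_EBX[7:0] */
        unsigned int raw_ids[] = { 0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 14 };
        unsigned int cus_per_node = 24 / 4; /* x86_max_cores / nodes_per_socket */
        unsigned int i;

        for (i = 0; i < sizeof(raw_ids) / sizeof(raw_ids[0]); i++)
                /* Old fixup: c->cpu_core_id %= cus_per_node */
                printf("cpu %u core id : %u\n", i, raw_ids[i] % cus_per_node);

        return 0;
}

This prints exactly the wrapped-around "core id" sequence shown in the listing above.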

With this patch, this fixup is no longer needed since c->cpu_core_id already falls within [0, cores_per_node - 1], i.e. [0, 5] here.

[...]
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index a2a52b5..b5ea28f 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -305,37 +305,71 @@ static void __get_topoext(struct cpuinfo_x86 *c, int cpu)
 {
        u32 eax, ebx, ecx, edx;
        u8 node_id;
+       u16 l3_nshared = 0;
+
+       if (cpuid_edx(0x80000006)) {

Ok, so Janakarajan did some L3 detection:

https://lkml.kernel.org/r/cover.1497452002.git.janakarajan.natara...@amd.com

and now that you need it too, I'd like you to refactor and unify that
L3 detection code and put it in arch/x86/kernel/cpu/intel_cacheinfo.c
(yes, we will rename that file one fine day :-)) along with accessors
for other users to call. init_amd_cacheinfo() looks good to me right
now.


Let me look into refactoring this for reuse here.

+               cpuid_count(0x8000001d, 3, &eax, &ebx, &ecx, &edx);
+               l3_nshared = ((eax >> 14) & 0xfff) + 1;
+       }

        cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);

        node_id  = ecx & 0xff;
        smp_num_siblings = ((ebx >> 8) & 0xff) + 1;

-       if (c->x86 == 0x15)
-               c->cu_id = ebx & 0xff;
-
-       if (c->x86 >= 0x17) {
-               c->cpu_core_id = ebx & 0xff;
-
-               if (smp_num_siblings > 1)
-                       c->x86_max_cores /= smp_num_siblings;
-       }
+       switch (c->x86) {
+       case 0x17: {
+               u32 tmp, ccx_offset, cpu_offset;

-       /*
-        * We may have multiple LLCs if L3 caches exist, so check if we
-        * have an L3 cache by looking at the L3 cache CPUID leaf.
-        */
-       if (cpuid_edx(0x80000006)) {
-               if (c->x86 == 0x17) {
+               /*
+                * In family 17h, the CPUID_Fn8000001E_EBX[7:0] (CoreId)
+                * is non-contiguous in downcore and non-SMT cases.
+                * Fixup the cpu_core_id to be contiguous for cores within
+                * the die.

Why do we need it to be contiguous? It is not contiguous on Intel too.

Please see the description above. I have also appended a small user-space sketch of this decoding at the end of this mail.

+                */
+               tmp = ebx & 0xff;
+               if (smp_num_siblings == 1) {
                        /*
-                        * LLC is at the core complex level.
-                        * Core complex id is ApicId[3].
+                        * CoreId bit-encoding for SMT-disabled
+                        * [7:4] : die
+                        * [3]   : ccx
+                        * [2:0] : core
                         */
-                       per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
+                       ccx_offset = ((tmp >> 3) & 1) * l3_nshared;
+                       cpu_offset = tmp & 7;
                } else {
-                       /* LLC is at the node level. */
-                       per_cpu(cpu_llc_id, cpu) = node_id;
+                       /*
+                        * CoreId bit-encoding for SMT-enabled
+                        * [7:3] : die
+                        * [2]   : ccx
+                        * [1:0] : core
+                        */
+                       ccx_offset = ((tmp >> 2) & 1) * l3_nshared /
+                                      smp_num_siblings;
+                       cpu_offset = tmp & 3;
+                       c->x86_max_cores /= smp_num_siblings;
+
                }
+               c->cpu_core_id = ccx_offset + cpu_offset;
+
+               /*
+                * Family17h L3 cache (LLC) is at Core Complex (CCX).
+                * There could be multiple CCXs in a node.
+                * CCX ID is ApicId[3].
+                */
+               per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
+
+               pr_debug("Fixup coreid:%#x to cpu_core_id:%#x\n",
+                        tmp, c->cpu_core_id);
+               break;
+       }
+       case 0x15:
+               c->cu_id = ebx & 0xff;
+               /* Fall through */
+       default:
+               /* LLC defaults to L3, which is generally per-node */
+               if (l3_nshared > 0)
+                       per_cpu(cpu_llc_id, cpu) = node_id;

If this needs to be executed unconditionally, just move it out of the
switch-case.


Well, for now, it has to stay in the non-family-17h path, which does look odd. I am planning to clean this up as well.
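For reference, here is a minimal user-space sketch (not part of the patch) of the family-17h CoreId decoding the patch performs. The example inputs are assumptions that mirror the 6-core downcore, SMT-enabled case above: raw CoreIds {0, 1, 2, 4, 5, 6} within one die, smp_num_siblings = 2, and l3_nshared = 6 (assuming 3 cores x 2 threads share each CCX's L3):

#include <stdio.h>

static unsigned int decode_core_id(unsigned int coreid,
                                   unsigned int smp_num_siblings,
                                   unsigned int l3_nshared)
{
        unsigned int ccx_offset, cpu_offset;

        if (smp_num_siblings == 1) {
                /* SMT-disabled CoreId encoding: [7:4] die, [3] ccx, [2:0] core */
                ccx_offset = ((coreid >> 3) & 1) * l3_nshared;
                cpu_offset = coreid & 7;
        } else {
                /* SMT-enabled CoreId encoding: [7:3] die, [2] ccx, [1:0] core */
                ccx_offset = ((coreid >> 2) & 1) * l3_nshared / smp_num_siblings;
                cpu_offset = coreid & 3;
        }

        return ccx_offset + cpu_offset;
}

int main(void)
{
        unsigned int raw[] = { 0, 1, 2, 4, 5, 6 };
        unsigned int i;

        for (i = 0; i < sizeof(raw) / sizeof(raw[0]); i++)
                printf("coreid %#x -> cpu_core_id %u\n",
                       raw[i], decode_core_id(raw[i], 2, 6));

        return 0;
}

With these assumed inputs the sketch yields cpu_core_id 0..5, i.e. contiguous within the die, which is what the "core id" column in /proc/cpuinfo should show after the patch.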

Thanks,
Suravee
