> -----Original Message-----
> From: Gowrishankar Muthukrishnan [mailto:gowrishankar.m at linux.vnet.ibm.com]
> Sent: Tuesday, August 16, 2016 11:28 AM
> To: dev at dpdk.org
> Cc: Chao Zhu <chaozhu at linux.vnet.ibm.com>; Richardson, Bruce <bruce.richardson at intel.com>; Ananyev, Konstantin <konstantin.ananyev at intel.com>; Thomas Monjalon <thomas.monjalon at 6wind.com>; Dumitrescu, Cristian <cristian.dumitrescu at intel.com>; Pradeep <pradeep at us.ibm.com>
> Subject: [PATCH v6 8/9] ip_pipeline: fix lcore mapping for varying SMT threads as in ppc64
>
> This patch fixes an ip_pipeline panic in app_init_core_map while preparing the CPU core map on powerpc with SMT off. cpu_core_map_compute_linux currently prepares the core mapping based on the existence of the following files in sysfs:
>
> /sys/devices/system/cpu/cpu<LCORE_NUM>/topology/physical_package_id
> /sys/devices/system/cpu/cpu<LCORE_NUM>/topology/core_id
>
> These files do not exist for lcores which are offline for any reason (as on powerpc while SMT is off). In this situation, the function should continue preparing the map for the remaining online lcores instead of returning -1 at the first unavailable lcore.
>
> Also, in the SMT=off scenario on powerpc, lcore ids are not always indexed from 0 up to 'number of cores present' (/sys/devices/system/cpu/present). For example, for an online lcore 32, the core_id returned by sysfs is 112 while only 10 lcores are online (as in one configuration); hence the sysfs lcore id cannot be checked against the lcore index before positioning the lcore map array.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m at linux.vnet.ibm.com>
> ---
>  examples/ip_pipeline/cpu_core_map.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/examples/ip_pipeline/cpu_core_map.c b/examples/ip_pipeline/cpu_core_map.c
> index cb088b1..dd8f678 100644
> --- a/examples/ip_pipeline/cpu_core_map.c
> +++ b/examples/ip_pipeline/cpu_core_map.c
> @@ -351,8 +351,10 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>  			int lcore_socket_id =
>  				cpu_core_map_get_socket_id_linux(lcore_id);
>
> +#if !defined(RTE_ARCH_PPC_64)
>  			if (lcore_socket_id < 0)
>  				return -1;
> +#endif
>
>  			if (((uint32_t) lcore_socket_id) == socket_id)
>  				n_detected++;
> @@ -368,6 +370,7 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>  					cpu_core_map_get_socket_id_linux(
>  					lcore_id);
>
> +#if !defined(RTE_ARCH_PPC_64)
>  				if (lcore_socket_id < 0)
>  					return -1;
>
> @@ -377,9 +380,14 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>
>  				if (lcore_core_id < 0)
>  					return -1;
> +#endif
>
> +#if !defined(RTE_ARCH_PPC_64)
>  				if (((uint32_t) lcore_socket_id == socket_id) &&
>  					((uint32_t) lcore_core_id == core_id)) {
> +#else
> +				if (((uint32_t) lcore_socket_id == socket_id)) {
> +#endif
>  					uint32_t pos = cpu_core_map_pos(map,
>  						socket_id,
>  						core_id_contig,
> --
> 1.9.1
This patch only changes the code for PPC CPUs. I don't have the hardware to check it myself, but I will take Gowrishankar's and Chao's word that it is the right thing to do for PPC CPUs, so ...

Acked-by: Cristian Dumitrescu <cristian.dumitrescu at intel.com>