Below is the logic I am now using to solve this problem.

I run a Python script that expands the allocated CPU set into a comma-separated list of cores.

For example, take the startup.conf below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: vpp-startup-conf
data:
  startup.conf: |
    unix {
      nodaemon
      log /tmp/vpp.log
      full-coredump
      gid vpp
      interactive
      cli-listen /run/vpp/cli2.sock
      exec /openair-upf/etc/init.conf
    }
    cpu {
      main-core CORE1
      corelist-workers CORE2
    }
    api-trace {
      on
    }
    api-segment {
      gid vpp
    }
    plugins {
      path /usr/lib/x86_64-linux-gnu/vpp_plugins/
      plugin dpdk_plugin.so { disable }
      plugin gtpu_plugin.so { disable }
      plugin upf_plugin.so { enable }
    }


Then I use the following steps to replace CORE1 and CORE2 as part of the
command arguments in the deployment manifest file.



mainCore=$(python /var/run/vpp/cpuCoresExtract.py "$(cat /sys/fs/cgroup/cpuset/cpuset.cpus)" | tr -d '[] ' | cut -d ',' -f1)
workerCores=$(python /var/run/vpp/cpuCoresExtract.py "$(cat /sys/fs/cgroup/cpuset/cpuset.cpus)" | tr -d '[] ' | cut -d ',' -f2-)

sed -i "s@CORE1@$mainCore@g;s@CORE2@$workerCores@g" /openair-upf/etc/startup.conf

Contents of cpuCoresExtract.py:




import itertools
import sys

s = sys.argv[1]
print(list(itertools.chain.from_iterable(
    range(int(ranges[0]), int(ranges[1]) + 1)
    for ranges in ((el + [el[0]])[:2]
                   for el in (miniRange.split('-')
                              for miniRange in s.split(','))))))

Reference to where I got the python script:
https://stackoverflow.com/questions/18759512/expand-a-range-which-looks-like-1-3-6-8-10-to-1-2-3-6-8-9-10

This is what I was able to come up with for now: the first core always
goes to the main-core, while the remaining cores are allocated to the
corelist-workers. An actual example is given below:



root@vpp-ds-memif-bridge-vpbpk:/vpp# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-3,18-20



mainCore=$(python /var/run/vpp/cpuCoresExtract.py "$(cat /sys/fs/cgroup/cpuset/cpuset.cpus)" | tr -d '[] ' | cut -d ',' -f1)
workerCores=$(python /var/run/vpp/cpuCoresExtract.py "$(cat /sys/fs/cgroup/cpuset/cpuset.cpus)" | tr -d '[] ' | cut -d ',' -f2-)





root@vpp-ds-memif-bridge-vpbpk:/vpp# echo $mainCore
0
root@vpp-ds-memif-bridge-vpbpk:/vpp# echo $workerCores
1,2,3,18,19,20




On Wed, Jul 13, 2022 at 9:37 PM Christopher Adigun <future...@gmail.com>
wrote:

> Hi,
>
> I am currently trying to allocate different CPU cores to the *main-core*
> and *corelist-workers* in a container automatically. While this can be
> done manually, it is difficult to achieve in a container environment
> because I don't know which specific CPU cores will be allocated
> beforehand.
>
> Example below is a kubernetes manifest:
>
>
> apiVersion: v1
> kind: Pod
> metadata:
>   name: test-memif-cni
>   annotations:
>     k8s.v1.cni.cncf.io/networks: mem-if-n3-conf
>   labels:
>     env: test
> spec:
>   containers:
>   - name: vpp
>     image: ligato/vpp-base:22.06-release
>     imagePullPolicy: IfNotPresent
>     command: ["sleep"]
>     args: [ "infinity" ]
>     resources:
>       requests:
>         cpu: "7"
>         memory: 4Gi
>         hugepages-1Gi: 1Gi
>       limits:
>         cpu: "7"
>         memory: 4Gi
>         hugepages-1Gi: 1Gi
>
> 7 CPU cores are requested; checking the pod, the following is the CPU set
> that was allocated:
>
>
> root@test-memif-cni:/vpp# cat /sys/fs/cgroup/cpuset/cpuset.cpus
> 0-3,18-20
>
> How can the first core (i.e. 0 in this case) always be used for the
> main-core while the remaining cores (1-3,18-20) are used by the
> corelist-workers, without having to configure this manually?
>
> *N.B - Set of CPU cores that will be allocated is controlled by kubernetes
> and can change, for instance it can be 1,4-10.*
>
> Thanks
>
> Christopher
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21677): https://lists.fd.io/g/vpp-dev/message/21677
-=-=-=-=-=-=-=-=-=-=-=-