Hi, OK, I tried:
<vcpu placement='static'>2</vcpu>
<iothreads>2</iothreads>
<iothreadids>
  <iothread id='1'/>
  <iothread id='2'/>
</iothreadids>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <emulatorpin cpuset='0-1'/>
  <iothreadpin iothread='1' cpuset='0'/>
  <iothreadpin iothread='2' cpuset='1'/>
</cputune>

as well as

<vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>

and it does show up in the cpuset cgroup:

# cat cpuset.cpus
0-1
# cat cpuset.effective_cpus
0-1

And yes, the CPU power is reduced to two cores. But /proc/cpuinfo still shows _all_ CPU cores of the host machine, and the total CPU usage / load, if queried, is the usage / load of the host machine. That confuses applications that monitor the CPU count and load and try to balance things accordingly.
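The mismatch is easy to see with standard tools inside the container (hedging a little here: nproc goes through sched_getaffinity(), so it should honour the cpuset pinning, while the /proc files are simply the host's):

# run inside the container
nproc                               # CPUs the tasks may actually run on -> 2 with the pinning above
grep -c '^processor' /proc/cpuinfo  # counts the host's processor entries -> all host cores
cat /proc/loadavg                   # host-wide load average, not the container's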
Is there really no way to tell libvirt to set up the cgroups in a way that only X CPUs are visible to the system? When I run things purely through LXC or LXD this is no problem, but I would like to handle it through libvirt, because the qemu-kvm guests are managed through libvirt as well.

Any ideas? I'll also take dark hacking stuff :) (a rough, untested sketch of the kind of hack I mean is at the very bottom, below the quoted mail)

Greetings
Oliver

On 16.09.19 at 10:58, Martin Kletzander wrote:
> On Sun, Sep 15, 2019 at 12:21:08PM +0200, i...@layer7.net wrote:
>> Hi folks!
>>
>> I created a server with this XML file:
>>
>> <domain type='lxc'>
>>   <name>lxctest1</name>
>>   <uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid>
>>   <metadata>
>>     <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
>>       <libosinfo:os id="http://centos.org/centos/6.9"/>
>>     </libosinfo:libosinfo>
>>   </metadata>
>>   <memory unit='KiB'>1024000</memory>
>>   <currentMemory unit='KiB'>1024000</currentMemory>
>>   <vcpu>2</vcpu>
>>   <numatune>
>>     <memory mode='strict' placement='auto'/>
>>   </numatune>
>>   <resource>
>>     <partition>/machine</partition>
>>   </resource>
>>   <os>
>>     <type arch='x86_64'>exe</type>
>>     <init>/sbin/init</init>
>>   </os>
>>   <idmap>
>>     <uid start='0' target='200000' count='65535'/>
>>     <gid start='0' target='200000' count='65535'/>
>>   </idmap>
>>   <features>
>>     <privnet/>
>>   </features>
>>   <clock offset='utc'/>
>>   <on_poweroff>destroy</on_poweroff>
>>   <on_reboot>restart</on_reboot>
>>   <on_crash>destroy</on_crash>
>>   <devices>
>>     <emulator>/usr/libexec/libvirt_lxc</emulator>
>>     <filesystem type='mount' accessmode='mapped'>
>>       <source dir='/mnt'/>
>>       <target dir='/'/>
>>     </filesystem>
>>     <interface type='network'>
>>       <mac address='00:16:3e:3e:3e:bb'/>
>>       <source network='Public Network'/>
>>     </interface>
>>     <console type='pty'>
>>       <target type='lxc' port='0'/>
>>     </console>
>>   </devices>
>> </domain>
>>
>> I would expect it to have 2 CPU cores and 1 GB RAM.
>>
>> The RAM config works.
>> The CPU config does not:
>>
> You probably checked /proc/meminfo. That is provided by libvirt using a FUSE
> filesystem, but at least it is guaranteed thanks to cgroups. We do not (and I
> don't think we even can, at least reliably) do that with cpuinfo.
>
> [...]
>
>> It gives me all CPUs from the host.
>>
> Although if you ran some perf benchmark it should just cap at 2 CPUs.
>
>> I also tried it with
>>
>> <cpu>
>>   <topology sockets='1' cores='2' threads='1'/>
>> </cpu>
>>
> We should not allow this, IMO. The reason is that we cannot guarantee or even
> emulate this (or even the number of CPUs, for that matter). That's not how
> containers work. We can provide /proc/cpuinfo through a FUSE filesystem, but if
> the code actually asks the CPU directly, there is no layer in which to emulate
> the returned information.
>
>> That didn't help either.
>>
>> I tried to modify the vcpus through virsh:
>>
>> # virsh -c lxc:/// setvcpus lxctest1 2
>>
>> error: this function is not supported by the connection driver:
>> virDomainSetVcpus
>>
> This should not work for LXC, but it does not make sense, because if you look at
> the XML we allow `<vcpus>2</vcpus>`.
>
>> Which didn't work either.
>>
>> This happens on:
>>
> Unfortunately anywhere, for the reasons said above. Ideally it should not be
> possible to specify <vcpus/>, but rather just <cputune/>, but I don't think we can
> change those semantics now that we have supported the former for quite some time.
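PS: The kind of dark hack I had in mind, in case someone wants to pick it apart: plain LXC/LXD get their per-container /proc/cpuinfo, /proc/stat and so on from lxcfs, so maybe the same trick can be wired up under libvirt-lxc by handing the host's lxcfs tree into the container and bind-mounting its files over /proc from an early init script. Completely untested with libvirt on my side; the snippet below is only a sketch and assumes lxcfs is running on the host at its usual mount point /var/lib/lxcfs:

<!-- extra entry in the domain XML: expose the host's lxcfs mount inside the container -->
<filesystem type='mount'>
  <source dir='/var/lib/lxcfs'/>
  <target dir='/var/lib/lxcfs'/>
</filesystem>

# inside the container, as early as possible during boot:
mount --bind /var/lib/lxcfs/proc/cpuinfo /proc/cpuinfo
mount --bind /var/lib/lxcfs/proc/stat    /proc/stat
mount --bind /var/lib/lxcfs/proc/loadavg /proc/loadavg
mount --bind /var/lib/lxcfs/proc/uptime  /proc/uptime

That still would not help programs that read the CPU count from /sys or ask the hardware directly, which I guess is exactly Martin's point.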