Actually, I have an LVM setup with the SSD for the Fedora OS and a hard drive for the swap.
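A quick way to confirm that layout is a couple of standard util-linux commands; a minimal sketch, assuming a stock Fedora install (device and volume names are whatever your system reports):

    # List active swap devices/files and their sizes
    swapon --show
    # Show which physical disk each logical volume and the swap sit on
    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
    # Memory and swap usage at a glance
    free -h

If swapon points at a logical volume on the spinning disk, heavy swapping while the VM runs would go a long way toward explaining a sluggish host afterwards.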
2015-12-29 11:50 GMT+01:00 thibaut noah <thibaut.n...@gmail.com>:
> Sure, but how do I do that?
> I can check the memory usage on the guest using RivaTuner (I don't use
> more than 9 GB of RAM, I have 16); could the problem come from the
> balloon option at the end of the XML? I don't know how to check the
> rest, though.
>
> For disk IO I use a separate SSD for the VM, and all the other disks are
> not mounted when I launch it (it will fail otherwise, if I remember
> correctly). But speaking of swap, I need to check where the swap for
> Fedora is; maybe there is something worth checking there.
>
> I already reduced the memory I assign to the Windows VM because
> otherwise it would fail; apparently Fedora likes to grab a lot of RAM
> for nothing.
>
> 2015-12-29 11:38 GMT+01:00 Karsten Elfenbein <karsten.elfenb...@gmail.com>:
>
>> If it takes time to recover after exiting the VM, I think you have a
>> different problem than CPU pinning.
>> While my Windows VM is idle, all CPU cores are idle on the host as well.
>>
>> Time to recover sounds more like a system that swapped out memory to
>> make room for the VM. After exiting, it needs the swapped-out process
>> memory back, which takes time.
>> Can you check the memory/swap/disk IO usage?
>>
>> 2015-12-29 11:13 GMT+01:00 thibaut noah <thibaut.n...@gmail.com>:
>> > I'll try to set the topology tonight.
>> > Fedora still takes time to be fully operational after exiting
>> > Windows, and I only use Windows without putting more load on Fedora,
>> > which is weird.
>> > My goal in tweaking the vCPUs is to get the best possible performance
>> > I can reach without crippling the host OS.
>> > Dual Xeons are indeed expensive, especially those at 3+ GHz, and I
>> > don't know if I need that much CPU for Windows. I can try using just
>> > 2 physical cores and see how it goes for both OSes. It would be nice
>> > if I could apply the new patch to move my keyboard from guest to
>> > host T_T
>> >
>> > 2015-12-28 10:54 GMT+01:00 Karsten Elfenbein <karsten.elfenb...@gmail.com>:
>> >>
>> >> It just hands the layout of your CPU to the guest, so apps can
>> >> decide on threading.
>> >>
>> >> Pinning the CPUs to avoid CPUs 0 and 4 will reduce performance a
>> >> bit, as a complete physical core is reserved for the host OS.
>> >> There are faster desktop CPUs to compensate if needed, but dual
>> >> Xeons that can keep up in single-thread performance are expensive,
>> >> and you would need 2 CPUs + memory + board.
>> >>
>> >> 2015-12-28 10:24 GMT+01:00 Eddie Yen <missile0...@gmail.com>:
>> >> >
>> >> > This should provide the guest some details of the provided vCPUs:
>> >> >
>> >> > <topology sockets='1' cores='3' threads='2'/>
>> >> >
>> >> > I tested this method before, using the 4820K.
>> >> > The result I got was that performance was no better than setting
>> >> > all threads as cores when I tested 3DMark.
>> >> > But I didn't pin threads 0 and 4 to the host.
>> >> >
>> >> > I'll test it again soon.
>> >> >
>> >> > 2015-12-28 17:13 GMT+08:00 Karsten Elfenbein <karsten.elfenb...@gmail.com>:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> that looks good. Processors 0 and 4 are not used in the pinning
>> >> >> and remain with the host OS.
>> >> >> https://en.wikipedia.org/wiki/Hyper-threading
>> >> >> Basically, as soon as one OS loads a physical core, the other
>> >> >> HT sibling on that core will be able to do a lot less work.
>> >> >>
>> >> >> This should provide the guest some details of the provided vCPUs:
>> >> >> <topology sockets='1' cores='3' threads='2'/>
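For Karsten's question above about memory, swap and disk IO, something along these lines should do; a minimal sketch, assuming procps-ng and sysstat are installed and that the QEMU process name matches (on Fedora it may well be qemu-kvm rather than qemu-system-x86_64):

    # Memory and swap at a glance
    free -h
    # si/so columns show pages swapped in/out every 5 seconds
    vmstat 5
    # Per-device utilisation; a busy swap disk shows up here
    iostat -x 5
    # Swap used by the QEMU process itself (process name is an assumption)
    grep VmSwap /proc/$(pidof qemu-system-x86_64)/status

As for the balloon question: if memory serves, setting <memballoon model='none'/> in the domain XML disables ballooning entirely, which would at least rule it out as a factor.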
>> >> >>
>> >> >> 2015-12-28 6:55 GMT+01:00 thibaut noah <thibaut.n...@gmail.com>:
>> >> >> > So is this the right emulation? If I understand correctly, what
>> >> >> > I'm trying to get is 3 cores with 2 threads per core? (so 6
>> >> >> > vCPUs pinned?)
>> >> >> > I don't have that much knowledge about CPUs, so I'm kind of
>> >> >> > lost here.
>> >> >> >
>> >> >> > <cputune>
>> >> >> >   <vcpupin vcpu='0' cpuset='1'/>
>> >> >> >   <vcpupin vcpu='1' cpuset='2'/>
>> >> >> >   <vcpupin vcpu='2' cpuset='3'/>
>> >> >> >   <vcpupin vcpu='3' cpuset='5'/>
>> >> >> >   <vcpupin vcpu='4' cpuset='6'/>
>> >> >> >   <vcpupin vcpu='5' cpuset='7'/>
>> >> >> > </cputune>
>> >> >> >
>> >> >> > 2015-12-26 18:53 GMT+01:00 Karsten Elfenbein <karsten.elfenb...@gmail.com>:
>> >> >> >>
>> >> >> >> Check that the 2 processors left for the host OS are on the
>> >> >> >> same physical core and that the VM does not use those 2
>> >> >> >> processors with the same core id.
>> >> >> >>
>> >> >> >> cat /proc/cpuinfo | grep -e processor -e 'core id'
>> >> >> >>
>> >> >> >> CPUs 0-3 should be cores 0-3.
>> >> >> >> CPUs 4-7 should be cores 0-3 again, with HT.
>> >> >> >>
>> >> >> >> So leaving out CPUs 0 and 4 should keep core 0 idle and the
>> >> >> >> host OS responsive.
>> >> >> >>
>> >> >> >> taskset -c 1-3,5-7 startVM.sh
>> >> >> >>
>> >> >> >> Karsten
>> >> >> >>
>> >> >> >> 2015-12-26 12:56 GMT+01:00 Eddie Yen <missile0...@gmail.com>:
>> >> >> >> > Pinning CPUs can reduce the chance that host and guest use
>> >> >> >> > the same thread at the same time.
>> >> >> >> >
>> >> >> >> > And I don't know which programs you're using on Fedora, so I
>> >> >> >> > can't really say whether 2 threads are enough for the host
>> >> >> >> > or not. It all depends on your use case.
>> >> >> >> >
>> >> >> >> > 2015-12-26 19:44 GMT+08:00 thibaut noah <thibaut.n...@gmail.com>:
>> >> >> >> >>
>> >> >> >> >> My CPU is an i7 4790K, is pinning the CPU useful? I'll try
>> >> >> >> >> to derp a bit with the vCPU thing, I'm just afraid 2 threads
>> >> >> >> >> aren't enough for Fedora :/
>> >> >> >> >>
>> >> >> >> >> 2015-12-26 11:59 GMT+01:00 Eddie Yen <missile0...@gmail.com>:
>> >> >> >> >>>
>> >> >> >> >>> I forgot your CPU type, so I don't know about your case.
>> >> >> >> >>>
>> >> >> >> >>> But, as I'm using a 4820K, I usually give 4 to 6 threads to
>> >> >> >> >>> the VM and only use 2 threads for the Fedora host.
>> >> >> >> >>> And most important are the vCPU tweaks, especially CPU
>> >> >> >> >>> topology and Hyper-V.
>> >> >> >> >>>
>> >> >> >> >>> For me, I usually set the topology as "sockets=1 cores=6
>> >> >> >> >>> threads=1" if using 6 threads from the host.
>> >> >> >> >>> Then I set cpuset= to make each vCPU work on the intended
>> >> >> >> >>> CPU thread.
>> >> >> >> >>>
>> >> >> >> >>> IME, setting all threads as vCPU cores gives better
>> >> >> >> >>> performance on Windows 10.
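To double-check Karsten's core/thread mapping before committing to that <cputune> block, something like this should work; a small sketch, assuming an i7 4790K-style layout (4 cores / 8 threads) as discussed above, with startVM.sh standing in for whatever script launches the VM:

    # One row per logical CPU; siblings share the same CORE number
    # (on this kind of CPU, CPU 0 and CPU 4 typically both report CORE 0)
    lscpu --extended=CPU,CORE,SOCKET,ONLINE

    # The same information straight from the kernel
    cat /proc/cpuinfo | grep -e processor -e 'core id'

    # Keep the VM launcher off CPUs 0 and 4; taskset needs -c for a CPU list
    taskset -c 1-3,5-7 startVM.sh

One hedged side note: with <topology sockets='1' cores='3' threads='2'/>, the guest treats vCPU pairs 0/1, 2/3 and 4/5 as HT siblings, so mapping each pair onto a real host sibling pair such as 1/5, 2/6 and 3/7 would keep the guest's view consistent with the hardware; the ordering in the example above is an assumption either way, so verify against the lscpu output.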
>> >> >> >> >>>
>> >> >> >> >>> 2015-12-26 18:00 GMT+08:00 thibaut noah <thibaut.n...@gmail.com>:
>> >> >> >> >>>>
>> >> >> >> >>>> Hello guys, merry Christmas! o/
>> >> >> >> >>>> My current issue is VM and host optimization. I use my
>> >> >> >> >>>> Windows 10 VM for gaming purposes only (like most of us,
>> >> >> >> >>>> I think). The problem is that to keep my performance on
>> >> >> >> >>>> Windows high, I drain too many resources from my Fedora
>> >> >> >> >>>> host, making it almost useless (also, when leaving the VM
>> >> >> >> >>>> I most of the time find myself unable to use Fedora at
>> >> >> >> >>>> all, so I have to reboot...).
>> >> >> >> >>>> So is it possible to push performance to the max on the
>> >> >> >> >>>> guest without almost killing the host?
>> >> >> >> >>>> Should I consider switching my gear to a dual Xeon setup?
>> >> >> >> >>>> (to assign one CPU fully to the host and one to the guest)
>> >> >> >> >>>> I'm actually not sure what happens here; has anyone run
>> >> >> >> >>>> into the same sort of issue?
>> >> >> >> >>>> Have a good day
_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users