[Xen-devel] null scheduler bug
Hi, I implemented Xen Hypervisor 4.9.2 on an UltraZed-EG board with carrier card, following these steps:
1.) installed PetaLinux on Ubuntu 16.04
2.) downloaded the UltraZed-EG IO Carrier Card - PetaLinux 2018.2 Standard BSP
3.) created a project: petalinux-create -t project -s
4.) copied xen-overlay.dtsi from the ZCU102 project (also made with a BSP) to my project
5.) built the Xen hypervisor following this guide link <http://www.wiki.xilinx.com/Building%20Xen%20Hypervisor%20with%20Petalinux%202018.1> (booting with SD card)
6.) made a bare-metal application that blinks a PS LED with Vivado
7.) measured signal jitter when the application is on the board, with and without Xen

I managed to decrease the jitter by adding the following to xen-bootargs in the xen-overlay.dtsi file: sched=null vwfi=native.

The problem is that when I add sched=null vwfi=native, I can create my bare-metal application with xl create and destroy it with xl destroy (it disappears from xl list), but when I want to start it again (xl create) this error pops up:

root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
(XEN) IRQ 48 is already used by domain 1
libxl: error: libxl_create.c:1278:domcreate_launch_dm: Domain 2:failed give domain access to irq 48: Device or resource busy
libxl: error: libxl_domain.c:1003:libxl__destroy_domid: Domain 2:Non-existant domain
libxl: error: libxl_domain.c:962:domain_destroy_callback: Domain 2:Unable to destroy guest
libxl: error: libxl_domain.c:889:domain_destroy_cb: Domain 2:Destruction of domain failed

When I remove sched=null vwfi=native and build the project again, everything works fine: I can create and destroy the bm app as many times as I want. Attached are the bare-metal application's configuration file, xl dmesg, dmesg, the xl -v create output, the xen-overlay.dtsi file and the system-user.dtsi file. Thank you in advance, best regards, Milan Boberic.
bm1.cfg:

name = "bm1"
kernel = "ultraled.bin"
memory = 8
vcpus = 1
cpus = [2]
irqs = [ 48 ]
iomem = [ "0xff0a0,1" ]
hap=0

dom0 dmesg (excerpt):

[0.00] Booting Linux on physical CPU 0x0
[0.00] Linux version 4.14.0-xilinx-v2018.2 (oe-user@oe-host) (gcc version 7.2.0 (GCC)) #1 SMP Wed Sep 12 13:32:35 CEST 2018
[0.00] Boot CPU: AArch64 Processor [410fd034]
[0.00] Machine model: xlnx,zynqmp
[0.00] Xen 4.9 support found
[0.00] efi: Getting EFI parameters from FDT:
[0.00] efi: UEFI not found.
[0.00] cma: Reserved 256 MiB at 0x6000
[0.00] On node 0 totalpages: 196608
[0.00] DMA zone: 2688 pages used for memmap
[0.00] DMA zone: 0 pages reserved
[0.00] DMA zone: 196608 pages, LIFO batch:31
[0.00] psci: probing for conduit method from DT.
[0.00] psci: PSCIv0.2 detected in firmware.
[0.00] psci: Using standard PSCI v0.2 function IDs
[0.00] psci: Trusted OS migration not required
[0.00] random: fast init done
[0.00] percpu: Embedded 21 pages/cpu @ffc03ffb7000 s46488 r8192 d31336 u86016
[0.00] pcpu-alloc: s46488 r8192 d31336 u86016 alloc=21*4096
[0.00] pcpu-alloc: [0] 0
[0.00] Detected VIPT I-cache on CPU0
[0.00] CPU features: enabling workaround for ARM erratum 845719
[0.00] Built 1 zonelists, mobility grouping on. Total pages: 193920
[0.00] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen maxcpus=1 clk_ignore_unused
[0.00] PID hash table entries: 4096 (order: 3, 32768 bytes)
[0.00] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
[0.00] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
[0.00] Memory: 423276K/786432K available (9980K kernel code, 644K rwdata, 3132K rodata, 512K init, 2168K bss, 101012K reserved, 262144K cma-reserved)
[0.00] Virtual kernel memory layout:
[0.00] modules : 0xff80 - 0xff800800 ( 128 MB)
[0.00] vmalloc : 0xff800800 - 0xffbebfff ( 250 GB)
[0.00] .text : 0xff800808 - 0xff8008a4 ( 9984 KB)
[0.00] .rodata : 0xff8008a4 - 0xff8008d6 ( 3200 KB)
[0.00] .init : 0xff8008d6 - 0xff8008de ( 512 KB)
[0.00] .data : 0xff8008de - 0xff8008e81200 ( 645 KB)
[0.00] .bss : 0xff8008e81200 - 0xff800909f2b0 ( 2169 KB)
[0.00] fixed : 0xffbefe7fd000 - 0xffbefec0 ( 4108 KB)
[0.00] PCI I/O : 0xffbefee0 - 0xffbeffe0 (16 MB)
[0.00] vmemmap : 0xffbf - 0xffc0 ( 4 GB maximum)
[0.00] 0xffbf0070 - 0xffbf0188 (17 MB actual)
[0.00] memory : 0xffc02000 - 0xffc07000 ( 1280 MB)
[0.00] Hierarchical RCU implementation.
[0.00] RCU event tracing is enabled.
[0.00] RCU restri
Re: [Xen-devel] null scheduler bug
I'm sorry for the HTML and for dropping xen-devel and the other CCs; I missed reading the rules. I tried the 4.10 version and checked for the commits you asked about in your earlier reply:
2b936ea7b "xen: RCU: avoid busy waiting until the end of grace period."
38ad8151f "xen: RCU: don't let a CPU with a callback go idle."
The commits are there and I will definitely continue with the 4.10 version. But it didn't solve my problem entirely. I create my bare-metal application (with xl create) and destroy it with xl destroy (it disappears from xl list), and when I try to create it again the same error pops up, but if I immediately run the xl create command again it creates it without error. If I run xl create twice fast, sometimes the bare-metal application isn't in any state (it should be in the "running" state). If I wait some time (approximately between 30 and 90 seconds) after the destruction of that bm app and then run xl create, it will create it without error. This is the GitHub repository of the Xen I use: https://github.com/Xilinx/xen/tree/xilinx/stable-4.10 Is there anything else I can send that would be helpful? Thanks in advance, best regards, Milan Boberic. On Thu, Sep 13, 2018 at 10:24 AM Dario Faggioli wrote: > > Hi, > > So, first of all: > 1. use plaintext, not HTML > 2. don't drop the xen-devel list (and other Cc-s) when replying. > > :-) > > That being said... > > On Thu, 2018-09-13 at 09:38 +0200, Milan Boberic wrote: > > Hi Dario, > > yes, passthrough is involved. > > > Ok. > > > This is everything I did so far: > > > > I implemented Xen Hypervisor 4.9.2 on UltraZed-EG board with carrier > > card following these steps: > > 1.) installed petalinux 2018.2 on Ubuntu 16.04 > > 2.) downloaded UltraZed-EG IO Carrier Card - PetaLinux 2018.2 > > Standard BSP > > 3.) created project: petalinux-create -t project -s > > 4.) copied xen-overlay.dtsi from ZCU102 project (also made with BSP) > > to my project > > 5.) built xen hypervisor following this guide link (booting with SD > > card) > > 6.)
made a bare-metal application that blinks a PS LED with Vivado > > 7.) measured signal jitter when the application is on the board, with and > > without Xen > > > I am not familiar with building an SD-Card image for an ARM system with > Xen on it. What I can tell, though, is that Xen 4.9.2 does not look to > me to include the RCU subsystem fixes that I mentioned in my reply. > > I don't recall why we did not backport them. It may be that they're not > entirely trivial, and they fix a behavior which manifests only in not > fully supported cases. Or that we (well, I :-/) forgot. > > I can have a look at how challenging it is to apply the patches to > 4.9.x (but no hard feelings if anyone beats me at it, I promise > :-D). > > In the meantime, if you have the chance of trying Xen 4.10 or 4.10.1, > which have those fixes, that would be great. In fact, in case everything > works with such a version, that would be another clue that the RCU > issue is the actual culprit. > > > I managed to decrease the jitter by adding sched=null vwfi=native in the xen- > > overlay.dtsi file in xen-bootargs.
> > > > The problem is when I add sched=null vwfi=native, I can create my > > bare-metal application with xl create and destroy it with xl destroy > > (it disappears from xl list) but when I want to start it again (xl > > create) this error pops up: > > > > root@uz3eg-iocc-2018-2:~# xl create bm1.cfg > > Parsing config from bm1.cfg > > (XEN) IRQ 48 is already used by domain 1 > > libxl: error: libxl_create.c:1278:domcreate_launch_dm: Domain > > 2:failed give domain access to irq 48: Device or resource busy > > libxl: error: libxl_domain.c:1003:libxl__destroy_domid: Domain 2:Non- > > existant domain > > libxl: error: libxl_domain.c:962:domain_destroy_callback: Domain > > 2:Unable to destroy guest > > libxl: error: libxl_domain.c:889:domain_destroy_cb: Domain > > 2:Destruction of domain failed > > > > When I remove sched=null vwfi=native and build the project again > > everything works fine, I can create and destroy the bm app as many times > > as I want. > > > Yes. If you look at the email (and at the full thread) I put links to, > you'll see that the behavior is exactly the same. It mentions the > Credit2 scheduler not working, rather than null, but the problem is the > same, i.e., that there is no periodic timer tick in either Credit2 or > null. > > Regards, > Dario > -- > <> (Raistlin Majere) > - > Dario Faggioli, Ph.D, http://about.me/dario.faggioli > Software Engineer @ SUSE https://www.suse.com/ ___ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel
Re: [Xen-devel] null scheduler bug
Thank you for taking your time to deal with this problem. I did more testing just to be sure, and I also measured the time (using a stopwatch on my phone, which isn't precise at all; I just wanted you to get a feeling for what time intervals we are talking about). Yes, I can confirm that the situation actually improves with Xen 4.10, which is why I'm going to continue to use it. With Xen 4.9.2, after I create a guest and destroy it (note that it is a guest with passthrough which blinks a GPIO PS LED), I can't re-create it again. Never. Not even after 30 seconds, 2 minutes, 5 minutes, etc. These are the testing results with Xen 4.10:

1.) I created a guest, destroyed it and immediately after that tried to create it again (manually, over the keyboard; for that I need maybe half a second or a second to hit the "arrow up" and "enter" buttons twice), and this shows:

root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
root@uz3eg-iocc-2018-2:~# xl destroy bm1
root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
(XEN) IRQ 48 is already used by domain 27
libxl: error: libxl_create.c:1325:domcreate_launch_dm: Domain 28:failed give domain access to irq 48: Device or resource busy
libxl: error: libxl_domain.c:1000:libxl__destroy_domid: Domain 28:Non-existant domain
libxl: error: libxl_domain.c:959:domain_destroy_callback: Domain 28:Unable to destroy guest
libxl: error: libxl_domain.c:886:domain_destroy_cb: Domain 28:Destruction of domain failed

2.) Here I created a guest, destroyed it, and then immediately ran xl create twice, fast. For that I also need about half a second or a second. Note that the guest isn't in any state; it should be in the "running" state because I need that PS LED to blink.
root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
root@uz3eg-iocc-2018-2:~# xl destroy bm1
root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
(XEN) IRQ 48 is already used by domain 32
libxl: error: libxl_create.c:1325:domcreate_launch_dm: Domain 33:failed give domain access to irq 48: Device or resource busy
libxl: error: libxl_domain.c:1000:libxl__destroy_domid: Domain 33:Non-existant domain
libxl: error: libxl_domain.c:959:domain_destroy_callback: Domain 33:Unable to destroy guest
libxl: error: libxl_domain.c:886:domain_destroy_cb: Domain 33:Destruction of domain failed
root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
root@uz3eg-iocc-2018-2:~# xl list
Name          ID   Mem  VCPUs  State  Time(s)
Domain-0      0    768  1      r-     1936.2
bm1           34   8    1      --     0.0

3.) Here I did the same thing as in 2.), except I waited 6-7 seconds after the error popped up and then ran xl create, and the guest worked fine (it is in the "running" state):

root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
root@uz3eg-iocc-2018-2:~# xl destroy bm1
root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
(XEN) IRQ 48 is already used by domain 57
libxl: error: libxl_create.c:1325:domcreate_launch_dm: Domain 58:failed give domain access to irq 48: Device or resource busy
libxl: error: libxl_domain.c:1000:libxl__destroy_domid: Domain 58:Non-existant domain
libxl: error: libxl_domain.c:959:domain_destroy_callback: Domain 58:Unable to destroy guest
libxl: error: libxl_domain.c:886:domain_destroy_cb: Domain 58:Destruction of domain failed
/* waited approximately 6-7 seconds and then ran the command below */
root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
root@uz3eg-iocc-2018-2:~# xl list
Name          ID   Mem  VCPUs  State  Time(s)
Domain-0      0    768  1      r-     3071.5
bm1           59   8    1      r-     8.2

4.)
Here I created a guest, destroyed it, then waited approximately 7 seconds and ran xl create, and everything worked fine:

root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
root@uz3eg-iocc-2018-2:~# xl destroy bm1
/* waited approximately 7 seconds and then ran the command below */
root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
root@uz3eg-iocc-2018-2:~# xl list
Name          ID   Mem  VCPUs  State  Time(s)
Domain-0      0    768  1      r-     3641.1
bm1           70   8    1      r-     7.1

It looks like the guest needs approximately 7 seconds to be fully destroyed and to fully release the IRQ. And yes, if you manage to produce a patch, I will put it in my source tree, build with it, test it and send you back the results. In the attachment I included dmesg and xl dmesg from Xen 4.10. On Thu, Sep 13, 2018 at 7:39 PM Dario Faggioli wrote: > > On Thu, 2018-09-13 at 17:18 +0200, Milan Boberic wro
Re: [Xen-devel] null scheduler bug
I ran some more tests and managed to successfully create and destroy the domU as many times as I want, without any delay between destroy and create. I added:
printk("End of a domain_destroy function");
in the domain_destroy function, and
printk("End of a complete_domain_destroy function");
in the complete_domain_destroy function, at the end of each function. Those functions are in the domain.c file. Now, after every xl create it prints:

root@uz3eg-iocc-2018-2:~# xl create bm1.cfg
Parsing config from bm1.cfg
<3>memory_map:add: dom2 gfn=ff0a0 mfn=ff0a0 nr=1

This line never printed before, but it doesn't affect anything:
<3>memory_map:add: dom2 gfn=ff0a0 mfn=ff0a0 nr=1
When I tried removing the printks from the functions, I got the same problem as before. Do you guys have any idea what is going on here? Thanks in advance, best regards!
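For reference, here is a minimal stand-alone C model of the instrumentation described above. This is NOT actual Xen code: printk() is aliased to printf(), and the two functions only model the call order. In the real tree the two printk() lines go at the end of domain_destroy() and complete_domain_destroy() in xen/common/domain.c, where domain_destroy() defers the final teardown to complete_domain_destroy() via call_rcu(), so the second message can appear much later than the first:

```c
#include <stdio.h>
#include <assert.h>

/* Stand-in for Xen's printk(); everything below is scaffolding so the
 * sketch compiles outside the Xen tree. */
#define printk printf

static int destroy_seen, complete_seen;

/* Models complete_domain_destroy(): the RCU callback that finally
 * frees the domain, possibly long after domain_destroy() returned. */
static void complete_domain_destroy_model(void)
{
    complete_seen = 1;
    printk("End of a complete_domain_destroy function\n");
}

/* Models domain_destroy(): drops the last reference; the real code
 * then queues the callback with call_rcu(&d->rcu, complete_domain_destroy). */
static void domain_destroy_model(void)
{
    destroy_seen = 1;
    printk("End of a domain_destroy function\n");
}
```

The gap between the two messages is exactly the RCU deferral the thread is chasing: if nothing drives the pCPU through Xen, the second printk never shows up.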
Re: [Xen-devel] null scheduler bug
Hey, yes, I can see the printk outputs on the console and in xl dmesg. I also added timestamps; here are the results (I created and destroyed the domU a few times, just to get more values). This is from xl dmesg:

NULL SCHEDULER - not stressed PetaLinux host domain:
(XEN) t=218000327743:End of a domain_destroy function
(XEN) t=218000420874:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom2 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=250216234232:End of a domain_destroy function
(XEN) t=250216326943:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom3 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=842722811648:End of a domain_destroy function
(XEN) t=842722906089:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom4 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=845884879948:End of a domain_destroy function
(XEN) t=845884972539:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom5 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=847696699336:End of a domain_destroy function
(XEN) t=847696793097:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom6 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=919260640576:End of a domain_destroy function
(XEN) t=919260732907:End of a complete_domain_destroy function

Stressed PetaLinux host with the command: yes > /dev/null &
(XEN) t=3247747255872:End of a domain_destroy function
(XEN) t=3247747349863:End of a complete_domain_destroy function
(XEN) t=3253855263752:End of a domain_destroy function
(XEN) t=3253855357563:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom10 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=3256797406444:End of a domain_destroy function
(XEN) t=3256797498584:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom11 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=3259750219352:End of a domain_destroy function
(XEN) t=3259750313903:End of a complete_domain_destroy function
(XEN) <3>memory_map:add: dom12 gfn=ff0a0 mfn=ff0a0 nr=1
(XEN) t=3262418200662:End of a domain_destroy function
(XEN) t=3262418295912:End of a complete_domain_destroy function

CREDIT SCHEDULER - not stressed PetaLinux host:
(XEN) t=86245669606:End of a domain_destroy function
(XEN) t=86245761127:End of a complete_domain_destroy function
(XEN) t=93862614736:End of a domain_destroy function
(XEN) t=93862702657:End of a complete_domain_destroy function
(XEN) t=96298736227:End of a domain_destroy function
(XEN) t=96298826618:End of a complete_domain_destroy function
(XEN) t=98042498304:End of a domain_destroy function
(XEN) t=98042590255:End of a complete_domain_destroy function
(XEN) t=99755209642:End of a domain_destroy function
(XEN) t=99755302643:End of a complete_domain_destroy function
(XEN) t=101441462434:End of a domain_destroy function
(XEN) t=101441553985:End of a complete_domain_destroy function

Stressed PetaLinux host with yes > /dev/null &
(XEN) t=331229997499:End of a domain_destroy function
(XEN) t=331230091770:End of a complete_domain_destroy function
(XEN) t=334493673726:End of a domain_destroy function
(XEN) t=334493765647:End of a complete_domain_destroy function
(XEN) t=336834521515:End of a domain_destroy function
(XEN) t=336834613326:End of a complete_domain_destroy function
(XEN) t=339215230042:End of a domain_destroy function
(XEN) t=339215321153:End of a complete_domain_destroy function

Also, thank you for taking your time to help me, best regards! On Thu, Sep 20, 2018 at 6:09 PM Dario Faggioli wrote: > > Hey, > > Sorry for not having followed up. I was (and still am) planning to, but > am also a bit busy. > > On Thu, 2018-09-20 at 15:04 +0200, Milan Boberic wrote: > > I ran some more tests and managed to successfully create and destroy > > domU as many times as I want, without any delay between destroy and > > create. > > I added: > > printk("End of a domain_destroy function"); > > in domain_destroy function and > > printk("End of a complete_domain_destroy function"); in > > complete_domain_destroy function, at the end of the functions.
> > Those functions are in domain.c file. > > > Right. This is exactly the kind of debug patch I wanted to > suggest/send. It is, in fact, what was being used to diagnose/fix the > RCU issue, when it first came up, as you may have seen. > > > Now, after every xl create it prints: > > > > root@uz3eg-iocc-2018-2:~# xl create bm1.cfg > > Parsing config from bm1.cfg > > <3>memory_map:add: dom2 gfn=ff0a0 mfn=ff0a0 nr=1 > > > Mmm... So, you added a printk() (or, actually, two of them) in the > domain destruction path, and are seeing (different) things being > printed during domain creation? How nice! :-D > > I'm not sure how that happens, and whether/how this new output relates > to the problem. However, what about the printk() you added? Do you see > their outp
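Assuming the t= values in the logs above are nanosecond timestamps (which matches their magnitude: t=218000327743 ns is roughly 218 s of uptime, consistent with Xen's NOW() clock), the destroy-to-complete gap can be read off directly. A trivial helper for the arithmetic, plain C and nothing Xen-specific:

```c
#include <assert.h>

/* Gap between the "End of a domain_destroy" and the matching
 * "End of a complete_domain_destroy" printk, in microseconds,
 * assuming both t= values are nanosecond timestamps. */
static unsigned long long destroy_gap_us(unsigned long long t_destroy_ns,
                                         unsigned long long t_complete_ns)
{
    return (t_complete_ns - t_destroy_ns) / 1000ULL;  /* ns -> us */
}
```

Applied to the pairs above, every gap comes out around 90-95 us, under both the null and Credit schedulers and with or without the `yes` load, i.e. once something forces the pCPU through Xen, the RCU callback completes almost immediately.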
Re: [Xen-devel] null scheduler bug
Hello guys, the mapping on my system is:
dom0 has one vCPU and it is pinned to pCPU0
domU also has one vCPU and it is pinned to pCPU2
I removed only vwfi=native and everything works fine. I can destroy and create a guest as many times as I want without any error (still using sched=null). These are the Xen bootargs in the xen-overlay.dtsi file:
xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=768M bootscrub=0 maxcpus=1 dom0_max_vcpus=1 dom0_vcpus_pin=true timer_slop=0 core_parking=performance cpufreq=xen:performance sched=null";
The whole xen-overlay.dtsi file is included in the attachment. The purpose of my work is implementing Xen on the UltraZed-EG board with maximum performance (and lowest jitter, which is why I'm using the null scheduler and "hard" vCPU pinning); this is my master's thesis at my faculty. By removing vwfi=native I'm not getting the best performance, right? vwfi=native should decrease interrupt latency by ~60%, as I read here: https://blog.xenproject.org/author/stefano-stabellini/ Big thanks to Julien for having a look!
/ {
	chosen {
		#address-cells = <2>;
		#size-cells = <1>;
		xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=768M bootscrub=0 maxcpus=1 dom0_max_vcpus=1 dom0_vcpus_pin=true timer_slop=0 core_parking=performance cpufreq=xen:performance sched=null";
		xen,dom0-bootargs = "console=hvc0 earlycon=xen earlyprintk=xen maxcpus=1 clk_ignore_unused";
		dom0 {
			compatible = "xen,linux-zimage", "xen,multiboot-module";
			reg = <0x0 0x8 0x310>;
		};
	};
};

&smmu {
	status = "okay";
	mmu-masters = < &gem0 0x874
		&gem1 0x875
		&gem2 0x876
		&gem3 0x877
		&dwc3_0 0x860
		&dwc3_1 0x861
		&qspi 0x873
		&lpd_dma_chan1 0x868
		&lpd_dma_chan2 0x869
		&lpd_dma_chan3 0x86a
		&lpd_dma_chan4 0x86b
		&lpd_dma_chan5 0x86c
		&lpd_dma_chan6 0x86d
		&lpd_dma_chan7 0x86e
		&lpd_dma_chan8 0x86f
		&fpd_dma_chan1 0x14e8
		&fpd_dma_chan2 0x14e9
		&fpd_dma_chan3 0x14ea
		&fpd_dma_chan4 0x14eb
		&fpd_dma_chan5 0x14ec
		&fpd_dma_chan6 0x14ed
		&fpd_dma_chan7 0x14ee
		&fpd_dma_chan8 0x14ef
		&sdhci0 0x870
		&sdhci1 0x871
		&nand0 0x872>;
};

&uart1 {
	xen,passthrough = <0x1>;
};

&gpio {
	xen,passthrough = <0x1>;
};
Re: [Xen-devel] null scheduler bug
Reply for Julien: yes, my platform has 4 CPUs; it's an UltraZed-EG board with carrier card. I use only 2 CPUs, one for dom0 which is PetaLinux and one for domU which is a bare-metal application that blinks an LED on the board (I use it to measure jitter with an oscilloscope); the other two CPUs are unused (in an idle loop). About the command: the command is from the xen-overlay.dtsi file which is included in the system-user.dtsi file in my project. The whole file is included in the attachment in my earlier reply. About these options:
maxvcpus=1 core_parking=performance cpufreq=xen:performance
I was just testing them to see whether I get any performance improvement; I will remove them right away. Best regards, Milan Boberic! On Tue, Sep 25, 2018 at 1:15 PM Julien Grall wrote: > > Hi Dario, > > On 09/25/2018 10:02 AM, Dario Faggioli wrote: > > On Mon, 2018-09-24 at 22:46 +0100, Julien Grall wrote: > >> On 09/21/2018 05:20 PM, Dario Faggioli wrote: > >>> > >>> What I'm after is how long, after domain_destroy(), > >>> complete_domain_destroy() is called, and whether/how it relates to the > >>> grace period idle timer we've added in the RCU code. > >> > >> NULL scheduler and vwfi=native will inevitably introduce a latency > >> when > >> destroying a domain. vwfi=native means the guest will not trap when > >> it > >> has nothing to do and switch to the idle vCPU. So, in such > >> configuration, it is extremely unlikely to execute the idle_loop or > >> even enter the hypervisor unless there is an interrupt on that > >> pCPU. > >> > > Ah! I'm not familiar with wfi=native --and in fact I was completely > > ignoring it-- but this analysis makes sense to me. > > > >> Per my understanding of call_rcu, the calls will be queued until the > >> RCU > >> reaches a threshold. We don't have many places where call_rcu is > >> called, > >> so reaching the threshold may just never happen. But nothing will > >> tell > >> that vCPU to go in Xen and say "I am done with RCU". Did I miss > >> anything?
> >> > > Yeah, and in fact we added the timer _but_, in this case, it does not > > look like the timer is firing. It looks much more like "some random > > interrupt happens", as you're suggesting. OTOH, in the case where there > > are no printk()s, it might be that the timer does fire, but the vcpu > > has not gone through Xen, so the grace period is, as far as we know, > > not expired yet (which is also in accordance with Julien's analysis, as > > far as I understood it). > > The timer is only activated when sched_tick_suspend() is called. With > vwfi=native, you will never reach the idle_loop() and therefore never > set up a timer. > > Milan confirmed that the guest can be destroyed with vwfi=native removed. So > this is confirming my thinking. Trapping wfi will end up switching to the > idle vCPU and triggering the grace period. > > I am not entirely sure you will be able to reproduce it on x86, but I > don't think it is Xen Arm specific. > > When I looked at the code, I didn't see any grace period in any context > other than idle_loop. Rather than adding another grace period, I would just > force quiescence for every call_rcu. > > This should not have a big performance impact, as we don't use call_rcu much, > and it would allow domains to be fully destroyed in a timely manner. > > Cheers, > > -- > Julien Grall
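Julien's point about call_rcu() only queueing the callback can be illustrated with a toy model. To be clear, this is NOT Xen's rcupdate.c: the QHIMARK value and the `force` flag here are hypothetical stand-ins for the real queue threshold and for the "force quiescence on every call_rcu" behaviour being proposed. With few call_rcu() users, the threshold is never reached and the callback (complete_domain_destroy() in this thread) just sits in the queue until something else drives the CPU through Xen:

```c
#include <assert.h>

#define QHIMARK 10  /* hypothetical queue threshold; Xen's default differs */

typedef void (*rcu_cb_t)(void *);

struct toy_rcu {
    rcu_cb_t cb[QHIMARK];   /* queued callbacks */
    void *arg[QHIMARK];
    int qlen;               /* current queue length */
    int force;              /* model of "force quiescence on every call" */
};

/* Run and drop all queued callbacks (models the grace period ending). */
static void toy_flush(struct toy_rcu *r)
{
    for (int i = 0; i < r->qlen; i++)
        r->cb[i](r->arg[i]);
    r->qlen = 0;
}

/* Models call_rcu(): the callback is only queued; nothing runs until
 * the queue crosses the threshold or quiescence is forced. */
static void toy_call_rcu(struct toy_rcu *r, rcu_cb_t cb, void *arg)
{
    r->cb[r->qlen] = cb;
    r->arg[r->qlen] = arg;
    r->qlen++;
    if (r->force || r->qlen >= QHIMARK)
        toy_flush(r);
}

/* Stand-in for complete_domain_destroy(): just counts invocations. */
static int toy_fired;
static void toy_complete_domain_destroy(void *arg) { (void)arg; toy_fired++; }
```

With `force` off, one queued callback never runs on its own; with `force` on (the equivalent of the proposal above), every call_rcu() drains the queue immediately, which is why the domain then tears down promptly.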
Re: [Xen-devel] null scheduler bug
Hi, I applied the patch along with vwfi=native, and everything works fine: I can create and destroy the guest domain as many times as I want. I have to ask, will this patch have any impact on performance (I will test it later, but I just need your opinions)? And what does this patch exactly do? I need to fully understand it, because I need to document it in my master's thesis, which will be finished soon thanks to you people :D Thank you for taking your time to make this patch, best regards! On Thu, Sep 27, 2018 at 11:51 AM Dario Faggioli wrote: > > On Tue, 2018-09-25 at 19:49 +0200, Dario Faggioli wrote: > > [Adding a few people to the Cc-list. See below...] > > On Tue, 2018-09-25 at 12:15 +0100, Julien Grall wrote: > > > On 09/25/2018 10:02 AM, Dario Faggioli wrote: > > > > On Mon, 2018-09-24 at 22:46 +0100, Julien Grall wrote: > > > > > > > My knowledge of RCU themselves would need refreshing, though. I > > managed > > to become reasonably familiar with how the implementation we > > imported works back then, when working on the said issue, but I guess > > I > > better go check the code again. > > > > I'm Cc-ing the people that have reviewed the patches and helped with > > the idle timer problem, in case anyone has bright ideas off the top > > of his head. > > > > Perhaps we should "just" get away from using RCU for domain > > destruction > > (but I'm just tossing the idea around, without much consideration > > about > > whether it's the right solution, or about how hard/feasible it really > > is). > > > > Or maybe we can still use the timer, in some special way, if we have > > wfi=native (or equivalent)... > > > So, I've had a look (but only a quick one). > > If we want to do something specific within the domain destruction path, > we can add an rcu_barrier() there (I mean in domain_destroy()). > However, that does not feel right either.
Also, how can we be sure that > the CPU never going through idle (as far as Xen knows, at least) isn't > going to be a problem for other RCU calls as well? > > Another thing that we can do is to act on the parameters that control > the threshold which decides when a quiescent state is forced. This was > basically what Julien was suggesting, but I would still avoid doing > that always. > > So, basically, in this hackish patch attached, I added a new boot > command line argument, called rcu_force_quiesc. If set to true, > thresholds are set so that quiescence is always forced at each > invocation of call_rcu(). And even if the new param is not explicitly > specified, I do tweak the threshold when "wfi=native" is. > > Milan, can you apply this patch, add "wfi=native" again, and re-test? > If it works, we'll decide what to do next. > > E.g., we can expose the RCU threshold via the appropriate set of boot > time parameters --like Linux, from where this code comes, did/does-- > and document how they should be set, if one wants to use "wfi=native". > > Thanks and Regards, > Dario > -- > <> (Raistlin Majere) > - > Dario Faggioli, Ph.D, http://about.me/dario.faggioli > Software Engineer @ SUSE https://www.suse.com/
Re: [Xen-devel] null scheduler bug
Hi, thank you for the explanation, links and advice. I'm gonna go through all that literature. Best regards! On Thu, Sep 27, 2018 at 7:06 PM Dario Faggioli wrote: > > On Thu, 2018-09-27 at 16:09 +0100, Julien Grall wrote: > > Hi Dario, > > > Hi, > > > On 09/27/2018 03:32 PM, Dario Faggioli wrote: > > > On Thu, 2018-09-27 at 15:15 +0200, Milan Boberic wrote: > > > > > > In one of your e-mails, you wrote: > > > > "Well, our implementation of RCU requires that, from time to time, > > the > > various physical CPUs of your box become idle, or get an interrupt, > > or > > go executing inside Xen (for hypercalls, vmexits, etc). In fact, a > > CPU > > going through Xen is what allows us to tell that it reached a so- > > called > > 'quiescent state', which in turn is necessary for declaring a so- > > called 'RCU grace period' over." > > > > I don't quite agree with you on the definition of "quiescent state" > > here. > > > Hehe... I was trying to be both quick and accurate. It's more than > possible that I failed. :-) > > > To take the domain example, we want to wait until all the CPUs have > > stopped using the pointer (an hypercall could race put_domain). > > > I'm not sure what you mean with "an hypercall could race put_domain". > What we want is to wait until all the CPUs that are involved in the > grace period have gone through rcupdate.c:cpu_quiet(), or have become > idle. > > Receiving an interrupt, or experiencing a context switch, or even going > idle, is "just" how it happens that these CPUs get their chance to > go through cpu_quiet(). It is in this sense that I meant that those > events are used as markers of a quiescent state. > > And "wfi=native" (in particular in combination with the null scheduler, > but I guess also with other ones, at least to a certain extent) makes > figuring out the "or have become idle" part tricky. That is the problem > here, isn't it?
> > > That > > pointer will not be in-use if the CPU is in kernel-mode/user-mode or > > in > > the idle loop. Am I correct? > > > Right. > > So, we want that all the CPUs that were in Xen to have either left Xen > at least once or, if they're still there and have never left, that must > be because they've become idle. > > And currently we treat all the CPUs that have not told the RCU > subsystem that they're idle (via rcu_idle_enter()) as busy, without > distinguishing between the ones that are busy in Xen from the one which > are busy in guest (kernel or user) mode. > > > So I am wondering whether we could: > > - Mark any CPU in kernel-mode/user-mode quiet > > > Right. We'd need something like a rcu_guest_enter()/rcu_guest_exit() > (or a rcu_xen_exit()/rcu_xen_enter()), which works for all combination > of arches and guest types. > > It looks to me too that this would help in this case, as the vCPU that > stays in guest mode because of wfi=idle would be counted as quiet, and > we won't have to wait for it. > > > - Raise a RCU_SOFTIRQ in call_rcu? > > > Mmm... what would be the point of this? > > > With that solution, it may even be possible to avoid the timer in > > the > > idle loop. > > > Not sure. The timer is there to deal with the case when a CPU which has > a callback queued wants to go idle. It may have quiesced already, but > if there are others which have not, either: > 1) we let it go idle, but then the callback will run only when it >wakes up from idle which, without the timer, could be far ahead in >time; > 2) we don't let it go idle, but we waste resources; > 3) we let it go idle and keep the timer. :-) > > But anyway, even if it would not let us get rid of the timer, it seems > like it could be nicer than any other approaches. I accept > help/suggestions about the "let's intercept guest-Xen and Xen-guest > transitions, and track that inside RCU code. 
> > Regards, > Dario > -- > <> (Raistlin Majere) > - > Dario Faggioli, Ph.D, http://about.me/dario.faggioli > Software Engineer @ SUSE https://www.suse.com/
[Xen-devel] Xen optimization
Hi,
I'm testing Xen Hypervisor 4.10 performance on an UltraZed-EG board with carrier card. I created a bare-metal application in Xilinx SDK. In the bm application I:
- start the triple timer counter (ttc), which generates an interrupt every 1 us
- turn on a PS LED
- call a function 100 times in a for loop (a function that sets some values)
- turn off the LED
- stop the triple timer counter
- reset the counter value
I ran this bare-metal application under Xen Hypervisor with the following settings:
- used the null scheduler (sched=null) and vwfi=native
- the bare-metal application has one vCPU and it is pinned to pCPU1
- the domain which is PetaLinux also has one vCPU, pinned to pCPU0; the other pCPUs are unused.
Under Xen Hypervisor I can see 3 us of jitter on the oscilloscope. When I run the same bm application with JTAG from Xilinx SDK (without Xen Hypervisor, directly on the board) there is no jitter. I'm curious what causes this 3 us jitter in Xen (which isn't small jitter at all), and is there any way of decreasing it? Also, I would gladly accept any suggestion about increasing performance, decreasing jitter, decreasing interrupt latency, etc.
Thanks in advance, Milan Boberic.
Re: [Xen-devel] Xen optimization
Hi,
sorry, my explanation wasn't precise and I missed the point. The vCPU pinning with sched=null I put in "just in case", because it doesn't hurt. Yes, the PetaLinux domain is dom0. I tested with the Credit scheduler before (it was just the LED-blink application, but anyway); it results in bigger jitter than the null scheduler. For example, with the Credit scheduler LED blinking results in approximately 3 us of jitter, whereas with the null scheduler there is no jitter. vwfi=native was giving the domain-destruction problem which you fixed by sending me a patch, approximately 2 weeks ago if you recall :) but I still didn't test its impact on performance; I will do it ASAP and share the results (I think that without vwfi=native the jitter will be the same or even bigger). When I say "without Xen", yes, I mean without any OS: just hardware and this bare-metal app. I do expect latency to be higher in the Xen case and I'm curious how much exactly (which is the point of my work and also my master's thesis :D). Now, the point is that when I set only LED blinking (without the timer) in my application there is no jitter (in the Xen case), but when I add the timer, which generates an interrupt every microsecond, a jitter of 3 us occurs. The timer I use is the Zynq UltraScale's triple timer counter. I suspect that the timer interrupt is creating that jitter. For interrupts I use passthrough in the bare-metal application's configuration file (which works for the GPIO LED, because there is no jitter; the interrupt can "freely go" from the guest domain directly to the GPIO LED). Also, when I create the guest domain (which is this bare-metal application) I get these messages:
(XEN) printk: 54 messages suppressed.
(XEN) d2v0 No valid vCPU found for vIRQ32 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ33 in the target list (0x2). Skip it
root@uz3eg-iocc-2018-2:~# (XEN) d2v0 No valid vCPU found for vIRQ34 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ35 in the target list (0x2).
Skip it
(XEN) d2v0 No valid vCPU found for vIRQ36 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ37 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ38 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ39 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ40 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ41 in the target list (0x2). Skip it
In attachments I included dmesg, xl dmesg and the bare-metal application's configuration file.
Thanks in advance, Milan Boberic.

On Tue, Oct 9, 2018 at 6:46 PM Dario Faggioli wrote:
>
> On Tue, 2018-10-09 at 12:59 +0200, Milan Boberic wrote:
> > Hi,
> >
> Hi Milan,
>
> > I'm testing Xen Hypervisor 4.10 performance on UltraZed-EG board with
> > carrier card.
> > I created bare-metal application in Xilinx SDK.
> > In bm application I:
> > - start triple timer counter (ttc) which generates
> > interrupt every 1us
> > - turn on PS LED
> > - call function 100 times in for loop (function that sets
> > some values)
> > - turn off LED
> > - stop triple timer counter
> > - reset counter value
> >
> Ok, I'm adding Stefano, Julien, and a couple of other people interested
> in RT/lowlat on Xen.
>
> > I ran this bare-metal application under Xen Hypervisor with following
> > settings:
> > - used null scheduler (sched=null) and vwfi=native
> > - bare-metal application have one vCPU and it is pinned for pCPU1
> > - domain which is PetaLinux also have one vCPU pinned for pCPU0,
> > other pCPUs are unused.
> > Under Xen Hypervisor I can see 3us jitter on oscilloscope.
> >
> So, this is probably me not being familiar with Xen on Xilinx (and with
> Xen on ARM as a whole), but there's a few things I'm not sure I
> understand:
> - you say you use sched=null _and_ pinning? That should not be
>   necessary (although, it shouldn't hurt either)
> - "domain which is PetaLinux", is that dom0?
> IAC, if it's not terribly hard to run this kind of test, I'd say, try
> without 'vwfi=native', and also with another scheduler, like Credit
> (but then do make sure you use pinning).
>
> > When I ran same bm application with JTAG from Xilinx SDK (without Xen
> > Hypervisor, directly on the board) there is no jitter.
> >
> Here, when you say "without Xen", do you also mean without any
> baremetal OS at all?
>
> > I'm curios what causes this 3us jitter in Xen (which isn't small
> > jitter at all) and is there any way of decreasing it?
> >
> Right. So, I'm not sure I've understood the te
Re: [Xen-devel] Xen optimization
Attachments.

name = "test"
kernel = "timer.bin"
memory = 8
vcpus = 1
cpus = [1]
irqs = [ 48, 54, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79 ]
iomem = [ "0xff010,1", "0xff110,1", "0xff120,1", "0xff130,1", "0xff140,1", "0xff0a0,1" ]

[0.00] Booting Linux on physical CPU 0x0 [0.00] Linux version 4.14.0-xilinx-v2018.2 (oe-user@oe-host) (gcc version 7.2.0 (GCC)) #1 SMP Mon Oct 1 16:41:32 CEST 2018 [0.00] Boot CPU: AArch64 Processor [410fd034] [0.00] Machine model: xlnx,zynqmp [0.00] Xen 4.10 support found [0.00] efi: Getting EFI parameters from FDT: [0.00] efi: UEFI not found. [0.00] cma: Reserved 256 MiB at 0x6000 [0.00] On node 0 totalpages: 196608 [0.00] DMA zone: 2688 pages used for memmap [0.00] DMA zone: 0 pages reserved [0.00] DMA zone: 196608 pages, LIFO batch:31 [0.00] psci: probing for conduit method from DT. [0.00] psci: PSCIv1.1 detected in firmware. [0.00] psci: Using standard PSCI v0.2 function IDs [0.00] psci: Trusted OS migration not required [0.00] random: fast init done [0.00] percpu: Embedded 21 pages/cpu @ffc03ffb7000 s46488 r8192 d31336 u86016 [0.00] pcpu-alloc: s46488 r8192 d31336 u86016 alloc=21*4096 [0.00] pcpu-alloc: [0] 0 [0.00] Detected VIPT I-cache on CPU0 [0.00] CPU features: enabling workaround for ARM erratum 845719 [0.00] Built 1 zonelists, mobility grouping on. 
Total pages: 193920 [0.00] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen maxcpus=1 clk_ignore_unused [0.00] PID hash table entries: 4096 (order: 3, 32768 bytes) [0.00] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes) [0.00] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes) [0.00] Memory: 423788K/786432K available (9980K kernel code, 644K rwdata, 3132K rodata, 512K init, 2168K bss, 100500K reserved, 262144K cma-reserved) [0.00] Virtual kernel memory layout: [0.00] modules : 0xff80 - 0xff800800 ( 128 MB) [0.00] vmalloc : 0xff800800 - 0xffbebfff ( 250 GB) [0.00] .text : 0xff800808 - 0xff8008a4 ( 9984 KB) [0.00] .rodata : 0xff8008a4 - 0xff8008d6 ( 3200 KB) [0.00] .init : 0xff8008d6 - 0xff8008de ( 512 KB) [0.00] .data : 0xff8008de - 0xff8008e81200 ( 645 KB) [0.00].bss : 0xff8008e81200 - 0xff800909f2b0 ( 2169 KB) [0.00] fixed : 0xffbefe7fd000 - 0xffbefec0 ( 4108 KB) [0.00] PCI I/O : 0xffbefee0 - 0xffbeffe0 (16 MB) [0.00] vmemmap : 0xffbf - 0xffc0 ( 4 GB maximum) [0.00] 0xffbf0070 - 0xffbf0188 (17 MB actual) [0.00] memory : 0xffc02000 - 0xffc07000 ( 1280 MB) [0.00] Hierarchical RCU implementation. [0.00] RCU event tracing is enabled. [0.00] RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=1. [0.00] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1 [0.00] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 [0.00] arch_timer: cp15 timer(s) running at 99.99MHz (virt). [0.00] clocksource: arch_sys_counter: mask: 0xff max_cycles: 0x171015c90f, max_idle_ns: 440795203080 ns [0.03] sched_clock: 56 bits at 99MHz, resolution 10ns, wraps every 4398046511101ns [0.000287] Console: colour dummy device 80x25 [0.283041] console [hvc0] enabled [0.286513] Calibrating delay loop (skipped), value calculated using timer frequency.. 
199.99 BogoMIPS (lpj=36) [0.296969] pid_max: default: 32768 minimum: 301 [0.301730] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes) [0.308393] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes) [0.316319] ASID allocator initialised with 65536 entries [0.321502] xen:grant_table: Grant tables using version 1 layout [0.327092] Grant table initialized [0.330637] xen:events: Using FIFO-based ABI [0.334961] Xen: initializing cpu0 [0.338469] Hierarchical SRCU implementation. [0.343130] EFI services will not be available. [0.347423] zynqmp_plat_init Platform Management API v1.0 [0.352852] zynqmp_plat_init Trustzone version v1.0 [0.357828] smp: Bringing up secondary CPUs ... [0.362366] smp: Brought up 1 node, 1 CPU [0.366430] SMP: Total of 1 processors activated. [0.371189] CPU features: detected feature: 32-bit EL0 Support [0.377073] CPU: All CPU(s) started at EL1 [0.381231] alternatives: patching kernel code [0.386133] devtmpfs: initialized [0.392766] clocksource: jiffies: mask: 0x max_cycles: 0xf
Re: [Xen-devel] Xen optimization
On Wed, Oct 10, 2018 at 6:41 PM Meng Xu wrote:
>
> The jitter may come from Xen or the OS in dom0.
> It will be useful to know what is the jitter if you run the test on PetaLinux.
> (It's understandable the jitter is gone without OS. It is also common
> that OS introduces various interferences.)

Hi Meng,
well... I'm using a bare-metal application and I need it exclusively to be run on one CPU as domU (guest) without an OS (and I'm not sure how I would make the same app run on PetaLinux dom0 :D haha). Is there a chance that PetaLinux as dom0 is creating this jitter, and how? Is there a way of decreasing it?
Yes, there are no prints.
I'm not sure about this timer interrupt passthrough because I didn't find any example of it; in the attachment I included the xen-overlay.dtsi file which I edited to add passthrough, and in earlier replies there is the bare-metal configuration file. It would be helpful to know if those settings are correct. If they are not correct, it would explain the jitter.
Thanks in advance, Milan Boberic!
/ {
	chosen {
		#address-cells = <2>;
		#size-cells = <1>;
		xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=768M bootscrub=0 dom0_max_vcpus=1 dom0_vcpus_pin=true timer_slop=0 sched=null vwfi=native";
		xen,dom0-bootargs = "console=hvc0 earlycon=xen earlyprintk=xen maxcpus=1 clk_ignore_unused";
		dom0 {
			compatible = "xen,linux-zimage", "xen,multiboot-module";
			reg = <0x0 0x8 0x310>;
		};
	};
};

&smmu {
	status = "okay";
	mmu-masters = < &gem0 0x874
		&gem1 0x875
		&gem2 0x876
		&gem3 0x877
		&dwc3_0 0x860
		&dwc3_1 0x861
		&qspi 0x873
		&lpd_dma_chan1 0x868
		&lpd_dma_chan2 0x869
		&lpd_dma_chan3 0x86a
		&lpd_dma_chan4 0x86b
		&lpd_dma_chan5 0x86c
		&lpd_dma_chan6 0x86d
		&lpd_dma_chan7 0x86e
		&lpd_dma_chan8 0x86f
		&fpd_dma_chan1 0x14e8
		&fpd_dma_chan2 0x14e9
		&fpd_dma_chan3 0x14ea
		&fpd_dma_chan4 0x14eb
		&fpd_dma_chan5 0x14ec
		&fpd_dma_chan6 0x14ed
		&fpd_dma_chan7 0x14ee
		&fpd_dma_chan8 0x14ef
		&sdhci0 0x870
		&sdhci1 0x871
		&nand0 0x872>;
};

&uart1 {
	xen,passthrough = <0x1>;
};

&gpio {
	xen,passthrough = <0x1>;
};

&ttc0 {
	xen,passthrough = <0x1>;
};

&ttc1 {
	xen,passthrough = <0x1>;
};

&ttc2 {
	xen,passthrough = <0x1>;
};

&ttc3 {
	xen,passthrough = <0x1>;
};
Re: [Xen-devel] Xen optimization
I misunderstood the passthrough concept; it only allows a guest domain to use certain interrupts and memory. Is there a way to somehow route an interrupt from domU (bare-metal app) to hw?

On Thu, Oct 11, 2018 at 9:36 AM Milan Boberic wrote:
>
> On Wed, Oct 10, 2018 at 6:41 PM Meng Xu wrote:
> >
> > The jitter may come from Xen or the OS in dom0.
> > It will be useful to know what is the jitter if you run the test on
> > PetaLinux.
> > (It's understandable the jitter is gone without OS. It is also common
> > that OS introduces various interferences.)
>
> Hi Meng,
> well... I'm using bare-metal application and I need it exclusively to
> be ran on one CPU as domU (guest) without OS (and I'm not sure how
> would I make the same app to be ran on PetaLinux dom0 :D haha).
> Is there a chance that PetaLinux as dom0 is creating this jitter and
> how? Is there a way of decreasing it?
>
> Yes, there are no prints.
>
> I'm not sure about this timer interrupt passthrough because I didn't
> find any example of it, in attachment I included xen-overlay.dtsi file
> which I edited to add passthrough, in earlier replies there are
> bare-metal configuration file. It would be helpful to know if those
> setting are correct. If they are not correct it would explain the
> jitter.
>
> Thanks in advance, Milan Boberic!
Re: [Xen-devel] Xen optimization
Hi Stefano,
glad to have you back :D, this is my setup:
- dom0 is PetaLinux, has 1 vCPU and it's pinned to pCPU0
- there is only one domU, and that is my bare-metal app, which also has one vCPU and it's pinned to pCPU1
so yeah, there are only dom0 and the bare-metal app on the board. The jitter is the same with and without Dario's patch. I'm still not sure about the timer's passthrough because there is no mention of the triple timer counter in the device tree, so I added:
&ttc0 {
	xen,passthrough = <0x1>;
};
at the end of the xen-overlay.dtsi file, which I included in the attachment. About the patch you sent: I can't find the function vgic_inject_irq in the /xen/arch/arm/vgic.c file. This is the link of the git repository from which I build my Xen, so you can take a look at whether that printk can be put somewhere else: https://github.com/Xilinx/xen/
I ran some more testing and realized that the results are the same with or without vwfi=native, which I think again points out that the passthrough I need to provide in the device tree isn't valid. And of course, a higher frequency of interrupts results in higher jitter. I'm still battling with Xilinx SDK and the triple timer counter, that's why I can't figure out what the exact frequency set is (I'm just raising it and lowering it); I'll give my best to solve that ASAP because we need to know the exact value of the frequency set.
Thanks in advance!
Milan

On Fri, Oct 12, 2018 at 12:29 AM Stefano Stabellini <stefano.stabell...@xilinx.com> wrote:
> On Thu, 11 Oct 2018, Milan Boberic wrote:
> > On Wed, Oct 10, 2018 at 6:41 PM Meng Xu wrote:
> > >
> > > The jitter may come from Xen or the OS in dom0.
> > > It will be useful to know what is the jitter if you run the test on PetaLinux.
> > > (It's understandable the jitter is gone without OS. It is also common
> > > that OS introduces various interferences.)
> >
> > Hi Meng,
> > well...
> > I'm using bare-metal application and I need it exclusively to
> > be ran on one CPU as domU (guest) without OS (and I'm not sure how
> > would I make the same app to be ran on PetaLinux dom0 :D haha).
> > Is there a chance that PetaLinux as dom0 is creating this jitter and
> > how? Is there a way of decreasing it?
> >
> > Yes, there are no prints.
> >
> > I'm not sure about this timer interrupt passthrough because I didn't
> > find any example of it, in attachment I included xen-overlay.dtsi file
> > which I edited to add passthrough, in earlier replies there are
> > bare-metal configuration file. It would be helpful to know if those
> > setting are correct. If they are not correct it would explain the
> > jitter.
> >
> > Thanks in advance, Milan Boberic!
>
> Hi Milan,
>
> Sorry for taking so long to go back to this thread. But I am here now :)
>
> First, let me ask a couple of questions to understand the scenario
> better: is there any interference from other virtual machines while you
> measure the jitter? Or is the baremetal app the only thing actively
> running on the board?
>
> Second, it would be worth double-checking that Dario's patch to fix
> sched=null is not having unexpected side effects. I don't think so, but it
> would be worth testing with it and without it to be sure.
>
> I gave a look at your VM configuration. The configuration looks correct.
> There is no dtdev settings, but given that none of the devices you are
> assigning to the guest does any DMA, it should be OK. You want to make
> sure that Dom0 is not trying to use those same devices -- make sure to
> add "xen,passthrough;" to each corresponding node on the host device
> tree.
>
> The error messages "No valid vCPU found" are due to the baremetal
> application trying to configure as target cpu for the interrupt cpu1
> (the second cpu in the system), while actually only 1 vcpu is assigned
> to the VM. Hence, only cpu0 is allowed.
> I don't think it should cause
> any jitter issues, because the request is simply ignored. Just to be
> safe, you might want to double check that the physical interrupt is
> delivered to the right physical cpu, which would be cpu1 in your
> configuration, the one running the only vcpu of the baremetal app. You
> can do that by adding a printk to xen/arch/arm/vgic.c:vgic_inject_irq,
> for example:
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 5a4f082..208fde7 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -591,6 +591,7 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v,
> unsigned int virq,
>  out:
>      spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>
> +    if (v != current) printk
Re: [Xen-devel] Xen optimization
Hi,

> Don't interrupts _come_ from hardware and go/are routed to
> hypervisor/os/app?
Yes they do, sorry, I reversed the order because I'm a newbie :) .

> Would you mind to explain what is the triple timer counter?
On this link, on page 342, is the explanation.

> This is not the official Xen repository and it looks like patches have been
> applied on top. I am afraid I am not going to be able to help here. Could you
> do the same experiment with Xen 4.11?
I think I have to get Xen from Xilinx because I use a board that has Zynq UltraScale. Stefano sent a branch with Xen 4.11, so I built with it.

> This could also mean that wfi is not used by the guest or you never go to
> the idle vCPU.
Right.

> This is definitely wrong. Can you please also post the full host device
> tree with your modifications that you are using for Xen and Dom0? You
> should have something like:
>
> timer@ff11 {
> 	compatible = "cdns,ttc";
> 	interrupt-parent = <0x2>;
> 	interrupts = <0x0 0x24 0x4 0x0 0x25 0x4 0x0 0x26 0x4>;
> 	reg = <0x0 0xff11 0x0 0x1000>;
> 	timer-width = <0x20>;
> 	power-domains = <0x3b>;
> 	xen,passthrough;
> };
>
> For each of the nodes of the devices you are assigning to the DomU.

I put
&ttc0 {
	xen,passthrough = <0x1>;
};
because when I was making the bm app I was following this guide. Now I see it's wrong. When I copied directly:
timer@ff11 {
	compatible = "cdns,ttc";
	interrupt-parent = <0x2>;
	interrupts = <0x0 0x24 0x4 0x0 0x25 0x4 0x0 0x26 0x4>;
	reg = <0x0 0xff11 0x0 0x1000>;
	timer-width = <0x20>;
	power-domains = <0x3b>;
	xen,passthrough;
};
into the xen-overlay.dtsi file, it resulted in an error during the device-tree build. I modified it a little bit so I could get a successful build; all device-tree files are included in the attachment. I'm not sure how to set this passthrough properly; if you could take a look at those files in the attachment I'd be more than grateful.

> It's here:
> https://github.com/Xilinx/xen/blob/xilinx/stable-4.9/xen/arch/arm/vgic.c#L462
Oh, about that.
I sent you the wrong branch; I was using Xen 4.10. Anyway, now I moved to Xen 4.11 like you suggested and applied your patch and Dario's also. Okay, now when I want to xl create my domU (bare-metal app) I get this error:

Parsing config from timer.cfg
(XEN) IRQ 68 is already used by domain 0
libxl: error: libxl_create.c:1354:domcreate_launch_dm: Domain 1:failed give domain access to irq 68: Device or resource busy
libxl: error: libxl_domain.c:1034:libxl__destroy_domid: Domain 1:Non-existant domain
libxl: error: libxl_domain.c:993:domain_destroy_callback: Domain 1:Unable to destroy guest
libxl: error: libxl_domain.c:920:domain_destroy_cb: Domain 1:Destruction of domain failed

I guess my modifications of:
timer@ff11 {
	compatible = "cdns,ttc";
	interrupt-parent = <0x2>;
	interrupts = <0x0 0x24 0x4 0x0 0x25 0x4 0x0 0x26 0x4>;
	reg = <0x0 0xff11 0x0 0x1000>;
	timer-width = <0x20>;
	power-domains = <0x3b>;
	xen,passthrough;
};
are not correct. I tried to change the interrupts to:
interrupts = <0x0 0x44 0x4 0x0 0x45 0x4 0x0 0x46 0x4>;
because if you check here, on page 310, the interrupts for TTC0 are 68:70. But that didn't work either; I still get the same error. I also tried to replace the line
xen,passthrough;
with:
xen,passthrough = <0x1>;
but also without success, still the same error. Are you sure about this line:
reg = <0x0 0xff11 0x0 0x1000>;
? Or should it be like this?
reg = <0x0 0xff11 0x1000>;
I also included xl dmesg and dmesg in the attachments (after xl create of the bm app). Thanks in advance!
Milan

FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://system-user.dtsi"
SRC_URI += "file://xen-overlay.dtsi"

(XEN) Checking for initrd in /chosen (XEN) Initrd 02bd7000-05fffd97 (XEN) RAM: - 7fef (XEN) (XEN) MODULE[0]: 07ff4000 - 07ffc080 Device Tree (XEN) MODULE[1]: 02bd7000 - 05fffd97 Ramdisk (XEN) MODULE[2]: 0008 - 0318 Kernel (XEN) RESVD[0]: 07ff4000 - 07ffc000 (XEN) RESVD[1]: 02bd7000 - 05fffd97 (XEN) (XEN) Command line: console=dtuart dtuart=serial0 dom0_mem=768M bootscrub=0 dom0_max_vcpus=1 dom0_vcpus_pin=true timer_slop=0 sched=null vwfi=native (XEN) Placing Xen at 0x7fc0-0x7fe0 (XEN) Update BOOTMOD_XEN from 0600-06108d81 => 7fc0-7fd08d81 (XEN) Domain heap initialised (XEN) Booting using Device Tree (XEN) Looking for dtuart at "serial0", options "" Xen 4.11.1-pre (XEN) Xen version 4.11.1-pre (milan@) (aarch64-xilinx-linux-gcc (GCC) 7.2.0) debug=n Sat Oct 13 16:34:51 CEST 2018 (XEN) Latest ChangeSet: Mon Sep 24 16:07:33 2018 -0700 git:8610a91abc-dirty (XEN) Proce
Re: [Xen-devel] Xen optimization
> On 15/10/2018 09:14, Julien Grall wrote:
> Which link?
I made a hyperlink on the word "link"; it looks like it somehow got lost. Here is the link: https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf

> The board should be fully supported upstream. If Xilinx has more patches
> on top, then you would need to seek support from them because I don't
> know what they changed in Xen.
I think Stefano can help, thanks for the suggestion.
Cheers,
Milan
Re: [Xen-devel] Xen optimization
Hi,

> The device tree with everything seems to be system.dts, that was enough
> :-) I don't need the dtsi files you used to build the final dts, I only
> need the one you use in uboot and for your guest.
I wasn't sure, so I sent everything; sorry for bombarding you with all those files. :-)

> It looks like you set xen,passthrough correctly in system.dts for
> timer@ff11, serial@ff01, and gpio@ff0a.
Thank you for taking a look. Now we are sure that the passthrough works correctly, because there is no error during guest creation and there are no prints of "DEBUG irq slow path".

> If you are not getting any errors anymore when creating your baremetal
> guest, then yes, it should be working passthrough. I would double-check
> that everything is working as expected using the DEBUG patch for Xen I
> suggested to you in the other email. You might even want to remove the
> "if" check and always print something for every interrupt of your guest
> just to get an idea of what's going on. See the attached patch.
When I apply this patch it prints forever:
(XEN) DEBUG virq=68 local=1
which is a good thing, I guess, because interrupts are being generated non-stop.

> Once everything is as expected I would change the frequency of the
> timer, because 1us is way too frequent. I think it should be at least
> 3us, more like 5us.
Okay, about this... I double-checked my bare-metal application, and it looks like interrupts weren't generated every 1 us. The maximum frequency of interrupts is every 8 us. I checked the interrupt frequency with an oscilloscope just to be sure (toggling the LED on/off when interrupts occur).
So, when I set:
- interrupts to be generated every 8 us, I get a jitter of 6 us
- interrupts to be generated every 10 us, I get a jitter of 3 us (after 2-3 mins it jumps to 6 us)
- interrupts to be generated every 15 us, the jitter is the same as when only the bare-metal application runs on the board (without Xen or any OS)
I want to remind you that the bare-metal application that only blinks the LED at high speed gives 1 us of jitter; somehow, introducing frequent interrupts causes this jitter, which is why I was unsure about this timer passthrough. Taking into consideration that you measured a Xen overhead of 1 us, I have a feeling that I'm missing something. Is there anything else I could do to get better results, besides sched=null, vwfi=native, hard vCPU pinning (1 vCPU on 1 pCPU) and passthrough (not sure if it affects the jitter)?
I'm forcing frequent interrupts because I'm testing to see if this board with Xen on it could be used for real-time simulations, real-time signal processing, etc. If I could get results like yours (1 us Xen overhead) or even better, that would be great! BTW, how did you measure Xen's overhead?

> Keep in mind that jitter is about having
> deterministic IRQ latency, not about having extremely frequent
> interrupts.
Yes, but I want to see exactly where I will lose deterministic IRQ latency, which is extremely important in real-time signal processing. So, what causes this jitter: is it Xen limits, ARM limits, etc.? It would be nice to know; I'll share all the results I get.

> I would also double check that you are not using any other devices or
> virtual interfaces in your baremetal app because that could negatively
> affect the numbers.
I checked the bare-metal app and I think there are no other devices that the bm app is using.

> Linux by default uses the virtual
> timer interface ("arm,armv8-timer"). I would double check that the
> baremetal app is not doing the same -- you don't want to be using two
> timers when doing your measurements.
Hmm, I'm not sure how to check that; I could send the bare-metal app if that helps. It's created in Xilinx SDK 2017.4. Also, should I move to Xilinx SDK 2018.2, because I'm using PetaLinux 2018.2? I'm also using a hardware description file for SDK that was created in Vivado 2017.4. Could all this be a "not matching version" problem (I don't think so, because the bm app works)?
Meng mentioned in some of his earlier posts:
> Even though the app. is the only one running on the CPU, the CPU may
> be used to handle other interrupts and its context (such as TLB and
> cache) might be flushed by other components. When these happen, the
> interrupt handling latency can vary a lot.
What do you think about this? I don't know how I would check it.
I also tried using the default scheduler (removed sched=null and vwfi=native), and the jitter is 10 us when an interrupt is generated every 10 us.
Thanks in advance!
Milan
Re: [Xen-devel] Xen optimization
Hi,

> I think we want to fully understand how many other interrupts the
> baremetal guest is receiving. To do that, we can modify my previous
> patch to suppress any debug messages for virq=68. That way, we should
> only see the other interrupts. Ideally there would be none.
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 5a4f082..b7a8e17 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -577,7 +577,11 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v,
> unsigned int virq,
>      /* the irq is enabled */
>      if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
> +    {
>          gic_raise_guest_irq(v, virq, priority);
> +        if ( d->domain_id != 0 && virq != 68 )
> +            printk("DEBUG virq=%d local=%d\n", virq, v == current);
> +    }
>
>      list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
>      {

When I apply this patch there are no prints nor debug messages in xl dmesg. So the bare-metal guest receives only interrupt 68, which is good.

> Next step would be to verify that there are no other physical interrupts
> interrupting the vcpu execution other than irq=68. We should be able to
> check that with the following debug patch:
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e524ad5..b34c3e4 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -381,6 +381,13 @@ void gic_interrupt(struct cpu_user_regs *regs, int
> is_fiq)
>      /* Reading IRQ will ACK it */
>      irq = gic_hw_ops->read_irq();
>
> +    if ( current->domain->domain_id > 0 && irq != 68 )
> +    {
> +        local_irq_enable();
> +        printk("DEBUG irq=%d\n", irq);
> +        local_irq_disable();
> +    }
> +
>      if ( likely(irq >= 16 && irq < 1020) )
>      {
>          local_irq_enable();

But when I apply this patch it prints forever:
(XEN) DEBUG irq=1023
Thanks in advance!
Milan
Re: [Xen-devel] Xen optimization
> Just add an && irq != 1023 to the if check.
Added it, and now when I create the bare-metal guest it prints only once:
(XEN) DEBUG irq=0
(XEN) d1v0 No valid vCPU found for vIRQ32 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ33 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ34 in the target list (0x2). Skip it
root@uz3eg-iocc-2018-2:~# (XEN) d1v0 No valid vCPU found for vIRQ35 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ36 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ37 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ38 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ39 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ40 in the target list (0x2). Skip it
(XEN) d1v0 No valid vCPU found for vIRQ41 in the target list (0x2). Skip it
The "No valid vCPU found" part always prints only once when I create this bare-metal guest, like I mentioned in earlier replies, and we said it doesn't do any harm. So the new information from this patch is:
(XEN) DEBUG irq=0
also printed only once.
Forgot to mention in the reply before this one: I added serrors=panic and it didn't make any change, the numbers are the same.
Thanks in advance!
Milan
Re: [Xen-devel] Xen optimization
Hi,

> On Wed, Oct 24, 2018 at 2:24 AM Stefano Stabellini wrote:
> It is good that there are no physical interrupts interrupting the cpu.
> serrors=panic makes the context switch faster. I guess there are not
> enough context switches to make a measurable difference.

Yes, when I did:

grep ctxt /proc/2153/status

I got:

voluntary_ctxt_switches: 5
nonvoluntary_ctxt_switches: 3

> I don't have any other things to suggest right now. You should be able
> to measure an overall 2.5us IRQ latency (if the interrupt rate is not
> too high).

This bare-metal application is the most suspicious part, indeed. Still waiting for an answer on the Xilinx forum.

> Just to be paranoid, we might also want to check the following; again, it
> shouldn't get printed:
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 5a4f082..6cf6814 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -532,6 +532,8 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v, unsigned int virq,
>      struct pending_irq *iter, *n;
>      unsigned long flags;
>
> +    if ( d->domain_id != 0 && virq != 68 )
> +        printk("DEBUG virq=%d local=%d\n", virq, v == current);
>      /*
>       * For edge triggered interrupts we always ignore a "falling edge".
>       * For level triggered interrupts we shouldn't, but do anyways.

Checked it again, no prints. I hoped I would discover some vIRQs or pIRQs slowing things down, but no, no prints. I might try something other than this bare-metal application, because this Xilinx SDK example is very suspicious.

Thank you for your time.
Milan
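The context-switch counters quoted above come straight from the process's procfs status file. A quick way to reproduce the check for any process (here using /proc/self instead of my PID 2153, which is specific to my setup):

```shell
# Print voluntary/nonvoluntary context-switch counters for a process.
# /proc/self resolves to the PID of the process doing the read (here, grep);
# substitute a numeric PID to inspect another process.
grep ctxt /proc/self/status
```

The values themselves vary from run to run; what matters for the latency discussion is that both counters stay small, meaning the vCPU is almost never being switched out.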
Re: [Xen-devel] Xen optimization
On Thu, Oct 25, 2018 at 1:30 PM Julien Grall wrote:
>
> Hi Milan,

Hi Julien,

> Sorry if it was already asked. Can you provide your .config for your
> test?

Yes, of course. The bare-metal guest's .cfg file is in the attachment (if that is what you asked for :) ).

> Do you have DEBUG enabled?

I'm not sure where exactly I should disable it. Line 18 of the xl dmesg file in the attachment says debug=n; I'm not sure if that is the DEBUG you are talking about. Also, if I add prints somewhere in the code I can see them; does that mean DEBUG is enabled? If yes, can you tell me where exactly I should disable it?

Thanks in advance!
Milan

name = "test"
kernel = "only_timer.bin"
memory = 8
vcpus = 1
cpus = [1]
irqs = [ 48, 54, 68, 69, 70 ]
iomem = [ "0xff010,1", "0xff110,1", "0xff120,1", "0xff130,1", "0xff140,1", "0xff0a0,1" ]

(XEN) Checking for initrd in /chosen
(XEN) Initrd 02bd7000-05fffd6d
(XEN) RAM: - 7fef
(XEN)
(XEN) MODULE[0]: 07ff4000 - 07ffc080 Device Tree
(XEN) MODULE[1]: 02bd7000 - 05fffd6d Ramdisk
(XEN) MODULE[2]: 0008 - 0318 Kernel
(XEN) RESVD[0]: 07ff4000 - 07ffc000
(XEN) RESVD[1]: 02bd7000 - 05fffd6d
(XEN)
(XEN) Command line: console=dtuart dtuart=serial0 dom0_mem=1024M bootscrub=0 dom0_max_vcpus=1 dom0_vcpus_pin=true timer_slop=0 sched=null vwfi=native serrors=panic
(XEN) Placing Xen at 0x7fc0-0x7fe0
(XEN) Update BOOTMOD_XEN from 0600-06108d81 => 7fc0-7fd08d81
(XEN) Domain heap initialised
(XEN) Booting using Device Tree
(XEN) Looking for dtuart at "serial0", options ""
Xen 4.11.1-pre
(XEN) Xen version 4.11.1-pre (milan@) (aarch64-xilinx-linux-gcc (GCC) 7.2.0) debug=n Wed Oct 24 10:11:47 CEST 2018
(XEN) Latest ChangeSet: Mon Sep 24 16:07:33 2018 -0700 git:8610a91abc-dirty
(XEN) Processor: 410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
(XEN) 64-bit Execution:
(XEN) Processor Features:
(XEN) Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN) Extensions: FloatingPoint AdvancedSIMD
(XEN) Debug Features: 10305106
(XEN) Auxiliary Features:
(XEN) Memory Model Features: 1122
(XEN) ISA Features: 00011120
(XEN) 32-bit Execution:
(XEN) Processor Features: 0131:00011011
(XEN) Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN) Extensions: GenericTimer Security
(XEN) Debug Features: 03010066
(XEN) Auxiliary Features:
(XEN) Memory Model Features: 10201105 4000 0126 02102211
(XEN) ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 9 KHz
(XEN) GICv2 initialization:
(XEN) gic_dist_addr=f901
(XEN) gic_cpu_addr=f902
(XEN) gic_hyp_addr=f904
(XEN) gic_vcpu_addr=f906
(XEN) gic_maintenance_irq=25
(XEN) GICv2: Adjusting CPU interface base to 0xf902f000
(XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
(XEN) Using scheduler: null Scheduler (null)
(XEN) Initializing null scheduler
(XEN) WARNING: This is experimental software in development.
(XEN) Use at your own risk.
(XEN) Allocated console ring of 16 KiB.
(XEN) Bringing up CPU1
(XEN) Bringing up CPU2
(XEN) Bringing up CPU3
(XEN) Brought up 4 CPUs
(XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
(XEN) P2M: 3 levels with order-1 root, VTCR 0x80023558
(XEN) I/O virtualisation enabled
(XEN) - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading kernel from boot module @ 0008
(XEN) Loading ramdisk from boot module @ 02bd7000
(XEN) Allocating 1:1 mappings totalling 1024MB for dom0:
(XEN) BANK[0] 0x002000-0x006000 (1024MB)
(XEN) Grant table range: 0x007fc0-0x007fc4
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading zImage from 0008 to 2008-2318
(XEN) Loading dom0 initrd from 02bd7000 to 0x2820-0x2b628d6d
(XEN) Loading dom0 DTB to 0x2800-0x28006e46
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 280kB init memory.
(XEN) d0v0: vGICD: unhandled word write 0x to ICACTIVER4
(XEN) d0v0: vGICD: unhandled word write 0x to ICACTIVER8
(XEN) d0v0: vGICD: unhandled word write 0x to ICACTIVER12
(XEN) d0v0: vGICD: unhandled word write 0x to ICACTIVER16
(XEN) d0v0: vGICD: unhandled word write 0x to ICACTIVER20
(XEN) d0v
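For reference, the guest config at the top of this mail uses xl's page-granular passthrough syntax. An annotated copy (the comments are my reading of the xl.cfg semantics; the values are unchanged from my file):

name   = "test"
kernel = "only_timer.bin"
memory = 8            # guest RAM in MiB
vcpus  = 1
cpus   = [1]          # pin the single vCPU to physical CPU 1
irqs   = [ 48, 54, 68, 69, 70 ]   # physical IRQs passed through to the guest
# Each iomem entry is "START,NUM": START is a physical page frame number
# (physical address >> 12) and NUM a count of 4K pages, so "0xff010,1"
# maps the one page at physical address 0xff010000.
iomem  = [ "0xff010,1", "0xff110,1", "0xff120,1", "0xff130,1", "0xff140,1", "0xff0a0,1" ]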
Re: [Xen-devel] Xen optimization
> I was asking the Xen configuration (xen/.config) to know what you have
> enabled in Xen.

Oh, sorry. Because I'm building Xen from the git repository, here is the link where you can check the file you mentioned:
https://github.com/Xilinx/xen/tree/xilinx/versal/xen

> It might, OTOH, be wise to turn it on when investigating the system
> behavior (but that's a general remark, I don't know to what Julien was
> referring to in this specific case).

I will definitely try to enable DEBUG.

Milan
Re: [Xen-devel] Xen optimization
Sorry for the late reply.

> I am afraid no. .config is generated during building time. So can you
> paste here please.

The ".config" file is in the attachment. I also tried Xen 4.9 and got almost the same numbers; jitter is smaller by 150 ns, which isn't a significant change at all.

Milan

#
# Automatically generated file; DO NOT EDIT.
# Xen/arm 4.11.1-pre Configuration
#
CONFIG_64BIT=y
CONFIG_ARM_64=y
CONFIG_ARM=y
CONFIG_ARCH_DEFCONFIG="arch/arm/configs/arm64_defconfig"

#
# Architecture Features
#
CONFIG_NR_CPUS=128
# CONFIG_ACPI is not set
CONFIG_GICV3=y
# CONFIG_HAS_ITS is not set
# CONFIG_NEW_VGIC is not set
CONFIG_SBSA_VUART_CONSOLE=y

#
# ARM errata workaround via the alternative framework
#
CONFIG_ARM64_ERRATUM_827319=y
CONFIG_ARM64_ERRATUM_824069=y
CONFIG_ARM64_ERRATUM_819472=y
CONFIG_ARM64_ERRATUM_832075=y
CONFIG_ARM64_ERRATUM_834220=y
CONFIG_HARDEN_BRANCH_PREDICTOR=y
CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR=y
CONFIG_ALL_PLAT=y
# CONFIG_QEMU is not set
# CONFIG_RCAR3 is not set
# CONFIG_MPSOC is not set
CONFIG_ALL64_PLAT=y
# CONFIG_ALL32_PLAT is not set
CONFIG_MPSOC_PLATFORM=y

#
# Common Features
#
CONFIG_HAS_ALTERNATIVE=y
CONFIG_HAS_DEVICE_TREE=y
# CONFIG_MEM_ACCESS is not set
CONFIG_HAS_PDX=y
CONFIG_TMEM=y
# CONFIG_XSM is not set

#
# Schedulers
#
CONFIG_SCHED_CREDIT=y
CONFIG_SCHED_CREDIT2=y
CONFIG_SCHED_RTDS=y
# CONFIG_SCHED_ARINC653 is not set
CONFIG_SCHED_NULL=y
CONFIG_SCHED_CREDIT_DEFAULT=y
# CONFIG_SCHED_CREDIT2_DEFAULT is not set
# CONFIG_SCHED_RTDS_DEFAULT is not set
# CONFIG_SCHED_NULL_DEFAULT is not set
CONFIG_SCHED_DEFAULT="credit"
# CONFIG_LIVEPATCH is not set
CONFIG_SUPPRESS_DUPLICATE_SYMBOL_WARNINGS=y
CONFIG_CMDLINE=""

#
# Device Drivers
#
CONFIG_HAS_NS16550=y
CONFIG_HAS_CADENCE_UART=y
CONFIG_HAS_MVEBU=y
CONFIG_HAS_PL011=y
CONFIG_HAS_SCIF=y
CONFIG_HAS_PASSTHROUGH=y
CONFIG_ARM_SMMU=y
CONFIG_VIDEO=y
CONFIG_HAS_ARM_HDLCD=y
CONFIG_DEFCONFIG_LIST="$ARCH_DEFCONFIG"

#
# Debugging Options
#
# CONFIG_DEBUG is not set
# CONFIG_FRAME_POINTER is not set
# CONFIG_COVERAGE is not set
# CONFIG_LOCK_PROFILE is not set
# CONFIG_PERF_COUNTERS is not set
# CONFIG_VERBOSE_DEBUG is not set
# CONFIG_DEVICE_TREE_DEBUG is not set
# CONFIG_SCRUB_DEBUG is not set
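The "# CONFIG_DEBUG is not set" line under "Debugging Options" matches the debug=n shown in the xl dmesg version banner. If a debug build is wanted for investigation, the usual Kconfig route (a sketch, assuming a standard Xen source tree) is to run "make -C xen menuconfig", enable DEBUG under "Debugging Options", and rebuild; equivalently, the .config can be edited directly:

# In xen/.config, flip the debug option, then let Kconfig fill in the
# dependent defaults before rebuilding:
CONFIG_DEBUG=y
# followed by: make -C xen olddefconfig && make dist-xen

Note that a debug build adds assertions and verbose checks, so latency numbers measured with it will not match a release (debug=n) build.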
Re: [Xen-devel] Xen optimization
Hi,

> Interesting. Could you confirm the commit you were using (or the point
> release)?
> Stefano's numbers were based on commit "fuzz: update README.afl example"
> 55a04feaa1f8ab6ef7d723fbb1d39c6b96ad184a, which is an unreleased version
> of Xen.

All the Xens I used are from the Xilinx git repository, because I have an UltraZed-EG board with a Zynq UltraScale+ SoC. Under branches you can find Xen 4.8, 4.9, etc. I always used the latest commit: c227fe68589bdfb36b85f7b78c034a40c95b9a30

Here is the link to it:
https://github.com/Xilinx/xen/tree/xilinx/stable-4.9

Best regards,
Milan