https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=225535

--- Comment #5 from Aleksander Derevianko <ae...@list.ru> ---
Additional info: I have tried the following configuration: Xen (CentOS 7 in
domain 0) running on Moxa hardware, with FreeBSD 10.3-RELEASE i386 as a guest
under Xen and xn0 configured as a bridge. Between two copies of FreeBSD 10.3
running under Xen on different host computers, this test application produces
much better results:

grep times fspa2_fspa1_virt.txt | awk '{print $3 " " $4 " " $6 " " $8 " " $10;}' | sort | uniq -c
9863 send_sync 0 0 0 0
38565 send_sync 0 0 0 1
   2 send_sync 0 0 0 10
   9 send_sync 0 0 0 11
  16 send_sync 0 0 0 12
  10 send_sync 0 0 0 13
   1 send_sync 0 0 0 17
301599 send_sync 0 0 0 2
   1 send_sync 0 0 0 233
698526 send_sync 0 0 0 3
5195 send_sync 0 0 0 4
2770 send_sync 0 0 0 5
1361 send_sync 0 0 0 6
  14 send_sync 0 0 0 7
   5 send_sync 0 0 0 8
   4 send_sync 0 0 0 9
   1 send_sync 0 0 1 2
   1 send_sync 0 0 2 0
   4 send_sync 0 0 2 1
  51 send_sync 0 0 2 2
  79 send_sync 0 0 2 3
6545 send_sync 0 1 0 0
21849 send_sync 0 1 0 1
   1 send_sync 0 1 0 10
   1 send_sync 0 1 0 11
   1 send_sync 0 1 0 12
39671 send_sync 0 1 0 2
 244 send_sync 0 1 0 3
  23 send_sync 0 1 0 4
   8 send_sync 0 1 0 5
   1 send_sync 0 1 0 6
   1 send_sync 0 1 2 0
   1 send_sync 0 14 0 2
17499 send_sync 0 2 0 0
58073 send_sync 0 2 0 1
   1 send_sync 0 2 0 10
   3 send_sync 0 2 0 11
   2 send_sync 0 2 0 12
115462 send_sync 0 2 0 2
 572 send_sync 0 2 0 3
   7 send_sync 0 2 0 4
   4 send_sync 0 2 0 5
   1 send_sync 0 2 0 7
   2 send_sync 0 2 0 9
7959 send_sync 0 3 0 0
20669 send_sync 0 3 0 1
   1 send_sync 0 3 0 11
   1 send_sync 0 3 0 12
49220 send_sync 0 3 0 2
 267 send_sync 0 3 0 3
   3 send_sync 0 3 0 4
   2 send_sync 0 3 0 5
   1 send_sync 0 3 1 0
  23 send_sync 0 4 0 0
  35 send_sync 0 4 0 1
  54 send_sync 0 4 0 2
   2 send_sync 0 4 0 3
   1 send_sync 0 49 0 1
   1 send_sync 0 5 0 1
   1 send_sync 0 5 0 2
   1 send_sync 0 6 0 2
   1 send_sync 0 7 0 0
   1 send_sync 0 7 0 3
   1 send_sync 2 0 0 0
   2 send_sync 2 0 0 1
  16 send_sync 2 0 0 2
  41 send_sync 2 0 0 3
   1 send_sync 2 1 0 1
   1 send_sync 2 1 0 2

Out of 1,396,680 samples, only 52 are >= 10 ms and only 2 are >= 20 ms.
From my point of view, this means the problem is somewhere in the OS<->hardware
layer, not in the OS itself.



(In reply to amvandemore from comment #2)

No, I have not tried multiqueue. First, it only gives an advantage under high
load, and 40 Kbytes * 3 = 120 Kbytes/sec over a 1 Gbit link is a very low load.
Second, under 6.3-RELEASE it works fine without multiqueue.

(In reply to Eugene Grosbein from comment #3)
The dmesg output will be added as an attachment.

root@fspa1:~/clock/new_res # sysctl kern.timecounter
kern.timecounter.tsc_shift: 1
kern.timecounter.smp_tsc_adjust: 0
kern.timecounter.smp_tsc: 1
kern.timecounter.invariant_tsc: 1
kern.timecounter.fast_gettime: 1
kern.timecounter.tick: 1
kern.timecounter.choice: TSC-low(1000) ACPI-fast(900) i8254(0) HPET(950) dummy(-1000000)
kern.timecounter.hardware: TSC-low
kern.timecounter.alloweddeviation: 0
kern.timecounter.stepwarnings: 0
kern.timecounter.tc.TSC-low.quality: 1000
kern.timecounter.tc.TSC-low.frequency: 1247191596
kern.timecounter.tc.TSC-low.counter: 233727093
kern.timecounter.tc.TSC-low.mask: 4294967295
kern.timecounter.tc.ACPI-fast.quality: 900
kern.timecounter.tc.ACPI-fast.frequency: 3579545
kern.timecounter.tc.ACPI-fast.counter: 10612645
kern.timecounter.tc.ACPI-fast.mask: 16777215
kern.timecounter.tc.i8254.quality: 0
kern.timecounter.tc.i8254.frequency: 1193182
kern.timecounter.tc.i8254.counter: 14761
kern.timecounter.tc.i8254.mask: 65535
kern.timecounter.tc.HPET.quality: 950
kern.timecounter.tc.HPET.frequency: 14318180
kern.timecounter.tc.HPET.counter: 3051462975
kern.timecounter.tc.HPET.mask: 4294967295


root@fspa1:~/clock/new_res # sysctl kern.eventtimer
kern.eventtimer.periodic: 0
kern.eventtimer.timer: LAPIC
kern.eventtimer.idletick: 0
kern.eventtimer.singlemul: 2
kern.eventtimer.choice: LAPIC(600) HPET(550) HPET1(440) HPET2(440) HPET3(440) HPET4(440) i8254(100) RTC(0)
kern.eventtimer.et.i8254.quality: 100
kern.eventtimer.et.i8254.frequency: 1193182
kern.eventtimer.et.i8254.flags: 1
kern.eventtimer.et.RTC.quality: 0
kern.eventtimer.et.RTC.frequency: 32768
kern.eventtimer.et.RTC.flags: 17
kern.eventtimer.et.HPET4.quality: 440
kern.eventtimer.et.HPET4.frequency: 14318180
kern.eventtimer.et.HPET4.flags: 3
kern.eventtimer.et.HPET3.quality: 440
kern.eventtimer.et.HPET3.frequency: 14318180
kern.eventtimer.et.HPET3.flags: 3
kern.eventtimer.et.HPET2.quality: 440
kern.eventtimer.et.HPET2.frequency: 14318180
kern.eventtimer.et.HPET2.flags: 3
kern.eventtimer.et.HPET1.quality: 440
kern.eventtimer.et.HPET1.frequency: 14318180
kern.eventtimer.et.HPET1.flags: 3
kern.eventtimer.et.HPET.quality: 550
kern.eventtimer.et.HPET.frequency: 14318180
kern.eventtimer.et.HPET.flags: 7
kern.eventtimer.et.LAPIC.quality: 600
kern.eventtimer.et.LAPIC.frequency: 49887687
kern.eventtimer.et.LAPIC.flags: 7

root@fspa1:~/clock/new_res # sysctl kern.hz
kern.hz: 1000

root@fspa1:~/clock/new_res # sysctl kern.eventtimer.periodic
kern.eventtimer.periodic: 0

I will try that and report the results here.

> And why do you use version 10.3, which has an "Expected EoL" of April 30, 2018?
> And 10.4 on October 31, 2018, like the whole 10.x line.

> Have you considered trying 11.1-RELEASE? It has many more changes to get 
> fixes, if found necessary.

We have a very long verification/release cycle because this software will be
used in critical infrastructure (railway automation). The latest security fixes
are not really important, because it will run in a very closed environment.
I will try updating to 10.4 and 11.1 and check whether that helps.

(In reply to Conrad Meyer from comment #4)
> What syscalls are you measuring delay over?  It seems that the column with 
> delay is different between your two sets of samples -- does this mean 
> anything?  Thanks. 

clock_gettime( CLOCK_MONOTONIC, ...);
See attached source code.
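
In case it helps, here is a minimal sketch of the kind of loop the test
application uses. The names, the UDP peer address and the load size below are
made up for illustration, and error handling is omitted; the actual logic is
in the attached source.

/*
 * Illustrative sketch only, not the real test application: take
 * CLOCK_MONOTONIC timestamps around each send/synchronization step
 * and log the difference in milliseconds.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static volatile uint64_t sink;  /* keeps the busy load from being optimized out */

/* Stand-in for the Fibonacci test load mentioned below. */
static void
fib_load(unsigned n)
{
        uint64_t a = 0, b = 1;

        while (n-- > 0) {
                uint64_t t = a + b;
                a = b;
                b = t;
        }
        sink = a;
}

static long
elapsed_ms(const struct timespec *t0, const struct timespec *t1)
{
        return (t1->tv_sec - t0->tv_sec) * 1000L +
            (t1->tv_nsec - t0->tv_nsec) / 1000000L;
}

int
main(void)
{
        int s = socket(AF_INET, SOCK_DGRAM, 0);  /* hypothetical UDP peer */
        struct sockaddr_in peer;
        char buf[1024] = { 0 };

        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(12345);
        inet_pton(AF_INET, "192.168.0.2", &peer.sin_addr);

        for (int i = 0; i < 100000; i++) {
                struct timespec t0, t1;

                clock_gettime(CLOCK_MONOTONIC, &t0);  /* before the send */
                sendto(s, buf, sizeof(buf), 0,
                    (struct sockaddr *)&peer, sizeof(peer));
                clock_gettime(CLOCK_MONOTONIC, &t1);  /* after the send */

                printf("send_sync %ld\n", elapsed_ms(&t0, &t1));

                fib_load(100000);  /* computational load between samples */
        }
        close(s);
        return (0);
}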

Actually, the delay under 6.3-RELEASE is lower (and the evaluation time is
higher) because 6.3-RELEASE is running on different hardware: the test load
(Fibonacci calculation) executes faster on the modern hardware.
I will try to install 6.3-RELEASE on the Moxa hardware, repeat the test and
report the result. Unfortunately, the test needs to run for at least 6 hours
before I get reliable results.
