On 04/07/18 10:18, Peter wrote:
> Hi all,
> [...]
Thanks for all the investigation!
> 3. kern.sched.preempt_thresh
>
> I could make the problem disappear by changing kern.sched.preempt_thresh
> from the default 80 to either 11 (i5-3570T) or 7 (p3) or smaller. This
> seems to correspond to the disk interrupt threads, which run at
> intr:12 (i5-3570T).
Hi Stefan,
I'm glad to see you're thinking along similar paths as I did. But let
me first answer your question straight away, and sort out the remainder
afterwards.
> I'd be interested in your results with preempt_thresh set to a value
> of e.g. 190.
There is no difference: any value above 7 behaves the same.
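For anyone reproducing this: the threshold is a runtime sysctl, so it can
be changed and restored on the fly. A sketch, using the values discussed
above:

```shell
# Read the current preemption threshold (the ULE default is 80).
sysctl kern.sched.preempt_thresh

# Values of 7 or below made the starvation disappear in the tests
# above; anything higher reportedly behaves the same.
sysctl kern.sched.preempt_thresh=7

# Restore the default afterwards.
sysctl kern.sched.preempt_thresh=80
```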
I forgot to attach the commands used to create the logs - they are ugly
anyway:
[1]
dtrace -q -n '::sched_choose:return {
        @[((struct thread *)arg1)->td_proc->p_pid,
          stringof(((struct thread *)arg1)->td_proc->p_comm),
          timestamp] = count();
    } tick-1s { exit(0); }' \
  | sort -nk 3 | awk '$1 > 27 {$
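The awk stage above is cut off; judging from the `$1 > 27` guard it
apparently drops the low-numbered kernel threads from the aggregation
output (fields: PID, command name, timestamp, count). A hypothetical
completion, exercised on made-up sample lines, could look like:

```shell
# Sample lines standing in for real dtrace aggregation output;
# fields are PID, command name, timestamp, count.
printf '12 intr 900 5\n4711 dnetc 1000 9\n' \
  | sort -nk 3 \
  | awk '$1 > 27 { print }'
```

Here only the dnetc line survives; the real script may print different
fields, so treat the `{ print }` body as a placeholder.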
On 07.04.18 at 16:18, Peter wrote:
> 3. kern.sched.preempt_thresh
>
> I could make the problem disappear by changing kern.sched.preempt_thresh from
> the default 80 to either 11 (i5-3570T) or 7 (p3) or smaller. This seems to
> correspond to the disk interrupt threads, which run at intr:12 (i5-3570T).
On 04/07/18 10:18, Peter wrote:
> [...]
> B. Findings:
>
> 1. Filesystem
>
> I could never reproduce this when reading from plain UFS. Only when
> reading from ZFS (direct or via l2arc).
> [...]
My consistent way of reproducing the problem was to run ports/misc/dnetc
while trying to b
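For reference, the usual way to get that load generator onto a FreeBSD
box is via the ports tree or a package (a sketch; whether a prebuilt
package exists for your release is an assumption):

```shell
# Build from the ports tree...
cd /usr/ports/misc/dnetc && make install clean

# ...or install a prebuilt package, if one is available.
pkg install dnetc

# dnetc is a pure compute hog (it typically runs at low priority),
# i.e. exactly the kind of competitor described in this thread.
```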
Hi all,
in the meantime I did some tests and found the following:
A. The Problem:
---
On a single CPU, there are -exactly- two processes runnable:
One is doing mostly compute without I/O - this can be a compressing job
or similar; in the tests I used simply an endless loop. Let's ca
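The compute-only competitor can be as trivial as a shell busy-loop; a
minimal sketch of starting and stopping one (the variable name is mine):

```shell
# Start a pure CPU burner in the background.
sh -c 'while :; do :; done' &
HOG=$!

# ... run the I/O-bound workload and observe the starvation ...
sleep 1

# Stop the burner again.
kill "$HOG" && echo "hog stopped"
```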