Andriy Gapon wrote:
> Not everyone has a postgres server and a suitable database.
> Could you please devise a test scenario that demonstrates the problem
> and that anyone could run?
Alright, simple things first: I can reproduce the effect without
postgres, with regular commands. I run this on my database file:
# lz4 2058067.1 /dev/null
And this is the throughput I get:
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
cache           -      -      -      -      -      -
ada1s4      7.08G  10.9G    889      0  7.07M  42.3K
  PID USERNAME PRI NICE   SIZE    RES STATE   TIME   WCPU COMMAND
51298 root      87    0 16184K  7912K RUN     1:00 51.60% lz4
I start the piglet:
$ while true; do :; done
And, same effect:
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
cache           -      -      -      -      -      -
ada1s4      7.08G  10.9G     10      0  82.0K      0
  PID USERNAME PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
 1911 admin     98    0  7044K  2860K RUN     65:48 89.22% bash
51298 root      52    0 16184K  7880K RUN      0:05  0.59% lz4
It does *not* happen with plain "cat" instead of "lz4".
What may or may not have an influence here: the filesystem in question
uses an 8k block size and is 100% resident in L2ARC.
What is also interesting: I first tried this with "tar" (no effect, it
behaves properly), then with "tar --lz4". In the latter case "tar"
starts "lz4" as a sub-process, so there are three processes in play -
and then the effect does happen, but to a lesser extent: about 75 I/Os
per second.
So, it seems quite clear that this has something to do with the logic
inside the scheduler.
_______________________________________________
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable