https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #42 from Mark Millard ---
I just checked a more recent PkgBase kernel & world based
system on the 32 GiByte Windows Dev Kit 2023 without
swap space being enabled. USB3 UFS boot media with separate
USB3 ZFS media imported later.
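Importing a ZFS pool that lives on separately attached media generally looks
like the following; the pool name here is only an illustration, not the one
used on this system:
# zpool import              # list pools visible on the attached devices
# zpool import examplepool  # import the named pool
# zpool status examplepool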
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Mark Linimon changed:
See Also: added https://bugs.freebsd.org/bu
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #41 from Henrich Hartzer ---
(In reply to Mark Millard from comment #40)
Thank you! I opened this bug for it:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=280846
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #40 from Mark Millard ---
(In reply to Henrich Hartzer from comment #39)
"was killed: a thread waited too long to allocate a page" and how
you produce it is likely not a good match to the context for the
panics or for the "fail
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #39 from Henrich Hartzer ---
Ok, I finally had an OOM again. Here's the dmesg excerpt:
pid 95407 (firefox), jid 0, uid 1003, was killed: a thread waited too long to
allocate a page
Not sure if related to ZFS or not.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #38 from Henrich Hartzer ---
I turned off compression on the zroots of the two machines. This appeared to
work and function as well as one might hope.
I ran the full test all the way through on an 8GB memory system with one older
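As a rough sketch of the step described above (the dataset name is an
assumption, not taken from this report), disabling compression on an existing
root dataset and confirming it might look like:
# zfs set compression=off zroot
# zfs get -r compression zroot
Note that already-written blocks stay compressed; only data written after the
change is stored uncompressed.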
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #37 from Mark Millard ---
(In reply to Henrich Hartzer from comment #36)
The original description said, in part,
"create a ZFS filesystem with compression=off".
My testing followed that. Having some form of
compression on would
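Creating a fresh dataset with compression disabled, as the original
description calls for, would be along these lines (pool and dataset names are
assumed for illustration):
# zfs create -o compression=off zroot/iozone-test
# zfs get compression zroot/iozone-test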
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #36 from Henrich Hartzer ---
Re: My Firefox OOMs. I haven't been able to reproduce it lately as I've been
more careful to have fewer tabs open. I tried to force it to happen, but didn't
have any luck. I'll try to coax it over th
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #35 from Mark Millard ---
(In reply to Mark Millard from comment #33)
Going in a different direction: I set up 118 GiBytes of
swap (so: RAM+SWAP=150 GiBytes).
It made little difference, things apparently being
killed. (No OOM
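For reference, one common way to add that much swap on FreeBSD is a
file-backed md device, roughly as below; the path and size are placeholders
rather than the exact setup used here:
# dd if=/dev/zero of=/usr/swap0 bs=1m count=118k   # about 118 GiB swap file
# chmod 0600 /usr/swap0
# echo 'md99 none swap sw,file=/usr/swap0,late 0 0' >> /etc/fstab
# swapon -aL
# swapinfo -h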
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #34 from Mark Millard ---
FYI:
With vfs.nullfs.cache_vnodes=0 I instead got the
"failed to reclaim memory" type of failure, losing
control because of what had been killed.
So the nullfs VNODE caching is not a fundamental
part
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #33 from Mark Millard ---
(In reply to Mark Millard from comment #32)
FYI:
This is still with the very simple zpool type: a
single GPT partition on just one physical drive.
No other zpool present.
The drive was attached via U
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #32 from Mark Millard ---
(In reply to Mark Millard from comment #31)
The console output's backtrace:
panic: pmap_growkernel: no memory to grow kernel
cpuid = 6
time = 1710740968
KDB: stack backtrace:
db_trace_self() at db_tra
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #31 from Mark Millard ---
(In reply to Mark Millard from comment #30)
The vmstat -z output:
vmstat -z
ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP XDOM
UMA Kegs: 512, 0, 92
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #30 from Mark Millard ---
The dump worked. The backtrace related part of core.txt.0
follows. The system was booted from a PkgBase kernel and
world for the test, not from a personal build.
# less /var/crash/core.txt.0
aarch64-m
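For context, getting a core.txt.N summary like this normally just requires
crash dumps to be enabled beforehand; a minimal sketch is:
# sysrc dumpdev=AUTO   # let rc(8) pick a suitable dump device at boot
# dumpon -l            # show the currently active dump device
# ls /var/crash        # core.txt.N files appear here after the next panic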
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #29 from Mark Millard ---
On an aarch64 with 32 GiBytes of RAM (no swap enabled)
I got a panic:
panic: pmap_growkernel: no memory to grow kernel
So, I'd say: Reproduced.
I'll note that I rebooted between the iozone write
step
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #28 from pascal.guitier...@gmail.com ---
(In reply to Mark Millard from comment #27)
I've tried using a -stable kernel (FreeBSD 14.0-STABLE #0
stable/14-n266971-91c1c36102a6: Thu Mar 14 04:49:15 UTC 2024), same result.
command
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #27 from Mark Millard ---
(In reply to pascal.guitierrez from comment #26)
I'll note that I never used mount_nullfs explicitly at
any point but I still got the huge difference in VNODE
results in "vmstat -z" when doing your tes
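The VNODE zone being referred to can be pulled out of the full vmstat -z
listing with something like:
# vmstat -z | grep -E 'ITEM|VNODE'   # header line plus the VNODE zone counters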
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #26 from pascal.guitier...@gmail.com ---
(In reply to Mark Millard from comment #24)
Hi Mark,
I can try with those settings and on -stable; however, I'm not using nullfs in
my tests, so those loader settings won't have any effect
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #25 from Mark Millard ---
(In reply to Mark Millard from comment #23)
22941Mi Wired looks to be where it finally stabilized while
the system was left idle. The later decrements that I happened
to watch were in smaller sized chu
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #24 from Mark Millard ---
(In reply to pascal.guitierrez from comment #11)
If you can test main, stable/14, or stable/13, then
testing with:
nullfs_load="YES"
vfs.nullfs.cache_vnodes=0
zfs_load="YES"
in /boot/loader.conf c
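After rebooting with those lines in place, whether the module and tunable
actually took effect can be checked with, for example:
# kldstat | grep nullfs
# sysctl vfs.nullfs.cache_vnodes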
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #23 from Mark Millard ---
Looks to me like what changed so far was mostly the decreases:
UMA Slabs 0: 80, 0,43974304, 40,44081542, 0, 0, 0
to:
UMA Slabs 0: 80, 0,19620210,24354134,44
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #22 from Mark Millard ---
(In reply to Mark Millard from comment #21)
It has progressed to 85719Mi and looks to still be decreasing.
For reference:
# vmstat -z
ITEM SIZE LIMIT USED FREE REQ FAIL
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #21 from Mark Millard ---
(In reply to Mark Millard from comment #20)
Hmm. Turns out Wired decreases show in top when top also shows the likes of
(ordered by cpu):
PID JID USERNAME PRI NICE SIZE RES STATE C
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #20 from Mark Millard ---
(In reply to Mark Millard from comment #19)
I updated to a PkgBase vintage of:
# uname -apKU
FreeBSD 7950X3D-ZFS 15.0-CURRENT FreeBSD 15.0-CURRENT main-n268827-75464941dc17
GENERIC-NODEBUG amd64 amd64
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #19 from Mark Millard ---
(In reply to Mark Millard from comment #18)
For reference:
# vmstat -z
ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP XDOM
kstack_cache: 16384, 0,1986,
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #18 from Mark Millard ---
(In reply to pascal.guitierrez from comment #17)
I tried this in my context (so: main). I did not get OOM activity
or any hangup. But I do see that:
# rm iozone.DUMMY.*
after the 2nd iozone run resul
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #17 from pascal.guitier...@gmail.com ---
(In reply to Kurt Jaeger from comment #13)
Hi Kurt,
the issue is triggered by the read tests, and from your logs it looks like the
iozone test did not conduct those tests (-i 1)?
from wha
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
k...@denninger.net changed:
CC: added k...@denninger.net
--- Comment
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #15 from Mark Millard ---
(In reply to Kurt Jaeger from comment #13)
An interesting oddity in your top output is that Wired stayed
at 25G after ARC memory use dropped off. From the last
reported values in the file:
Mem: 29M A
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #14 from Mark Millard ---
(In reply to pascal.guitierrez from comment #11)
Given the message, it might be worth reporting whether the likes
of sysctl vm.pageout_oom_seq=120 (or larger?) make any
difference and what the difference is, e
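vm.pageout_oom_seq can be changed at runtime and, if it helps, kept across
reboots; a minimal sketch:
# sysctl vm.pageout_oom_seq        # show the current value
# sysctl vm.pageout_oom_seq=120    # raise it for this boot
# echo 'vm.pageout_oom_seq=120' >> /etc/sysctl.conf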
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Kurt Jaeger changed:
CC: added p...@freebsd.org
--- Comment #13 fro
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #12 from pascal.guitier...@gmail.com ---
(In reply to pascal.guitierrez from comment #11)
(In reply to Henrich Hartzer from comment #9)
Hi Henrich,
I haven't tested on 13.X, only 14.0, and can reproduce it reliably.
The symptom is t
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #11 from pascal.guitier...@gmail.com ---
I just tested on two machines running 14.0-p5 with 32GB RAM backed by NVMe
drives, and I got the same result, only this time it literally took a few
seconds to deadlock.
The messages
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #10 from Mark Millard ---
What OOM console messages are being generated? The kernel has
multiple, distinct OOM messages. Which type(s) are you
getting? :
"failed to reclaim memory"
"a thread waited too long to allocate a page"
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Henrich Hartzer changed:
CC: added henrichhart...@tuta.io
--- Comme
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Mark Millard changed:
CC: added marklmi26-f...@yahoo.com
--- Commen
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Vladimir Druzenko changed:
CC: added f...@freebsd.org
--- Comment #
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #6 from pascal.guitier...@gmail.com ---
Ping? I can reproduce this livelock on demand in only a few minutes, if
anyone is interested in taking a look.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #5 from pascal.guitier...@gmail.com ---
(In reply to Vladimir Druzenko from comment #4)
cat /boot/loader.conf.local
vfs.zfs.arc_max=2147483648
sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 2147483648
run iozone -i 1 -l 512 -r 4k -s
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Vladimir Druzenko changed:
CC: added zfs-de...@freebsd.org
--- Comm
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #3 from pascal.guitier...@gmail.com ---
(In reply to Vladimir Druzenko from comment #2)
Thanks for your reply. I can still exceed arc_max even when setting it via
loader.conf.
last pid: 62730; load averages: 6.28, 8.93, 6.72
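One way to compare the configured cap with what the ARC actually holds,
independent of top, is:
# sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.size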
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Vladimir Druzenko changed:
CC: added v...@freebsd.org
--- Comment #
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
--- Comment #1 from pascal.guitier...@gmail.com ---
Here is the output from top at the point of the freeze:
last pid: 98721; load averages: 2.43, 1.34, 0.55
up 0+00:04:36
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389
Bug ID: 277389
Summary: Reproduceable low memory freeze on 14.0-RELEASE-p5
Product: Base System
Version: 14.0-RELEASE
Hardware: Any
OS: Any
Status: New