On Jan 15, 2024, at 00:07, Lexi Winter wrote:

Mark Millard:
> You seem to be under the impression that "Inact" means "page is not
> dirty" and so can be freed without being written out to the swap
> space.

indeed, i was, because this is how sysutils/htop displays memory usage:

top(1)
Mem: 8502M Active, 15G Inact, 1568M Laundry, 5518M Wired,
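For what it's worth, the classes in a top(1) "Mem:" line like the one above can be pulled apart mechanically; a small sketch (the sample line is the one quoted here, and the unit handling is simplified to M/G suffixes only):

```shell
#!/bin/sh
# Sketch: split a top(1) "Mem:" line into its page classes,
# normalizing M/G suffixes to MiB.  Input line is from the post above.
printf '%s\n' 'Mem: 8502M Active, 15G Inact, 1568M Laundry, 5518M Wired,' |
awk '{
  for (i = 2; i <= NF; i += 2) {
    v = $i; cls = $(i + 1); gsub(/,/, "", cls)
    n = v + 0                       # numeric part, e.g. 8502
    u = substr(v, length(v), 1)     # unit suffix: M or G
    mib = (u == "G") ? n * 1024 : n
    printf "%s=%dMiB\n", cls, mib
  }
}'
```

Note that none of these classes says anything about dirtiness, which is the point of the correction above: Inact pages may still need to be laundered to swap before they can be freed.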
Ronald Klop:
> On 1/11/24 03:21, Lexi Winter wrote:
> > i'm building packages with poudriere on a system with 32GB memory, with
> > tmpfs and md disabled in poudriere (so it's using ZFS only) and with the
> > ZFS ARC limited to 8GB.
> My first guess would be that you are using a tmpfs tmp dir which …

running poudriere produces many kernel log messages like this:

Jan 10 21:40:00 ilythia kernel: swap_pager: out of swap space
Jan 10 21:40:00 ilythia kernel: swp_pager_getswapspace(2): failed
Jan 10 22:41:55 ilythia kernel: swap_pager: out of swap space
Jan 10 22:41:55 ilythia kernel: swp_pager_getswapspace(…): failed
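The setup described above (ARC capped at 8 GB, tmpfs and md off) is usually expressed in loader.conf and poudriere.conf; a hedged sketch, not taken verbatim from the thread (depending on the FreeBSD version the tunable is spelled vfs.zfs.arc_max or vfs.zfs.arc.max):

```
# /boot/loader.conf -- cap the ZFS ARC (8G mirrors the figure above)
vfs.zfs.arc_max="8G"

# /usr/local/etc/poudriere.conf -- build on ZFS only, as described
USE_TMPFS=no
```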
"… killed: out of swap space"

if you also got messages of the form:

"swap_pager_getswapspace(...): failed"

The other causes that I know of for the "out of swap space"
messages are:

Sustained low free RAM [via stays-runnable process(es)].
A sufficiently delayed pageout.
The sw…
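For the "sustained low free RAM" cause above, a mitigation often suggested in these threads is raising vm.pageout_oom_seq, which makes the pageout daemon retry reclaiming pages longer before the kernel starts killing processes; a sketch (the value is one people commonly report using, not a prescription):

```
# /etc/sysctl.conf -- delay OOM kills under sustained memory pressure
# (the default is 12; larger values make the kernel retry page
# reclamation longer before vm_pageout_oom() starts killing processes)
vm.pageout_oom_seq=120
```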
"-Zbinary-dep-depinfo" "-j" "1" "-v" "--release" "--frozen" "--features"
"panic-unwind backtrace compiler-builtins-c" "--manifest-path"
"/usr/ports/lang/rust/work/rustc-1.41.1-src/src/libtest/Cargo.toml"
"--message-format" "json-render-diagnostics"
^C^C^C
Build completed unsuccessfully in 0:00:55

Here I pressed ^C as the build actually continues despite several
rustdoc, python, and other processes being killed.

swap_pager: out of swap space
swp_pager_getswapspace(20): failed
swap_pager: out of swap space
swp_pager_getswapspace(11): failed
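When a build dies like this because swap is genuinely exhausted, a common mitigation is adding a swap file; a sketch of a FreeBSD md(4)-backed swap file entry (the path and size are illustrative, not from the thread):

```
# Create and protect the backing file first, e.g.:
#   truncate -s 4G /usr/swap0
#   chmod 0600 /usr/swap0
#
# /etc/fstab -- md(4)-backed swap file, activated with `swapon -aL`
md99  none  swap  sw,file=/usr/swap0,late  0  0
```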
From: Yasuhiro KIMURA
Subject: After update to r357104 build of poudriere jail fails with 'out of
swap space'
Date: Sat, 25 Jan 2020 23:43:28 +0900 (JST)

> I use VirtualBox to run 13-CURRENT. Host is 64bit Windows 10 1909 and
> the spec of the VM is as follows.
>
> * 4 CPU
On January 28, 2020 2:43:51 PM PST, Mark Millard wrote:

FYI: kib has a patch out for review to fix the
head -r357026 change to OOM behavior in
the vm_pageout_oom(VM_OOM_MEM_PF) context:

https://reviews.freebsd.org/D23409

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)
… for reporting what settings you were using.

I've been able to reproduce the problem at $JOB in a Virtualbox VM with 1
vCPU, 1.5 GB vRAM, and 2 GB swap building graphics/graphviz: cc killed out of
swap space. The killed cc had an address space o…
… first kill does not report a backtrace spanning the
code choosing to do the kill (or otherwise report the type of issue
leading to the kill).

Yours is consistent with the small arm board folks recently reporting
that contexts that were doing buildworld and the like fine under …
On 2020-Jan-27, at 12:48, Cy Schubert wrote:
> In message, Mark Millard writes:
>> On 2020-Jan-27, at 10:20, Cy Schubert wrote:
>>> . . .
On 2020-Jan-27, at 10:20, Cy Schubert wrote:
> On January 27, 2020 5:09:06 AM PST, Cy Schubert wrote:
>>> . . .
>>
>> Setting a lower arc_max at boot is unlikely to help. Rust was building
>> on the 8 GB and 5 GB 4 core machines last night. It completed successfully
>> on the 8 G…
On January 27, 2020 10:19:50 AM PST, "Rodney W. Grimes" wrote:
>> In message <202001261745.00qhjkuw044...@gndrsh.dnsmgr.net>, "Rodney W.
>> Grimes" writes:
>> > In message <20200125233116.ga49...@troutmask.apl.washington.edu>,
>> > Steve Kargl writes:
>> > > . . .
On Sat, Jan 25, 2020 at 03:59:06PM -0800, Cy Schubert wrote:
> A rule of thumb would probably be, have ~ 2 GB RAM for every core or
> thread when doing large parallel builds.

This is a rule of thumb that I have used for quite some time.
Perhaps less is necessary for certain tasks, but if you want …
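The ~2 GB per core/thread rule above can be enforced from the poudriere side by capping build parallelism; a hedged sketch of the relevant poudriere.conf knobs (values illustrative for a 4-core, 8 GB machine, not taken from the thread):

```
# /usr/local/etc/poudriere.conf -- cap concurrency to fit RAM
PARALLEL_JOBS=4        # number of ports built concurrently
ALLOW_MAKE_JOBS=no     # keep each port's own build single-threaded
# If ALLOW_MAKE_JOBS=yes, cap per-port parallelism instead:
# MAKE_JOBS_NUMBER=2
```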
On January 25, 2020 1:52:03 PM PST, Steve Kargl wrote:
>On Sat, Jan 25, 2020 at 01:41:16PM -0800, Cy Schubert wrote:
>>
>> It's not just poudriere. Standard port builds of chromium, rust
>> and thunderbird also fail on my machines with less than 8 GB.
>>
>
>Interesting. I routinely build chromi…
On Sat, Jan 25, 2020 at 01:41:16PM -0800, Cy Schubert wrote:
>
> It's not just poudriere. Standard port builds of chromium, rust
> and thunderbird also fail on my machines with less than 8 GB.
>
Interesting. I routinely build chromium, rust, firefox,
llvm and a few other resource-hungry ports on a …
> Jan 25 19:18:25 rolling-vm-freebsd1 kernel: pid 7963 (strip), jid 0, uid 0,
> was killed: out of swap space

This message text's detailed wording is a misnomer.
Do you also have any messages of the form:

. . . sentinel kernel: swap_pager_getswapspace(32): failed

If yes: you really were out of swap space.
If no: you were not out of …
I updated the poudriere jail with `poudriere jail -u -j jailname -b`. But it
failed at the install stage. After the failure I found the following message
written to syslog.

Jan 25 19:18:25 rolling-vm-freebsd1 kernel: pid 7963 (strip), jid 0, uid 0, was
killed: out of swap space

To make sure, I shut down both VM and host, restarted them, and tried the
update of the jail …
On Thursday, June 13, 2019, at 04:38:25 PM +0800, Martin Wilke wrote:
> On 2019-06-11 14:59, Matthias Apitz wrote:
> > . . .
Hello,

I'm building the latest ports svn checkout (June 10) on r342378 with
poudriere and (after isolating it) have the problem that building
x11-wm/plasma5-kwin reproducibly runs out of swap during the 'configure'
phase:

[00:00:49] Cleaning the build queue
[00:00:49] Sanity checking build queue
DEBUG: ARGV=[distextract] GETOPTS_STDARGS=[dD:]
DEBUG: f_debug_init: debug=[1] debugFile=[/tmp/bsdinstall_log]
DEBUG: Running installation step: distextract
Killed

Last message from /var/log/messages:

Nov 3 20:02:9 kernel: pid 967 (distextract), uid 0, was killed: out
of swap space

My VM has 2 gigs of memory, vmstat tells that I have ~537M free
(swapinfo tells nothing). I dunno is it a bug or I'm doing …
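As a quick sanity check for cases like the one above, swap headroom can be totaled from swapinfo(8)-style output; a rough sketch (the sample line is illustrative, not the poster's VM; on FreeBSD you would pipe `swapinfo -k` into the awk program instead):

```shell
#!/bin/sh
# Sketch: summarize swap usage from swapinfo(8)-style -k output.
# The embedded sample shows a fully exhausted 2 GB swap device.
printf '%s\n' \
  'Device          1K-blocks     Used    Avail Capacity' \
  '/dev/ada0p3       2097152  2097152        0   100%' |
awk 'NR > 1 { used += $3; avail += $4 }      # skip the header line
     END { printf "swap used: %d KiB, free: %d KiB\n", used, avail }'
```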
On Thursday, September 20, 2012 4:17:52 am Anton Shterenlikht wrote:
> Is it expected that running out of swap space
> causes a panic?
>
> This is on ia64 r235474.
>
> Mark, I'll add another swap disk,
> so there will be ~85 GB of swap space.
> Will let you know when done.

- - - - - - - - - - Prior Console Output - - - - - - - - - -
swap_pager_getswapspace(16): failed
Sep …
On Wed, May 30, 2012 at 2:11 PM, HIROSHI OOTA wrote:

Hi all,
my PC Engines WRAP (NanoBSD, i386, 128 MB memory, no swap) won't start
after updating to r234569.
Some of the daemons were killed with the message 'out of swap space'.

vmstat in single user mode:

--- r234568 (works fine)
# uname -a
FreeBSD 10.0-CURRENT Fre…