--On 30 October 2012 19:51 +0200 Konstantin Belousov
wrote:
I suggest taking a look at where the actual memory goes.
Start with procstat -v.
Ok, running that for the milter PID, I seem to be able to see smallish
chunks used for things like 'libmilter.so', and 'libthr.so' / 'libc.so'
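For reference, the mappings procstat -v reports can also be walked programmatically through libprocstat(3); below is a minimal sketch assuming the 9.x-era API (PID taken as an argument, most error handling trimmed, and it needs enough privilege to inspect the target process):

/*
 * Minimal libprocstat(3) sketch: walk a process's VM map, roughly what
 * "procstat -v <pid>" prints.  Link with -lprocstat (its libkvm/libutil/
 * libelf dependencies come in as shared-library dependencies).
 */
#include <sys/param.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <sys/user.h>
#include <libprocstat.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    struct procstat *ps;
    struct kinfo_proc *kp;
    struct kinfo_vmentry *vm;
    unsigned int nprocs, nentries, i;

    if (argc != 2)
        return (1);
    ps = procstat_open_sysctl();
    if (ps == NULL)
        return (1);
    kp = procstat_getprocs(ps, KERN_PROC_PID, atoi(argv[1]), &nprocs);
    if (kp == NULL || nprocs != 1)
        return (1);
    vm = procstat_getvmmap(ps, kp, &nentries);
    if (vm != NULL) {
        for (i = 0; i < nentries; i++)
            printf("%#llx-%#llx resident %d pages %s\n",
                (unsigned long long)vm[i].kve_start,
                (unsigned long long)vm[i].kve_end,
                vm[i].kve_resident, vm[i].kve_path);
        procstat_freevmmap(ps, vm);
    }
    procstat_freeprocs(ps, kp);
    procstat_close(ps);
    return (0);
}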
Hi!
This question is about the Inactive queue and the swap layer in the FreeBSD VM
management system. As a test, I ran dd (to push the UFS cache into the Inactive
queue), and I get this:
1132580 wire
896796 act
5583964 inact
281852 cache
112252 free
836960 buf
in swap: 20M
That is good. Now let's start running a program like:
ty
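For what it's worth, the queue sizes quoted above can be sampled programmatically as well; here is a rough sketch using the vm.stats.vm.* sysctls (counter names as of this era, output converted to kilobytes):

/*
 * Rough sketch: print the VM page queues (in kilobytes) from the
 * vm.stats.vm.* sysctls -- the same wire/act/inact/cache/free figures
 * shown above.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <unistd.h>

static unsigned int
page_count(const char *name)
{
    unsigned int val = 0;
    size_t len = sizeof(val);

    if (sysctlbyname(name, &val, &len, NULL, 0) == -1)
        return (0);
    return (val);
}

int
main(void)
{
    static const char *queues[] = {
        "vm.stats.vm.v_wire_count",
        "vm.stats.vm.v_active_count",
        "vm.stats.vm.v_inactive_count",
        "vm.stats.vm.v_cache_count",
        "vm.stats.vm.v_free_count",
    };
    unsigned long pagekb = (unsigned long)getpagesize() / 1024;
    unsigned int i;

    for (i = 0; i < sizeof(queues) / sizeof(queues[0]); i++)
        printf("%-30s %lu KB\n", queues[i],
            (unsigned long)page_count(queues[i]) * pagekb);
    return (0);
}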
On Wed, Oct 31, 2012 at 09:49:21AM +, Karl Pielorz wrote:
>
> --On 30 October 2012 19:51 +0200 Konstantin Belousov
> wrote:
>
> > I suggest taking a look at where the actual memory goes.
> >
> > Start with procstat -v.
>
> Ok, running that for the milter PID, I seem to be able to see s
--On 31 October 2012 16:06 +0200 Konstantin Belousov
wrote:
Since you neglected to provide the verbatim output of procstat, nothing
conclusive can be said. Obviously, you can make an investigation on your
own.
Sorry - when I ran it this morning the output was several hundred lines - I
di
On Wed, Oct 31, 2012 at 02:44:05PM +, Karl Pielorz wrote:
>
>
> --On 31 October 2012 16:06 +0200 Konstantin Belousov
> wrote:
>
> > Since you neglected to provide the verbatim output of procstat, nothing
> > conclusive can be said. Obviously, you can make an investigation on your
> > own.
In the last episode (Oct 31), Karl Pielorz said:
> --On 31 October 2012 16:06 +0200 Konstantin Belousov
> wrote:
> > Since you neglected to provide the verbatim output of procstat, nothing
> > conclusive can be said. Obviously, you can make an investigation on
> > your own.
>
> Sorry - when I r
.. isn't the default thread stack size now really quite large?
Like one megabyte large?
adrian
On Wed, 2012-10-31 at 10:55 -0700, Adrian Chadd wrote:
> .. isn't the default thread stack size now really quite large?
>
> Like one megabyte large?
That would explain a larger VSZ but the original post mentions that both
virtual and resident sizes have grown by almost an order of magnitude.
I
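For anyone who wants to check the stack-size theory, the pthread_attr_*(3) calls expose it directly; a minimal sketch follows (the printed value is whatever the threading library puts in a default attribute, and the 512K figure is only an example, not a recommendation):

/*
 * Minimal sketch: report the library's default thread stack size and
 * start one thread with an explicitly smaller stack.
 * Build with: cc stacksz.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

static void *
worker(void *arg)
{
    return (arg);
}

int
main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    size_t stacksize = 0;

    pthread_attr_init(&attr);
    pthread_attr_getstacksize(&attr, &stacksize);
    printf("default stack size: %zu bytes\n", stacksize);

    /*
     * A smaller stack shrinks VSZ only; untouched stack pages are never
     * faulted in, so RSS is unaffected either way.
     */
    pthread_attr_setstacksize(&attr, 512 * 1024);
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return (0);
}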
On 31 October 2012 11:20, Ian Lepore wrote:
> I think there are some things we should be investigating about the
> growth of memory usage. I just noticed this:
>
> FreeBSD 6.2 on an ARM processor:
>
> 369 root 1 8 -88 1752K 748K nanslp 3:00 0.00% watchdogd
>
> FreeBSD 10.0 on the same
On Wed, Oct 31, 2012 at 11:52:06AM -0700, Adrian Chadd wrote:
> On 31 October 2012 11:20, Ian Lepore wrote:
> > I think there are some things we should be investigating about the
> > growth of memory usage. I just noticed this:
> >
> > FreeBSD 6.2 on an ARM processor:
> >
> > 369 root 1 8 -8
It seems like the new compiler likes to get up to ~200+MB resident when
building some basic things in our tree.
Unfortunately this causes smaller machines (VMs) to take days because of
swap thrashing.
Doesn't our make(1) have some stuff to mitigate this? I would expect it
to be a bit smarte
On Wed, Oct 31, 2012 at 12:58 PM, Alfred Perlstein wrote:
> It seems like the new compiler likes to get up to ~200+MB resident when
> building some basic things in our tree.
>
> Unfortunately this causes smaller machines (VMs) to take days because of
> swap thrashing.
>
> Doesn't our make(1) have
On Wed, Oct 31, 2012 at 12:06 PM, Konstantin Belousov
wrote:
...
> If not wired, swapout might cause a delay of the next pat, leading to
> panic.
Yes. We need to write microbenchmarks and do more careful analysis to
figure out where and why things have grown. Maybe a mock daemon and
application
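A trivial starting point for such a microbenchmark is to fork the program under test and read the rusage that wait4(2) hands back; a sketch follows (the child command is only a placeholder):

/*
 * Tiny microbenchmark sketch: run a child command and report its peak
 * RSS and major-fault count from the rusage returned by wait4(2).
 * ru_maxrss is in kilobytes.
 */
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    struct rusage ru;
    int status;
    pid_t pid;

    pid = fork();
    if (pid == -1)
        return (1);
    if (pid == 0) {
        execlp("cc", "cc", "--version", (char *)NULL);
        _exit(127);
    }
    if (wait4(pid, &status, 0, &ru) == -1)
        return (1);
    printf("peak RSS %ld KB, major faults %ld\n",
        ru.ru_maxrss, ru.ru_majflt);
    return (0);
}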
On 31 October 2012 12:06, Konstantin Belousov wrote:
> Watchdogd was recently changed to mlock its memory. This is the cause
> of the RSS increase.
>
> If not wired, swapout might cause a delay of the next pat, leading to
> panic.
Right, but look at the virtual size of the 6.4 process. It's not
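For context, the wiring Konstantin refers to is the usual mlockall(2) pattern for a daemon whose timing must never wait on a swap-in; a minimal sketch follows (not the actual watchdogd code):

/*
 * Minimal sketch of the wiring pattern described above -- not watchdogd's
 * actual code.  MCL_CURRENT wires everything mapped now, MCL_FUTURE covers
 * later mappings, so the pat loop can never stall on a swap-in.  The cost
 * is exactly the RSS increase being discussed.
 */
#include <sys/mman.h>
#include <err.h>
#include <unistd.h>

int
main(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
        err(1, "mlockall");

    for (;;) {
        /* pat_the_watchdog();  -- hypothetical placeholder */
        sleep(10);
    }
}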
On 2012-Oct-31 12:58:18 -0700, Alfred Perlstein wrote:
>It seems like the new compiler likes to get up to ~200+MB resident when
>building some basic things in our tree.
The killer I found was the ctfmerge(1) on the kernel - which exceeds
~400MB on i386. Under low RAM, that fails _without_ repor
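As an aside on the 'fails without reporting' part, the usual cure is a checked-allocation wrapper so an out-of-memory death at least says why; a trivial sketch follows (purely illustrative, not ctfmerge's code):

/*
 * Trivial sketch of a checked-allocation wrapper -- purely illustrative,
 * not ctfmerge's code.  An allocation failure under memory pressure then
 * reports itself instead of the tool dying silently.
 */
#include <err.h>
#include <stdint.h>
#include <stdlib.h>

static void *
xmalloc(size_t len)
{
    void *p;

    if ((p = malloc(len)) == NULL)
        err(1, "malloc(%zu)", len);
    return (p);
}

int
main(void)
{
    /* Absurd size just to demonstrate the error path. */
    void *p = xmalloc(SIZE_MAX / 2);

    free(p);
    return (0);
}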
On 31 October 2012 13:41, Peter Jeremy wrote:
> Another, more involved, approach would be for the scheduler to manage
> groups of processes - if a group of processes is causing memory
> pressure as a whole then the scheduler just stops scheduling some of
> them until the pressure reduces (effecti
On 10/31/12 1:41 PM, Peter Jeremy wrote:
> On 2012-Oct-31 12:58:18 -0700, Alfred Perlstein wrote:
> > It seems like the new compiler likes to get up to ~200+MB resident when
> > building some basic things in our tree.
> The killer I found was the ctfmerge(1) on the kernel - which exceeds
> ~400MB on i386.
On Wed, Oct 31, 2012 at 1:44 PM, Adrian Chadd wrote:
> On 31 October 2012 13:41, Peter Jeremy wrote:
>
>> Another, more involved, approach would be for the scheduler to manage
>> groups of processes - if a group of processes is causing memory
>> pressure as a whole then the scheduler just stops s
On 2012-Oct-31 14:21:51 -0700, Alfred Perlstein wrote:
>Ah, but make(1) can delay spawning any new processes when it knows its
>children are paging.
That could work in some cases and may be worth implementing. Where it
won't work is when make(1) initially hits a parallelisable block of
"big" pr
On 10/31/12 3:14 PM, Peter Jeremy wrote:
> On 2012-Oct-31 14:21:51 -0700, Alfred Perlstein wrote:
> > Ah, but make(1) can delay spawning any new processes when it knows its
> > children are paging.
> That could work in some cases and may be worth implementing. Where it
> won't work is when make(1) initiall
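A rough illustration of the 'hold off while children are paging' idea follows, using the major-fault count of reaped children as the signal; this is only a sketch of the heuristic, not make(1)'s code, and the threshold is arbitrary:

/*
 * Rough sketch of the heuristic discussed above: before launching another
 * job, look at how many major page faults already-reaped children have
 * taken since the last check and back off while the number keeps climbing.
 * Only reaped children are counted, which is one of the reasons this is
 * no more than a sketch.
 */
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

#define FAULT_THRESHOLD 100     /* arbitrary: "children are paging" */

static long prev_majflt;

/* A job scheduler would call this just before fork()ing the next job. */
static void
wait_until_children_stop_paging(void)
{
    struct rusage ru;
    long delta;

    for (;;) {
        if (getrusage(RUSAGE_CHILDREN, &ru) == -1)
            return;
        delta = ru.ru_majflt - prev_majflt;
        prev_majflt = ru.ru_majflt;
        if (delta < FAULT_THRESHOLD)
            return;
        sleep(1);               /* children are thrashing; hold off */
    }
}

int
main(void)
{
    wait_until_children_stop_paging();
    return (0);
}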
On 2012/10/31 22:44, Karl Pielorz wrote:
> --On 31 October 2012 16:06 +0200 Konstantin Belousov
> wrote:
> > Since you neglected to provide the verbatim output of procstat, nothing
> > conclusive can be said. Obviously, you can make an investigation on your
> > own.
> Sorry - when I ran it this morning the