On Fri, 20 Dec 2002, Matthew Dillon wrote:
> :Hi,
> :
> :It seems that kern/32672 is not fixed yet on FreeBSD 4.5-STABLE.
> :
> :System: 4GB RAM, 4x700MHz CPUs
> :
> :When the system is not using all RAM, the FFS node memory grows up to a
> :limit of 102400K, which leads to the system deadlocking.
Well, there was some further work done to the vnode reclamation
> To: Dmitry Morozovsky <[EMAIL PROTECTED]>,
> David Schultz <[EMAIL PROTECTED]>,
> Terry Lambert <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
> Subject: Re: maxusers and random system freezes
>
> On 16:51+0300, Dec 19, 2002, Varshavchick Alexander wrote:
>
On 16:51+0300, Dec 19, 2002, Varshavchick Alexander wrote:
> On Thu, 19 Dec 2002, Maxim Konovalov wrote:
[...]
> > [ Trim -questions ]
> >
> > On 16:21+0300, Dec 19, 2002, Varshavchick Alexander wrote:
> >
> > > There seem to be archive posts already on the subject, the most
> > > informative o
> > To: Dmitry Morozovsky <[EMAIL PROTECTED]>
> > Cc: David Schultz <[EMAIL PROTECTED]>,
> > Terry Lambert <[EMAIL PROTECTED]>, [EMAIL PROTECTED],
> > [EMAIL PROTECTED]
> > Subject: Re: maxusers and random system freezes
> >
> > Hi,
> >
> > Despite the increased KVA space (2G now) and the perfect patch of the
> > pthreads mechanism made
> To: Varshavchick Alexander <[EMAIL PROTECTED]>
> Cc: David Schultz <[EMAIL PROTECTED]>,
> Terry Lambert <[EMAIL PROTECTED]>,
> <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
> Subject: Re: maxusers and random system freezes
>
> On Mon, 9 Dec 2002, Varshavchick Alexander wrote:
Nate Lawson wrote:
> On Wed, 4 Dec 2002, Terry Lambert wrote:
> > useful documentation; otherwise, I would have published what I
> > wrote in Pentad Embedded Systems Journal already (example: the
>^^^
>
> I appreciate some of the info you give. But every tim
On Mon, 9 Dec 2002, Varshavchick Alexander wrote:
VA> the server went to a swap, because it occurs practically instantly, and
VA> this state goes on for hours. The system is lacking some resources, or
VA> maybe there is a bug somewhere; can you give any hints?
Hmm, what about logging vmstat/pstat/netstat
> From: David Schultz <[EMAIL PROTECTED]>
> To: Varshavchick Alexander <[EMAIL PROTECTED]>
> Cc: Terry Lambert <[EMAIL PROTECTED]>, [EMAIL PROTECTED],
> [EMAIL PROTECTED]
> Subject: Re: maxusers and random system freezes
>
> Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
Thus spake Gary Thorpe <[EMAIL PROTECTED]>:
> I have a question: does the entire KVA *have* to be mapped into
> each process's address space? How much of the KVA does a process need
> to communicate with the kernel effectively?
No, it doesn't have to be that way. An alternative organization i
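To put numbers on the split being asked about: on i386 each KVA_PAGES unit covers 4MB, and whatever the kernel reserves comes out of every process's 4G. A minimal sketch, consistent with the 256-pages-is-1G figures quoted later in the thread:
  kva_pages=256    # the 4.x default; 512 gives the 2G KVA tried below
  echo "$((kva_pages * 4))MB kernel VA, $((4096 - kva_pages * 4))MB left for user space"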
On Fri, 6 Dec 2002, David Schultz wrote:
...
> > Yes, this makes sense; however, this call to pthread_create didn't specify
> > any special addresses for the new thread. pthread_create was called
> > with a NULL attribute, which means that the system defaults were being
> > used. Something in t
Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> Well, now I made KVA space 2G; we'll see later on if it helps to get rid
> of the sudden system halts, but for some reason a side-effect has
> appeared: the pthread_create function returns an EAGAIN error now, so I had
> to recompile the software us
Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> Thank you David for such an excellent explanation. So if sysctl reports
>
> vm.zone_kmem_pages: 5413
> vm.zone_kmem_kvaspace: 218808320
> vm.kvm_size: 1065353216
> vm.kvm_free: 58720256
>
> does it mean that the total KVA reservation is 1065353216 bytes (1G) and
> almost all of it is really mapped to physical memory bec
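A quick way to read those numbers back (a sketch; the figures are just the ones quoted above):
  sysctl vm.kvm_size vm.kvm_free
  # vm.kvm_size: 1065353216 -> 1G of kernel virtual address space reserved
  # vm.kvm_free:   58720256 -> only 56M of that range still unused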
On Thu, 5 Dec 2002, Terry Lambert wrote:
...
> > Are you talking primarily about the SHMMAXPGS=262144 option here? Then
> > maybe it'll be overall better to reduce it and make KVA space 2G, to leave
> > more room for user address space?
>
> That's the one I was referring to, yes, but you didn't post
On Thu, 5 Dec 2002, David Schultz wrote:
> In FreeBSD, each process has a unique 4G virtual address space
> associated with it. Not every virtual page in every address space
> has to be associated with real memory. Most pages can be pushed
> out to disk when there isn't enough free RAM, and una
"Ronald G. Minnich" wrote:
> On Thu, 5 Dec 2002, David Schultz wrote:
>
> > Linux used to do that, but AFAIK it doesn't anymore.
>
> Linux puts kvm at 0xc000, kernel at physical 0x10, etc. There
> was a time when you could address all of physical memory just by
> direct-mapping the PT
On Thu, 5 Dec 2002, David Schultz wrote:
> Linux used to do that, but AFAIK it doesn't anymore.
Linux puts kvm at 0xc000, kernel at physical 0x10, etc. There
was a time when you could address all of physical memory just by
direct-mapping the PTEs, since base of 0xc000 means KVM sp
Thus spake Gary Thorpe <[EMAIL PROTECTED]>:
> As far as I know, Linux maps all the memory in the machine into the
> kernel address space, so there is never a problem of it running out
> while there is free memory (if you run out of it, there isn't any at
> all left in the machine). It also permits
Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> A question arises. The value 256 (1G KVA space) acts as a default for any
> system installation, regardless of the real physical memory size. So for
> any server with RAM less than 2G (which is the majority, I presume) the KVA
> space occupies mo
On Thu, 5 Dec 2002, Terry Lambert wrote:
> IMO, KVA needs to be more than half of physical memory. But I tend
> to use a lot of mbufs and mbuf clusters in the products I work on lately
> (mostly networking stuff). If you don't tune kernel memory usage up,
> then you may be able to get away with 2G.
Thus spake Terry Lambert <[EMAIL PROTECTED]>:
> As a rule, swap should be at least physical memory size + 64K on
> any system that you need to be able to get a system dump from,
> since it needs to dump physical RAM. If you are not worried about
> the machine falling over, then you can ignore that
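Applied to the 4G machine at the top of this thread, that rule works out as follows (illustrative arithmetic only):
  ram_kb=$((4 * 1024 * 1024))             # 4G of physical RAM, in KB
  echo "minimum dump-capable swap: $((ram_kb + 64))K"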
Varshavchick Alexander wrote:
> > So: 2G might be OK, 3G would be more certain, given you are cranking
> > some things up, in the config you posted, that make me think you will
> > be eating more physical memory.
>
> Are you talking primarily about the SHMMAXPGS=262144 option here? Then
> maybe it'll
On Thu, 5 Dec 2002, Terry Lambert wrote:
...
> > Because it's not defined in the custom
> > server's kernel, its value defaults to 256 (FreeBSD 4.5-STABLE), which
> > makes the KVA space occupy 1G. Then if I make KVA_PAGES=512 (KVA space
> > 2G), will it solve the problem for this particul
On Wed, 4 Dec 2002, Terry Lambert wrote:
> grep -B 7 KVA_ /sys/i386/conf/LINT
>
> -- Terry
>
Thanks a lot Terry, and will you please correct me if I'm wrong, so I
don't mess anything up on a production server? The kernel option in
question is KVA_PAGES, correct? Because it's not defined in the custom
server's kernel, its value defaults to 256
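Concretely, the change under discussion is a one-line kernel config option; a sketch for a 4.x i386 config, with an illustrative file name:
  # /sys/i386/conf/MYKERNEL
  options  KVA_PAGES=512   # 512 x 4MB = 2G of KVA; the implicit default 256 = 1G
followed by the usual config(8), make depend, make, make install cycle and a reboot.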
Marc Recht wrote:
> Every now and then I hear people saying (mostly you :)) that some problems
> are KVA related or that the KVA must be increased. This makes me a bit
> curious, since I've never seen problems like that on Linux. It sounds to
> me, the not-kernel-hacker, a bit like something which
Varshavchick Alexander wrote:
> > With these settings, and that much physical RAM, you should set
> > your KVA space to 3G (the default is 2G); have you?
> >
> > Most likely, you are running out of KVA space for mappings.
>
> No, I didn't do it, and I'm not sure how to perform it, can you please
>
Hi people,
Can it be so that the kernel maxusers=768 value, being more than 512, leads to
spontaneous system freezes which can last up to several hours, when the
system is just sleeping (only replying to ping) and doing nothing else,
not allowing telnet or anything? The system is 4.5-STABLE with much
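For reference, the setting in question is a compile-time line in the 4.x kernel config (the value is the poster's; what it scales is summarized in the comment):
  maxusers 768   # scales maxproc, open files, mbuf limits, etc. at build time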
On Wed, 8 Nov 2000, Len Conrad wrote:
> Sorry to bother you hackers, but -questions isn't responding, and the
> handbook and Complete/Lehey don't, afaics, cover this situation
> explicitly. I can't really afford to screw up this production
> machine and start over from fresh disk, nor futz
Ian Dowse wrote:
>
> I think a few slots are reserved, so you can consider 1050 as being
> equal to 1064. Try putting
>
> set kern.ipc.maxsockets=4000
>
> in /boot/loader.rc and rebooting.
Eeee!
kern.ipc.maxsockets="4000" in /boot/loader.conf instead, please!
--
Daniel C. Sobral
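In other words, the tunable belongs in /boot/loader.conf; a minimal way to double-check after rebooting:
  grep maxsockets /boot/loader.conf   # expect: kern.ipc.maxsockets="4000"
  sysctl kern.ipc.maxsockets          # should report 4000 once rebooted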
On Wed, 8 Nov 2000, Len Conrad wrote:
> All I need to change, I think, is maxusers since we're getting this
> error from postfix:
>
> Nov 8 04:59:41 postfix/qmgr[16383]: fatal: socket: No buffer space available
> Nov 8 04:59:41 postfix/smtp[16872]: fatal: socket: No buffer space available
Lyndon Nerenberg writes:
> FWIW I run our NFS server with NMBCLUSTERS=1. It doesn't burn that
> much additional memory.
As an additional data point, I had an NFS server that regularly
crashed when it ran out; logs showed that it needed up to 1700
(against the default of 1024). I bumped it t
On Wed, 8 Nov 2000, Mike Silbersack wrote:
> I think you can up the mbuf related settings while the system is
> running. Give it a try. The two sysctls you'll want to fiddle with are:
>
> kern.ipc.nmbclusters
> kern.ipc.nmbufs
Nope.
These are read-only but can be tuned from the loader.
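So on 4.x the workable route is to set them before the kernel initializes, e.g. (a sketch; the value is illustrative, not from the posts):
  # /boot/loader.conf -- read-only sysctls become boot-time tunables here
  kern.ipc.nmbclusters="8192"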
scanner> I think that is why they are saying don't just jack up
scanner> MAXUSERS. Use NMBCLUSTERS= instead, because that
scanner> is usually the variable you want increased, not the other
scanner> parameters MAXUSERS increases.
FWIW I run our NFS server with NMBCL
>
> Is it possible to make the tuning of nmbclusters available after
> the kernel is loaded, so that you don't have to reboot a server to get
> the loader's changes to take effect?
Nope.
>
> > when maxusers was above 256, but that hasn't been an issue fo
> when maxusers was above 256, but that hasn't been an issue for quite
> some time.
So one could go as high as... 512? 1024? There must still be
drawbacks at some number where y
al. Can the /stand/sysinstall
kernel! :)
> post-config option be used to put on all the developer source pkg
> without bothering the current config? which choice (I don't want X,
> just enough to build a custom kernal)
>
> It's in production as a 200 K msgs/day mail hub.
> > kern.ipc.nmbclusters
> > kern.ipc.nmbufs
>
> Nope. Those are read only at least on my 4.2-BETA kernel.
read-only also in 4.1
# sysctl -w kern.ipc.nmbclusters=2048
sysctl: oid 'kern.ipc.nmbclusters' is read only
# sysctl -w kern.ipc.nmbufs=8192
sysctl: oid 'kern.ipc.nmbufs' is read only
I'll have to reboot,
>You can determine which is needed more through a quick netstat -m.
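For example (the flag is the one mentioned above; exact output wording varies by release):
  netstat -m   # compare "mbuf clusters in use (current/peak/max)" against the max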
In message <[EMAIL PROTECTED]>, Len Conrad writes:
># vmstat -z
...
>socket 607 1050 113/196K
...
>kern.ipc.maxsockets: 1064
>doesn't look like it to me.
I think a few slots are reserved, so you can consider 1050 as being
equal to 1064. Try putting
set kern.ipc.maxsockets=4000
in /boot/loader.rc and rebooting.
On Wed, 8 Nov 2000, Mike Silbersack wrote:
> > The machine can get up to 200 SMTP processes and 50 SMTPD processes
> > simultaneously, 256 MB RAM.
> >
> > Increasing maxusers will fix this problem? afaic, maxusers can't be changed
> > with sysctl.
>
> I think you can up the mbuf related settings while the system is running.
>In message <[EMAIL PROTECTED]>, Len Conrad writes:
>
> >All I need to change, I think, is maxusers since we're getting this
> >error from postfix:
>
>You may be able to increase these limits without recompiling the
>kernel, by using kernel
It's in production as a 200 K msgs/day mail hub.
All I need to change, I think, is maxusers, since we're getting this
error from postfix:
Nov 8 04:59:41 postfix/qmgr[16383]: fatal: socket: No buffer space available
> i'm using zebra to do full BGP routing with 2 peers.
>
> netstat -rn shows some 75,000 routes.
>
> i've got:
> maxusers 32
> options NMBCLUSTERS=1
>
> vmstat -m shows:
> routetbl 154337 21118K 21118K 21118K 2377250 0 16,32,64,128,256
On Mon, Apr 03, 2000 at 04:18:37PM +0100, Koster, K.J. wrote:
> From the original post I understood that the problem is that not all
> physical RAM is detected. Is FreeBSD seeing all of the 128 MB, or only 80
> MB?
i think it is seeing all of it:
FreeBSD 3.4-STABLE #4: Mon Apr 3 01:25:50 E
On Mon, Apr 03, 2000 at 03:06:27PM +0100, Tony Finch wrote:
> Jim Mercer <[EMAIL PROTECTED]> wrote:
> >
> >how do i increase the amount of RAM for the kernel?
>
> http://www.freebsd.org/FAQ/hackers.html#AEN4204
geez, that one looks a bit scary.
since 4.x has 1GB of address space, would moving f
Jim Mercer <[EMAIL PROTECTED]> wrote:
>
>how do i increase the amount of RAM for the kernel?
http://www.freebsd.org/FAQ/hackers.html#AEN4204
>i thought NMBCLUSTERS was the one, but i guess not.
That's just for network buffers.
Tony.
i'm using zebra to do full BGP routing with 2 peers.
netstat -rn shows some 75,000 routes.
i've got:
maxusers 32
options NMBCLUSTERS=1
vmstat -m shows:
routetbl 154337 21118K 21118K 21118K 2377250 0 16,32,64,128,256
Memory Totals:  In Use    Free    Requests
                21842K    47K     249883
how do i increase the amount of RAM for the kernel?
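To keep an eye on that one allocation type as the table grows, something like (a sketch):
  vmstat -m | grep routetbl   # InUse/MemUse/HighUse for the routing-table malloc type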
W. Rohrbach" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, March 02, 2000 9:51 AM
Subject: Re: MAXUSERS question, what is max MAXUSERS setting?
> > i just wondered what the maximum MAXUSERS setting for a 3.4 kernel would
> > be on a smp system with 512mb ram..
> i just wondered what the maximum MAXUSERS setting for a 3.4 kernel would
> be on a smp system with 512mb ram... the impact on the system structures
> seems to be very... errrhh... rather complex.
>
> any ideas? it gives me a warning if i got past 512, but what will happen
&g
hiya folks
i just wondered what the maximum MAXUSERS setting for a 3.4 kernel would
be on an smp system with 512mb ram... the impact on the system structures
seems to be very... errrhh... rather complex.
any ideas? it gives me a warning if i go past 512, but what will happen
then?
/k
What's the danger in changing the NPROC define in param.c to be ifdef'd,
and an option in the kernel config file?
it looks like the only way to change the # of allowed processes is via
changing maxusers, but is that only by convention? Or is there
some other reason I can
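For context, the process limit being asked about is derived from maxusers in /sys/conf/param.c; from memory of the 4.x sources the scaling is roughly the following, worth verifying against your tree:
  # NPROC = 20 + 16 * maxusers
  echo $((20 + 16 * 768))   # maxusers=768 -> roughly 12308 processes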
What is the maximum number that MAXUSERS can currently be set to,
in the following environments:
3.2-STABLE
4.0-CURRENT
Also, what is the limiting factor for this setting? MAXFILES?
maxproc?
Regards,
Greg