On 16:51+0300, Dec 19, 2002, Varshavchick Alexander wrote:
> On Thu, 19 Dec 2002, Maxim Konovalov wrote:
[...]
> > [ Trim -questions ]
> >
> > On 16:21+0300, Dec 19, 2002, Varshavchick Alexander wrote:
> >
> > > There seems to be archive posts already on the subject, the most
> > > informative [...]
> Subject: Re: maxusers and random system freezes
>
>
> [ Trim -questions ]
>
> On 16:21+0300, Dec 19, 2002, Varshavchick Alexander wrote:
>
> > There seems to be archive [...]
> >
> > Hi,
> >
> > Despite the increased KVA space (2G now) and the perfect patch of the
> > pthreads mechanism made [...]
Nate Lawson wrote:
> On Wed, 4 Dec 2002, Terry Lambert wrote:
> > useful documentation; otherwise, I would have published what I
> > wrote in Pentad Embedded Systems Journal already (example: the
>^^^
>
> I appreciate some of the info you give. But every time [...]
On Mon, 9 Dec 2002, Varshavchick Alexander wrote:
VA> the server went to swap, because it occurs practically instantly, and
VA> this state goes on for hours. The system is lacking some resources, or
VA> maybe there's a bug somewhere; can you give any hints?
Hmm, what about logging vmstat/pstat/netstat [...]
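The periodic logging suggested here can be sketched in a few lines; the command set, timestamp format, and the injected-runner shape are my assumptions, not anything from the thread.

```python
import time

# Stats worth sampling while waiting for the next freeze (assumed set,
# following the vmstat/pstat/netstat suggestion above).
CMDS = ["vmstat", "pstat -T", "netstat -m"]

def snapshot(run, cmds=CMDS):
    """Build one timestamped sample; `run` executes a command string and
    returns its output (on the server itself: subprocess.getoutput)."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    parts = [stamp]
    for cmd in cmds:
        parts.append("$ " + cmd)
        parts.append(run(cmd))
    return "\n".join(parts)
```

A cron job or a `while sleep 60` loop appending `snapshot(subprocess.getoutput)` to a log file would give a timeline to correlate with the moment of the freeze.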
> From: David Schultz <[EMAIL PROTECTED]>
> To: Varshavchick Alexander <[EMAIL PROTECTED]>
>
> Thus spake Varshavchick Alexander [...]
Thus spake Gary Thorpe <[EMAIL PROTECTED]>:
> I have a question: does the entire KVA *have* to be mapped into
> each process's address space? How much of the KVA does a process need
> to communicate with the kernel effectively?
No, it doesn't have to be that way. An alternative organization [...]
--- David Schultz <[EMAIL PROTECTED]> wrote:
> Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> > Well, now I made KVA space 2G, we'll see later on if it helps to get rid
> > of the sudden system halts, but for some reason a side-effect has
> > appeared: pthread_create function returns EAGAIN [...]
On Fri, 6 Dec 2002, David Schultz wrote:
...
> > Yes this makes sense, however this call to pthread_create didn't specify
> > any special addresses for the new thread. The pthread_create was called
> > with the NULL attribute which means that the system defaults were being
> > used. Something in the [...]
Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> On Fri, 6 Dec 2002, David Schultz wrote:
>
> > Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> > > Well, now I made KVA space 2G, we'll see later on if it helps to get rid
> > > of the sudden system halts, but for some reason a side-effect [...]
On Fri, 6 Dec 2002, David Schultz wrote:
> Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> > Well, now I made KVA space 2G, we'll see later on if it helps to get rid
> > of the sudden system halts, but for some reason a side-effect has
> > appeared: pthread_create function returns EAGAIN [...]
Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> Well, now I made KVA space 2G, we'll see later on if it helps to get rid
> of the sudden system halts, but for some reason a side-effect has
> appeared: pthread_create function returns EAGAIN error now, so I had to
> recompile the software using [...]
On Fri, 6 Dec 2002, David Schultz wrote:
> > vm.zone_kmem_pages: 5413
> > vm.zone_kmem_kvaspace: 218808320
> > vm.kvm_size: 1065353216
> > vm.kvm_free: 58720256
> >
> > does it mean that total KVA reservation is 1065353216 bytes (1G) and
> > almost all of it is really mapped to physical memory because [...]
Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> Thank you David for such an excellent explanation. So if sysctl reports
>
> vm.zone_kmem_pages: 5413
> vm.zone_kmem_kvaspace: 218808320
> vm.kvm_size: 1065353216
> vm.kvm_free: 58720256
>
> does it mean that total KVA reservation is 1065353216 bytes (1G) [...]
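As a quick check of the sysctl values quoted above (the free/used split below is about address space; reserved KVA is not necessarily backed by physical RAM):

```python
# sysctl values quoted in the message above
kvm_size = 1065353216   # vm.kvm_size, bytes
kvm_free = 58720256     # vm.kvm_free, bytes

MiB = 1 << 20
used = kvm_size - kvm_free
print(kvm_size // MiB)           # 1016 MiB: just under a 1G KVA
print(kvm_free // MiB)           # 56 MiB of KVA still unreserved
print(used * 100 // kvm_size)    # 94: ~94% of the KVA already spoken for
```

So the numbers do describe a ~1G KVA that is almost entirely reserved, which is consistent with the out-of-KVA suspicion earlier in the thread.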
On Thu, 5 Dec 2002, Terry Lambert wrote:
...
> > Are you talking primarily about SHMMAXPGS=262144 option here? Then maybe
> > it'll be overall better to reduce it and make KVA space 2G, to leave more
> > room for user address space?
>
> That's the one I was referring to, yes, but you didn't post [...]
On Thu, 5 Dec 2002, David Schultz wrote:
> In FreeBSD, each process has a unique 4G virtual address space
> associated with it. Not every virtual page in every address space
> has to be associated with real memory. Most pages can be pushed
> out to disk when there isn't enough free RAM, and [...]
"Ronald G. Minnich" wrote:
> On Thu, 5 Dec 2002, David Schultz wrote:
>
> > Linux used to do that, but AFAIK it doesn't anymore.
>
> Linux puts kvm at 0xc0000000, kernel at physical 0x100000, etc. There
> was a time when you could address all of physical memory just by
> direct-mapping the PTEs [...]
On Thu, 5 Dec 2002, David Schultz wrote:
> Linux used to do that, but AFAIK it doesn't anymore.
Linux puts kvm at 0xc0000000, kernel at physical 0x100000, etc. There
was a time when you could address all of physical memory just by
direct-mapping the PTEs, since base of 0xc0000000 means KVM space [...]
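The address arithmetic behind that layout works out as follows; the 3G/1G split uses the conventional i386 Linux base of 0xc0000000, common knowledge rather than something stated in this message.

```python
GiB = 1 << 30
PAGE_OFFSET = 0xc0000000              # conventional i386 Linux kernel base

user_va   = PAGE_OFFSET               # user addresses: 0 .. 3G
kernel_va = (1 << 32) - PAGE_OFFSET   # kernel window: the top 1G
print(user_va // GiB, kernel_va // GiB)   # 3 1
```

Only that top window can hold a permanent direct map, and part of it is reserved for vmalloc and friends, so machines with more RAM than roughly 1G are exactly where "Linux used to do that" stops being literally true.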
Thus spake Gary Thorpe <[EMAIL PROTECTED]>:
> As far as I know, Linux maps all the memory in the machine into the
> kernel address space, so there is never a problem of it running out
> while there is free memory (if you run out of it, there isn't any at
> all left in the machine). It also permits [...]
--- Terry Lambert <[EMAIL PROTECTED]> wrote:
> Marc Recht wrote:
> > Every now and then I hear people saying (mostly you :)) that some problems
> > are KVA related or that the KVA must be increased. This makes me a bit
> > curious, since I've never seen problems like that on Linux. It sounds [...]
On Wed, 4 Dec 2002, Terry Lambert wrote:
> Marc Recht wrote:
> > Every now and then I hear people saying (mostly you :)) that some problems
> > are KVA related or that the KVA must be increased. This makes me a bit
> > curious, since I've never seen problems like that on Linux. It sounds to
> > me [...]
On Thu, 5 Dec 2002, Varshavchick Alexander wrote:
> On Thu, 5 Dec 2002, Terry Lambert wrote:
>
> > IMO, KVA needs to be more than half of physical memory. But I tend
> > to use a lot of mbufs and mbuf clusters in products I work on lately
> > (mostly networking stuff). If you don't tune kernel memory usage up [...]
Thus spake Varshavchick Alexander <[EMAIL PROTECTED]>:
> A question arises. The value 256 (1G KVA space) acts as a default for any
> system installation, regardless of real physical memory size. So for
> any server with RAM less than 2G (which is a majority I presume) the KVA
> space occupies more [...]
On Thu, 5 Dec 2002, Terry Lambert wrote:
> IMO, KVA needs to be more than half of physical memory. But I tend
> to use a lot of mbufs and mbuf clusters in products I work on lately
> (mostly networking stuff). If you don't tune kernel memory usage up,
> then you may be able to get away with 2G.
Thus spake Terry Lambert <[EMAIL PROTECTED]>:
> As a rule, swap should be at least physical memory size + 64K on
> any system that you need to be able to get a system dump from,
> since it needs to dump physical RAM. If you are not worried about
> the machine falling over, then you can ignore that [...]
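The rule of thumb quoted above is easy to compute; the 2G RAM figure below is only an example, not a size from the thread.

```python
KiB = 1 << 10
GiB = 1 << 30

def min_dump_swap(phys_bytes):
    """Smallest swap that can still hold a full crash dump:
    physical RAM plus 64K, per the rule quoted above."""
    return phys_bytes + 64 * KiB

print(min_dump_swap(2 * GiB))   # 2147549184 bytes for a 2G machine
```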
Varshavchick Alexander wrote:
> > So: 2G might be OK, 3G would be more certain, given you are cranking
> > some things up, in the config you posted, that make me think you will
> > be eating more physical memory.
>
> Are you talking primarily about SHMMAXPGS=262144 option here? Then maybe
> it'll [...]
On Thu, 5 Dec 2002, Terry Lambert wrote:
...
> > Because it's not defined in the custom
> > server's kernel then its value defaults to 256 (FreeBSD 4.5-STABLE), which
> > makes the KVA space occupy 1G. Then if I make KVA_PAGES=512 (KVA space
> > 2G), will it solve the problem for this particular [...]
Varshavchick Alexander wrote:
> On Wed, 4 Dec 2002, Terry Lambert wrote:
>
> > grep -B 7 KVA_ /sys/i386/conf/LINT
>
> Thanks a lot Terry, and will you please correct me if I'm wrong, so I
> don't mess anything up on a production server? The kernel option in
> question is KVA_PAGES, correct?
On Wed, 4 Dec 2002, Terry Lambert wrote:
> grep -B 7 KVA_ /sys/i386/conf/LINT
>
> -- Terry
>
Thanks a lot Terry, and will you please correct me if I'm wrong, so I
don't mess anything up on a production server? The kernel option in
question is KVA_PAGES, correct? Because it's not defined in the custom
server's kernel [...]
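For reference, on i386 KVA_PAGES is counted in 4 MB page-directory units, which is how 256 comes out as 1G in this thread; the helper below just encodes that arithmetic, and is worth double-checking against LINT for the exact release.

```python
MB = 1 << 20

def kva_bytes(kva_pages):
    """KVA size implied by the i386 KVA_PAGES option: one 4 MB
    page-directory entry per unit (256 -> 1G, 512 -> 2G, 768 -> 3G)."""
    return kva_pages * 4 * MB

for pages in (256, 512, 768):
    print(pages, "->", kva_bytes(pages) // (1 << 30), "G")
```

The corresponding kernel config line would then be e.g. `options KVA_PAGES=512` for a 2G KVA, per the LINT section Terry points at.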
Marc Recht wrote:
> Every now and then I hear people saying (mostly you :)) that some problems
> are KVA related or that the KVA must be increased. This makes me a bit
> curious, since I've never seen problems like that on Linux. It sounds to
> me, not being a kernel hacker, a bit like something which [...]
With these settings, and that much physical RAM, you should set
your KVA space to 3G (the default is 2G); have you?
Most likely, you are running out of KVA space for mappings.
Every now and then I hear people saying (mostly you :)) that some problems
are KVA related or that the KVA must be increased [...]
Varshavchick Alexander wrote:
> > With these settings, and that much physical RAM, you should set
> > your KVA space to 3G (the default is 2G); have you?
> >
> > Most likely, you are running out of KVA space for mappings.
>
> No, I didn't do it, and I'm not sure how to perform it, can you please [...]
On Wed, 4 Dec 2002, Terry Lambert wrote:
> Varshavchick Alexander wrote:
> > Can it be so that kernel maxusers=768 value being more than 512 leads to
> > spontaneous system freezes which can take up to several hours when the
> > system is just sleeping (only replying to ping) and doing nothing else [...]
Varshavchick Alexander wrote:
> Can it be so that kernel maxusers=768 value being more than 512 leads to
> spontaneous system freezes which can take up to several hours when the
> system is just sleeping (only replying to ping) and doing nothing else,
> not allowing to telnet or anything. The system is 4.5-STABLE [...]
Hi people,
Can it be so that kernel maxusers=768 value being more than 512 leads to
spontaneous system freezes which can take up to several hours when the
system is just sleeping (only replying to ping) and doing nothing else,
not allowing to telnet or anything. The system is 4.5-STABLE with much [...]
38 matches