Matt Dillon <[EMAIL PROTECTED]> writes:
> So you would be able to create approximately four 17GB swap partitions.
> If you reduce NSWAP to 2 you would be able to create approximately
> two 34GB swap partitions. If you reduce NSWAP to 1 you would be able
> to create approximately one 68GB swap partition.
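The arithmetic behind Matt's numbers, for anyone who wants to play with it:
the total swap the kernel can address appears fixed (about 68GB here,
inferred from the 4 x 17GB case), and NSWAP just decides how it is divided
among devices. A minimal sketch, purely illustrative:

    #include <stdio.h>

    int main(void)
    {
        long total_gb = 68;     /* inferred from 4 devices * 17GB each */
        int nswap;

        for (nswap = 4; nswap >= 1; nswap /= 2)
            printf("NSWAP=%d -> %d swap partition(s) of ~%ldGB each\n",
                   nswap, nswap, total_gb / nswap);
        return 0;
    }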
On Fri, Mar 23, 2001 at 08:11:03PM +0100, Adrian Chadd wrote:
> A while back I started running through the undocumented sysctls and
> documenting them. I didn't get through all of them, and the main reason
> I stopped was that there wasn't a nifty way to extract the sysctls
> short of writing
:(Why is vfs.vmiodirenable=1 not enabled by default?)
:
The only reason it isn't enabled by default is some unresolved
filesystem corruption that occurs very rarely (with or without
it) that Kirk and I are still trying to nail down. I want to get
that figured out first.
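For anyone who wants to flip such a knob from a program rather than via
sysctl(8), a minimal sketch using the documented sysctlbyname(3) interface;
only the variable name comes from this thread, the rest is illustrative
(and writing the value requires root):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int main(void)
    {
        int old, new = 1;
        size_t oldlen = sizeof(old);

        /* Fetch the current value and set it to 1 in one call. */
        if (sysctlbyname("vfs.vmiodirenable", &old, &oldlen,
                         &new, sizeof(new)) == -1) {
            perror("sysctlbyname");
            return 1;
        }
        printf("vfs.vmiodirenable: was %d, now 1\n", old);
        return 0;
    }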
hello there!
On Thu, 22 Mar 2001, Michael C . Wu wrote:
>(Why is vfs.vmiodirenable=1 not enabled by default?)
By the way, is there any all-in-one-place description of sysctl tuneables?
Looking through all the man pages and collecting notes about MIB variables
seems rather tiresome and, I think, pointless.
Just an update on the lovely loaded BBS server.
We made our record-breaking number of users last night.
After implementing the changes suggested and kqueue'ifying
the BBS daemon, we saw a dramatic increase in server capacity.
Top number of users was 4704 users. Serving SSH, HTTP, SMTP, innd, B
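The mail doesn't show the conversion itself, but the kqueue(2)/kevent(2)
pattern such a daemon would move to (replacing select/poll) looks roughly
like this; the descriptor handling and error strategy are illustrative:

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <err.h>

    /* Register fd with an existing kqueue (from kqueue(2)) and
       block until it becomes readable. */
    void wait_readable(int kq, int fd)
    {
        struct kevent change, event;

        EV_SET(&change, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
        if (kevent(kq, &change, 1, NULL, 0, NULL) == -1)
            err(1, "kevent: register");

        /* One kevent() call can wait on thousands of descriptors;
           here we collect a single event for simplicity. */
        if (kevent(kq, NULL, 0, &event, 1, NULL) == -1)
            err(1, "kevent: wait");
        /* event.ident is the ready fd, event.data the bytes pending. */
    }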
On Wed, Mar 21, 2001 at 04:14:32PM -0300, Rik van Riel wrote:
> The (maybe too lightweight) structure I have in my patch
> looks like this:
>
> struct pte_chain {
>         struct pte_chain *next;
>         pte_t *ptep;
> };
>
> Each pte_chain hangs off a page of physical memory and the
> ptep points to one page table entry mapping that page.
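To make the idea concrete, a minimal sketch of how such a chain would be
walked to reach every mapping of a physical page; the pte_chain struct is
from Rik's mail, while pte_t, the page struct, and unmap_pte() are
illustrative stand-ins:

    typedef unsigned long pte_t;      /* stand-in; the real pte_t is arch-specific */

    struct pte_chain {                /* from Rik's mail */
        struct pte_chain *next;
        pte_t *ptep;
    };

    struct page {                     /* hypothetical page descriptor */
        struct pte_chain *pte_chain;  /* head of the page's reverse-map chain */
    };

    static void unmap_pte(pte_t *ptep) { *ptep = 0; }  /* stand-in */

    /* Visit every pte mapping this physical page, e.g. during pageout. */
    static void page_unmap_all(struct page *pg)
    {
        struct pte_chain *pc;

        for (pc = pg->pte_chain; pc != NULL; pc = pc->next)
            unmap_pte(pc->ptep);
    }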
For those interested in this system:
I have put up the kernel profiles at
http://zoo.ee.ntu.edu.tw/~keichii/kernel_profiles/
This is "kgmon -rb; sleep 30; kgmon -hp" run every minute on the server.
On Wed, 21 Mar 2001, Matt Dillon wrote:
> We've looked at those structures quite a bit. DG and I talked about
> it a year or two ago but we came to the conclusion that the extra
> linkages in our pv_entry gave us significant performance benefits
> during rundowns. Since then Tor
* Matt Dillon <[EMAIL PROTECTED]> [010321 10:20] wrote:
>
> :B) Added 3gb of swap on one drive, 1gb of swap on a raid volume
> : another 1gb swap on another raid volume
> :C) enabled vfs.vmiodirenable and kern.ipc.shm_use_phys
:Hey, talking about large amounts of swap, did you know that:
: 4.2-STABLE FreeBSD 4.2-STABLE #1: Sat Feb 10 01:26:41 PST 2001
:has a max swap limit that's possibly 'low':
:
:  b: 159124120    swap      # (Cyl. 0 - 990*)
:  c: 179124120    unused    0
On Wed, 21 Mar 2001, Peter Wemm wrote:
> Also, 4MB = 1024 pages, at 28 bytes per mapping == 28k per process.
28 bytes/mapping is a LOT. I've implemented an (admittedly
not completely architecture-independent) reverse mapping
patch for Linux with an overhead of 8 bytes/pte...
I wonder how hard/
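Checking the arithmetic on both sides, purely as illustration:

    #include <stdio.h>

    int main(void)
    {
        long pages = 4L * 1024 * 1024 / 4096;   /* 4MB of 4KB pages = 1024 */

        printf("pv_entry at 28 bytes:  %ld bytes (%ldk) per process\n",
               pages * 28, pages * 28 / 1024);
        printf("rmap at 8 bytes/pte:   %ld bytes (%ldk) per process\n",
               pages * 8, pages * 8 / 1024);
        return 0;
    }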
:If this is a result of the shared memory, then my sysctl should fix it.
:
:Be aware that it doesn't fix it on the fly! You must drop and recreate
:the shared memory segments.
:
:Better to reboot, actually, and set the variable before any shm is
:allocated.
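A minimal sketch of "set the variable before any shm is allocated",
assuming the sysctl in question is kern.ipc.shm_use_phys (the knob named
earlier in the thread); the segment size is illustrative and the program
must run as root:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <err.h>

    int main(void)
    {
        int one = 1;

        /* Turn on unpageable SHM before any segments exist ... */
        if (sysctlbyname("kern.ipc.shm_use_phys", NULL, NULL,
                         &one, sizeof(one)) == -1)
            err(1, "sysctlbyname");

        /* ... so that segments created afterwards get the new behavior. */
        if (shmget(IPC_PRIVATE, 64 * 1024 * 1024, IPC_CREAT | 0600) == -1)
            err(1, "shmget");
        return 0;
    }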
:Another problem is that we have around 4000+ processes accessing
:lots of SHM at the same time..

How big is 'lots'? If the shared memory segment is smallish, e.g.
less than 64MB, you should be ok. If it is larger, you will
have to do some kernel tuning to avoid running out of pmap entries.
:| How big is 'lots'? If the shared memory segment is smallish, e.g.
:| less than 64MB, you should be ok. If it is larger, you will
:| have to do some kernel tuning to avoid running out of pmap entries.
:
:This is exactly what happens to us sometimes. We run out of pmap entries.
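As far as I recall, the kernel tuning Matt means here was done through the
pv-entry sizing option in the 4.x kernel config; treat the name and numbers
below as an assumption to check against LINT rather than gospel:

    # Assumed 4.x-era knob: raises the pv entries reserved per process
    # sharing pages (the default was around 200).
    options         PMAP_SHPGPERPROC=400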
:How much SHM? Like, what's the combined size of all segments in
:the system? You can make SHM non-pageable, which results in a lot
:of saved memory for attached processes.
:
:You want to be after this date and have this file:
:
:Revision 1.3.2.3
MRTG Graph at
http://zoonews.ee.ntu.edu.tw/mrtg/zoo.html
|
| FreeBSD zoo.ee.ntu.edu.tw 4.2-STABLE FreeBSD 4.2-STABLE
| #0: Tue Mar 20 11:10:46 CST 2001 root@:/usr/src/sys/compile/SimFarm i386
|
| | > system stats at
| | > http://zoo.ee.ntu.edu.tw/~keichii/
| md0/MFS is used for caching t