On Mon, Mar 20, 2006 at 08:09:43PM +0100, [EMAIL PROTECTED] wrote:
> 
> >On Sun, Mar 19, 2006 at 10:38:36AM +0100, [EMAIL PROTECTED] wrote:
> >> 
> >> >I didn't try this on a laptop, but here are some numbers from a 2-way
> >> >AMD system running in 64-bit mode showing how much memory gets used by
> >> >the
> >> >kernel in each case.
> >> >
> >> >NCPU    max_ncpus       kernel memory
> >> >64      2               227MB
> >> >21      21              231MB   - stock Nevada bits
> >> >64      32              233MB
> >> >64      64              242MB
> >> 
> >> Wow; that is quite a bit more than I expected.  (The strange "21" number
> >> comes from ancient times, when apparently 21 "sizeof (struct cpu)" fit
> >> on (a multiple of?) a page.)
> >> 
> >> >So if max_ncpus is set to 64, we'd be throwing away ~10MB (or ~5% of
> >> >all memory on a 256MB laptop).  The difference between 21 and 32 is
> >> >much smaller.
> >> 
> >> So what exactly uses this 256KB per CPU?
> >
> >kmem cpu caches are about 20k/cpu (64 bytes/cpu/cache * ~300 caches *
> >max_ncpus)

These have to be pre-allocated; we index directly into the per-CPU structure.
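
For the curious, the quoted figure works out like this; a quick
back-of-the-envelope in C, where the 64-byte and ~300-cache numbers are
the approximations quoted above rather than exact kernel constants:

    #include <stdio.h>

    /* Rough figures quoted above, not exact kernel constants. */
    #define PERCPU_SLOT_BYTES   64      /* per-CPU slot in each kmem cache */
    #define NUM_KMEM_CACHES     300     /* roughly how many kmem caches exist */

    int
    main(void)
    {
            long max_ncpus = 64;
            long per_cpu = (long)PERCPU_SLOT_BYTES * NUM_KMEM_CACHES;

            /* prints: 19200 bytes/cpu, 1228800 bytes total (~19K/cpu, ~1.2MB) */
            printf("%ld bytes/cpu, %ld bytes total\n",
                per_cpu, per_cpu * max_ncpus);
            return (0);
    }

So kmem alone is ~1.2MB of the gap at max_ncpus = 64; the rest comes from
FMA and the other per-CPU consumers.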

> >FMA looks like it's about 100k/cpu (ERPT_MAX_ERRS * max_ncpus * ERPT_DATA_SZ)

These are for fatal events, possibly on multiple CPUs.
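
Same arithmetic on the FMA side.  ERPT_MAX_ERRS and ERPT_DATA_SZ are the
names from the formula above, but the values below are hypothetical
stand-ins chosen to land near the quoted ~100k/cpu; check the actual
headers for the real definitions:

    #include <stdio.h>

    /* Hypothetical values, not the real definitions. */
    #define ERPT_MAX_ERRS   16              /* error reports kept per CPU */
    #define ERPT_DATA_SZ    (6 * 1024)      /* bytes per error report */

    int
    main(void)
    {
            long max_ncpus = 64;
            long per_cpu = (long)ERPT_MAX_ERRS * ERPT_DATA_SZ;

            /* prints: 98304 bytes/cpu, 6291456 bytes total (~96K/cpu, ~6MB) */
            printf("%ld bytes/cpu, %ld bytes total\n",
                per_cpu, per_cpu * max_ncpus);
            return (0);
    }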

> And they are all preallocated?  Sounds like a bug to me.

Well, if max_ncpus were just set properly to the actual maximum, as Andrei's
follow-on does, these subsystems wouldn't be allocating all this extra
space.  Historically, SPARC has always set it correctly, but x86 hasn't
known how many CPUs it had early enough in the boot process to do so.
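
To make the cost concrete, here's a minimal sketch (hypothetical names,
not the actual kernel code) of the preallocation pattern every such
subsystem follows:

    #include <stdlib.h>

    /*
     * Hypothetical sketch.  The array is sized once at boot from
     * max_ncpus so each CPU can index its own slot directly, without
     * locking; it can't grow later, which is why max_ncpus has to be
     * right before this runs.
     */
    struct percpu_state {
            char pad[256 * 1024];   /* ~256KB/cpu, the rough total from this thread */
    };

    static struct percpu_state *percpu;

    void
    subsys_init(long max_ncpus)
    {
            percpu = calloc(max_ncpus, sizeof (struct percpu_state));
    }

With max_ncpus pinned at 64 that's ~16MB; sized for an actual 2-way box
it's ~512KB, the same ballpark as the 227MB-to-242MB spread in the table
above.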

Cheers,
- jonathan

-- 
Jonathan Adams, Solaris Kernel Development