> Doesn't matter. Could even be dedicated
> point-to-point links between all chips. My
> assumption is that a processor on a chip can access
> the memory controller without sending messages to
> other chips via the xbar/hypertransport links. Of
> course this can't be done naively...
Right. The
> Nah, it "should" be pretty easy. :) To start, I would suggest just
> hardcoding what the lgrp_plat_* routines return to see if you can get
> more than one lgroup created.
>
Thanks...I'll follow up with how things go. It may be a few weeks before I can
dig into this. We just brought up So
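When I do dig in, I'm picturing the hardcoding along these lines. This is a
sketch only: the names and signatures below just follow the lgrp_plat_*
convention you mentioned and are my assumptions (the actual sun4v platform
hooks may be named and typed differently), and the chip/strand/memory numbers
are made up for a 4-chip simulated machine.

/* Sketch only: assumed names, signatures, and topology constants. */
#define	NLGRPS_SIM		4		/* one lgroup per simulated chip */
#define	STRANDS_PER_CHIP	32		/* Niagara: 8 cores x 4 strands */
#define	CHIP_MEM_SPAN		(8ULL << 30)	/* assume 8 GB local to each chip */

int
lgrp_plat_max_lgrps(void)
{
	return (NLGRPS_SIM);
}

lgrp_handle_t
lgrp_plat_cpu_to_hand(processorid_t id)
{
	/* strand ids assumed contiguous per chip in the simulated machine */
	return ((lgrp_handle_t)(id / STRANDS_PER_CHIP));
}

lgrp_handle_t
lgrp_plat_pfn_to_hand(pfn_t pfn)
{
	/* physical memory assumed to be one contiguous range per chip */
	return ((lgrp_handle_t)(((uint64_t)pfn << PAGESHIFT) / CHIP_MEM_SPAN));
}

int
lgrp_plat_latency(lgrp_handle_t from, lgrp_handle_t to)
{
	/* arbitrary local/remote latencies, just enough to split the lgroups */
	return (from == to ? 10 : 30);
}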
> Right now, Simics tells Solaris that all of the
> memory is on a single board, even though my add-on
> module to Simics actually carries out the timing of
> NUMA. The bottom line is that we currently model
> the timing of NUMA; however, Solaris does not do any
> memory placement optimization because it only sees a
> single locality group.
> I posted the same question in the code group. Apparently for SPARC, the
> platform-specific files statically define the lgroups, and many/most
> of the platform-specific files are *not* included with OpenSolaris.
Not yet. Some of these have been opened up in the last build and I bet
more will follow.
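A quick way to check from userland how many lgroups the kernel actually
created (and to confirm the everything-in-one-group situation you describe)
is something like the sketch below. It is written against the liblgrp(3LIB)
interfaces as I read the man pages (lgrp_init, lgrp_nlgrps, lgrp_root,
lgrp_children, lgrp_cpus); compile with -llgrp.

#include <stdio.h>
#include <stdlib.h>
#include <sys/lgrp_user.h>

/* Print each lgroup in the hierarchy and how many CPUs it directly contains. */
static void
print_lgrp(lgrp_cookie_t c, lgrp_id_t id, int depth)
{
	lgrp_id_t *children;
	int ncpus, nchild, i;

	ncpus = lgrp_cpus(c, id, NULL, 0, LGRP_CONTENT_DIRECT);
	(void) printf("%*slgroup %d: %d CPUs\n", depth * 2, "", (int)id,
	    ncpus < 0 ? 0 : ncpus);

	nchild = lgrp_children(c, id, NULL, 0);
	if (nchild <= 0)
		return;
	children = malloc(nchild * sizeof (lgrp_id_t));
	if (children == NULL)
		return;
	nchild = lgrp_children(c, id, children, nchild);
	for (i = 0; i < nchild; i++)
		print_lgrp(c, children[i], depth + 1);
	free(children);
}

int
main(void)
{
	lgrp_cookie_t c = lgrp_init(LGRP_VIEW_OS);

	(void) printf("lgroups in the system: %d\n", lgrp_nlgrps(c));
	print_lgrp(c, lgrp_root(c), 0);
	(void) lgrp_fini(c);
	return (0);
}

On a machine (real or simulated) where everything looks like one board, this
should report a single lgroup holding all the CPUs; the goal would be to see
one lgroup per simulated chip instead.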
Also, to expand on the NUMA configuration I have in mind: consider a system
with 4 hypothetical Niagara+ chips connected together (yes, original Niagara
only supports a single CMP). Each Niagara has its own local memory
controllers. Threads running on a chip should ideally allocate physical
memory from their chip's local memory controllers.
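To make the intent concrete, the behavior we want to study is roughly the
following (a sketch, not something our workloads necessarily do today: the
MADV_ACCESS_* advice comes from madvise(3C) as I understand it, and whether
explicit advice is even needed versus just relying on the default placement
at first touch is exactly the kind of question we'd like to explore):

#include <sys/types.h>
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	size_t len = 64 * 1024 * 1024;	/* 64 MB working set */
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANON, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return (1);
	}

	/* Hint that the calling LWP is the primary consumer of this range. */
	if (madvise(buf, len, MADV_ACCESS_LWP) != 0)
		perror("madvise");	/* advisory only, not fatal */

	/* First touch should drive allocation toward the toucher's lgroup. */
	(void) memset(buf, 0, len);

	(void) munmap(buf, len);
	return (0);
}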
Hi Eric,
> Could you expand on this a bit? Solaris implements different policy
> for NUMA and CMT (although affinity and load balancing tends to be a
> common theme). What sort of simulation / experiments did you have in
> mind?
We use Simics to simulate a wide variety of memory system configurations.
Hi Mike,
> I would like NUMA in order to simulate future NUMA
> chip-multiprocessors using Virtutech Simics and the
> Wisconsin GEMS toolkit.
Could you expand on this a bit? Solaris implements different policy
for NUMA and CMT (although affinity and load balancing
tends to be a common theme). What sort of simulation / experiments
did you have in mind?