Owen asks:
> Given what *we* want to do, and given the recent advances in desktop,
> workstation, and server computing, and given our experiences over the
> last year with things like the Blender Render Farm .. what would be
> the most reasonable way for us to take a step or two toward higher
> performance?
>    - Should we consider buying a fairly high performance linux box?
>    - How about buying a multi-processor/multi-core system?
>    - Do we want to consider a shared Santa Fe Super Cluster?
>    - What public computing facilities could we use?
Well, a fairly high-performance Linux box will probably BE a multi-core system. A configurable Santa Fe community super-computer (as we've been considering it) would probably be composed of individual nodes built on multi-core chips, because, hey, that's what's out there now. And the public computing facilities we could use would conceivably not measure up to what we could get new, off the shelf, today.

We might remind ourselves that any hardware we get will be roughly 10 times slower than what we can get in 18 months for the same price. I've read that right now each additional gigaflop costs about as much as a cappuccino.

So, I think, some sort of off-the-shelf multi-core box, probably a Mac Pro. No mistake we can make configuring a Mac Pro will cost more than the price of a second one. A mistake configuring a cluster, on the other hand, could be far more expensive. So, out of fear of misconfiguring the cluster, we'd spend a lot of time on system requirements and on getting everybody signed up for a node, say, some significant fraction of those 18 months.

I'm with Marcus on this one. Start with a Clovertown-based Mac Pro. Learn how to keep the multiple cores busy and what the issues are on the different OSes; get to the point where we can say with greater assurance what scaling up means in the ABM customer's context. THEN build the cluster. In the meantime, we can work toward a specification for a reference ABM workstation.
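For what "keeping the cores busy" might look like in practice, here's a minimal, purely illustrative sketch in Python using the standard multiprocessing module. The agent state and update rule are made-up stand-ins (not any particular ABM toolkit); the point is just the shape of farming per-tick agent updates out across all cores:

```python
# Minimal sketch: keep every core busy updating agents in parallel.
# The "agent" here is just an integer and step_agent is a toy update
# rule -- both hypothetical, purely to show the partitioning pattern.
from multiprocessing import Pool, cpu_count

def step_agent(state):
    # Stand-in for one agent's update rule.
    return state * state % 97

def run_steps(states, n_steps):
    # Each tick, farm the agent updates out to one worker per core.
    with Pool(processes=cpu_count()) as pool:
        for _ in range(n_steps):
            states = pool.map(step_agent, states)
    return states

if __name__ == "__main__":
    final = run_steps(list(range(1000)), 10)
    print(len(final))
```

Real ABMs are rarely this embarrassingly parallel, of course (agents interact), which is exactly the sort of thing we'd learn about on the one machine before committing to a cluster.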

Oh, and if you need a place to keep it, I might have some ideas  O:-)

Owen further asks,
> And possibly more to the point:
>    - What computing architecture are we interested in?
Well, one of the unspoken things here is that we want an architecture that's accessible to folks who want to look into the various flavors of ABM (including those that might get invented down the road). So I think it would be useful to think not only in terms of off-the-shelf machinery, but also in terms of how the architecture might scale *down* as well as up. It should, for example, be possible for several OLPC machines to grid themselves into a kind of cluster with enough performance to create, run, and visualize interesting, non-trivial, and "useful" ABMs.

Summary:
    Most chips for multi-machine architectures will be based on multi-core.

    Buy one or two reasonably well tricked-out multi-core machines early; don't go nuts on the HPC requirements until we have a better handle on how to get one or two machines to make use of those cores for the kinds of problems we expect to want to address.

    Work to keep the overall machine acquisition administrative costs down.

    Consider accessibility and replicability as requirements (not for the first machines, but for the architecture they will inform).

Carl

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
