Good question; it certainly pays to know your goal.

At the simplest level, the goal is to do what we're currently doing,
either with better performance, or with similar performance at much
greater scale:

   - ABM: We'd love to be able to run really large simulations (the
city of Santa Fe, for example) with up to 250,000 agents.  This could
be with existing ABM systems like NetLogo and Repast, with one we're
not yet using like MASON, or with Processing/Java as we're beginning
to do now.

   - Visualization: We'd love to be able to run Blender or a similar  
3D modeler at near real time, with the data derived from the ABM  
above.  The render farm approach seems good for building "movies",  
but not for running the 3D modeler in near real time.

   - Decision Theater/Immersive Modeling: We're just starting to use
some nifty hacks Josh came up with which let us project models onto
tables, with laser pens shining onto the table becoming input to the
model, via a camera watching the projected image.  We're not yet
running into serious issues, but we may.  It certainly pushes us
toward real-time models with sophisticated interaction.

I think we'd be willing to stick to memory-resident systems for now
-- if we can cram 8 Gig or so into one.  I say this because we're not
yet trying for systems that handle several million agents.  That
said, I'm not sure how good the memory systems (bus, caches, etc.) of
multi-core/multi-processor machines actually are, especially for
concurrent access to shared data.  And I'd be willing to fudge memory
residency by including systems with very good swapping algorithms,
letting us spill past physical memory onto disk.

In terms of languages: the main issue, I think, will be language
support for multiple cores/processors -- which primarily boils down
to threads and concurrency, and indirectly includes swapping via the
OS if that indeed becomes a target.

My bias is to look first at husky workstations/servers before going  
into clusters and grids, mainly because I think they're becoming a  
sweet spot.  We have a modest budget.  We know breaking tasks down  
into independent subtasks works well: parameter scans and building  
individual movie frames.  But we'd certainly have to start getting  
into intelligent scheduling and thread/memory architectures.
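To make the thread-level version of this concrete, here's a minimal
Java sketch (everything in it is hypothetical -- the squaring stands
in for a real model run): a fixed thread pool sized to the core
count, fanning independent parameter-scan-style tasks across the
cores and collecting the results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Minimal sketch: run independent "parameter scan" tasks across all
// available cores.  The squaring below is a stand-in for one model run.
public class ParallelScan {
    static double scan(int nRuns) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        List<Callable<Double>> runs = new ArrayList<>();
        for (int i = 0; i < nRuns; i++) {
            final double param = i * 0.5;   // one point in the scan
            runs.add(() -> param * param);  // stand-in for one model run
        }

        double total = 0;
        for (Future<Double> f : pool.invokeAll(runs))
            total += f.get();               // blocks until that run finishes
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("total = " + scan(8));
    }
}
```

Because each run is independent there's no shared mutable state to
worry about -- the same property that makes parameter scans and
per-frame movie rendering parallelize so well.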

So basically we'd like to do more, and faster, versions of the ABM,
Vis, and Immersion work we're currently doing, intelligently mapped
onto reasonably affordable modern multi-core/multi-processor systems.

Owen

On Oct 8, 2006, at 1:11 AM, Douglas Roberts wrote:

> Owen:
>
> I'm all for practical.  But first, show us your requirements.  A  
> "step or
> two towards higher performance" is a bit vague.  ;-}
>
> What's your goal:  16 million agents, simulated at 80X real time?
>
> Or something less.  Or something more.
>
> Joking aside,
>
> What are your requirements?  How much do you need to scale now; how  
> far do
> you need to scale eventually, how soon do you need to do it, what  
> are your
> agent complexities, output requirements, data I/O needs, post  
> processing
> requirements, what existing designs do you have now, and what are  
> their
> limitations, what is the memory footprint for your existing  
> implementation,
> what are your current run times, etc. etc. etc...
>
> System requirements should come first;  these will lead to  
> suggestions for
> SW & HW implementation environments.
>
> --Doug
> -- 
> Doug Roberts, RTI International
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
> 505-455-7333 - Office
> 505-670-8195 - Cell
>
> On 10/7/06, Owen Densmore <[EMAIL PROTECTED]> wrote:
>>
>> On Oct 7, 2006, at 10:29 AM, Owen Densmore wrote:
>> > Turns out there is a poll being taken on some mail lists on the  
>> topic
>> > of new parallel hardware and if/how it will be used:
>> >    Parallelism: the next generation -- a small survey
>> >    http://www.nabble.com/A-small-survey-tf2337745.html
>> >
>> >      -- Owen
>>
>> OK, so we've had an interesting interchange on Distribution /
>> Parallelization of ABM's.  But what I'm interested in is a bit more
>> practical:
>>
>> Given what *we* want to do, and given the recent advances in desktop,
>> workstation, and server computing, and given our experiences over the
>> last year with things like the Blender Render Farm .. what would be
>> the most reasonable way for us to take a step or two toward higher
>> performance?
>>    - Should we consider buying a fairly high performance linux box?
>>    - How about buying a multi-processor/multi-core system?
>>    - Do we want to consider a shared Santa Fe Super Cluster?
>>    - What public computing facilities could we use?
>>
>> And possibly more to the point:
>>    - What computing architecture are we interested in?
>>
>> I'll say from my experience, I'm mainly interested in two approaches:
>>
>>    - Unix based piped systems where I don't have to consider the
>> architecture in my programs, only in the way I use sh/bash to execute
>> them to make sure they work well in parallel.  In plain words: good
>> parameter scanning, or piped tasks (model, visualize, render) using
>> built-in unix piping mechanisms with parallel execution of the
>> programs.  I've done this in the past with dramatic decreases in
>> elapsed times.  And it's dead simple.
>>
>>    - Java or similar based multi-threaded approaches where I need a
>> bit of awareness in my code as to how I approach parallelism, but
>> *the language supports it*.  I'm not much interested in exotic,
>> difficult-to-maintain grid/cluster architectures; I'm not at all
>> convinced they make sense at the scale we're approaching.  And,
>> yes, Java is good enough.
>>
>> In other words, given Redfish, Commodicast, and other local
>> scientific computing endeavors, what would be interesting systems for
>> our scale of computing?  I.e. reasonable increase in power with
>> modest change in architecture.
>>
>> Owen
>>
>> ============================================================
>> FRIAM Applied Complexity Group listserv
>> Meets Fridays 9a-11:30 at cafe at St. John's College
>> lectures, archives, unsubscribe, maps at http://www.friam.org
>>

