If you go to any of the supercomputing centers such as NCSA, SDSC, or PSC, you do not see parallel Java apps running on any of their machines (with the occasional exception of a parallel newbie trying, with great difficulty, to make something work).  The reasons:
  1. there are few supported message-passing toolkits that support parallel Java apps,
  2. Java runs 3-4 times slower than C, C++, or Fortran, and machine time is expensive, and finally
  3. there are well-designed and maintained languages, toolkits, and APIs for implementing HPC applications, and parallel developers use them instead of Java.
I do have first-hand experience with a researcher who has stubbornly insisted on trying to build a parallel Java app using RMI as the message-passing interface.  It's just a bad match for running on distributed-memory architectures.  But he loves Java and doesn't know any HPC-friendly object-oriented languages.  He's wasted a whole year so far trying to reimplement a subset of MPI functionality...
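For a sense of the mismatch: here is a minimal, self-contained sketch of an MPI_Send-style call layered on RMI. The `Endpoint` interface and the rank/tag parameters are my own illustration, not from any real toolkit. Every "send" is a synchronous remote method call through a registry, with the payload serialized on each message -- nothing like MPI's buffered, asynchronous transports:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSend {
    // MPI_Send analogue as a remote interface (illustrative names only).
    public interface Endpoint extends Remote {
        void send(int sourceRank, int tag, byte[] payload) throws RemoteException;
    }

    public static volatile int messagesReceived = 0;

    static class EndpointImpl extends UnicastRemoteObject implements Endpoint {
        EndpointImpl() throws RemoteException {}
        public void send(int sourceRank, int tag, byte[] payload) {
            messagesReceived++;
            System.out.println("recv from rank " + sourceRank + ", tag " + tag
                    + ", " + payload.length + " bytes");
        }
    }

    public static void main(String[] args) throws Exception {
        // Both "ranks" live in one JVM here, but the call still goes
        // through a registry lookup, a socket, and Java serialization.
        Registry reg = LocateRegistry.createRegistry(51099);
        EndpointImpl rank1 = new EndpointImpl();
        reg.rebind("rank1", rank1);

        Endpoint peer = (Endpoint) reg.lookup("rank1");
        peer.send(0, 42, new byte[]{1, 2, 3});  // blocks for a full round trip

        // Tear down so the JVM can exit (RMI threads are non-daemon).
        UnicastRemoteObject.unexportObject(rank1, true);
        UnicastRemoteObject.unexportObject(reg, true);
    }
}
```

Even this toy version needs an exported object, a registry, and explicit teardown just to move three bytes, which hints at why rebuilding MPI semantics (tags, collectives, non-blocking sends) on top of it takes so long.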

--Doug

--
Doug Roberts, RTI International
[EMAIL PROTECTED]
[EMAIL PROTECTED]
505-455-7333 - Office
505-670-8195 - Cell

On 10/6/06, Joshua Thorp <[EMAIL PROTECTED]> wrote:
I came across this interesting doc on garbage collection in java:
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

which notes:
"""
...virtual machines for the Java™ platform up to and including
version 1.3.1 do not have parallel garbage collection, so the impact
of garbage collection on a multiprocessor system grows relative to an
otherwise parallel application.

The graph below models an ideal system that is perfectly scalable
with the exception of garbage collection. The red line is an
application spending only 1% of the time in garbage collection on a
uniprocessor system. This translates to more than a 20% loss in
throughput on 32 processor systems. At 10% of the time in garbage
collection (not considered an outrageous amount of time in garbage
collection in uniprocessor applications) more than 75% of throughput
is lost when scaling up to 32 processors.

"""

I hadn't looked at Java's GC for a while.  It has gotten very
complicated!  I wonder if they have parallelized the GC.  Since the
quote above comes from a document for Java 5.0, apparently not...
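The throughput figures in that passage fall out of an Amdahl's-law argument: if a fraction f of uniprocessor time goes to serial GC and the rest scales perfectly across P processors, achieved throughput relative to ideal is (1/P) / (f + (1-f)/P). A quick back-of-the-envelope check (my arithmetic, not from the Sun doc):

```java
// Amdahl's-law sketch of the GC throughput figures quoted above:
// a fraction f of uniprocessor time is serial (GC), the remaining
// (1 - f) scales perfectly across P processors.
public class GcScaling {
    // Fraction of ideal P-processor throughput actually achieved.
    public static double efficiency(double f, int p) {
        double actualTime = f + (1.0 - f) / p;  // serial GC + scaled work
        double idealTime = 1.0 / p;             // perfectly scalable app
        return idealTime / actualTime;
    }

    public static void main(String[] args) {
        // 1% GC on a uniprocessor -> roughly 24% throughput lost on 32 CPUs
        System.out.printf("f=0.01, P=32: %.0f%% lost%n",
                100 * (1 - efficiency(0.01, 32)));
        // 10% GC -> roughly 76% lost, matching the "more than 75%" claim
        System.out.printf("f=0.10, P=32: %.0f%% lost%n",
                100 * (1 - efficiency(0.10, 32)));
    }
}
```

Running it reproduces the quoted numbers: about a 24% throughput loss at 1% GC time and about 76% at 10% GC time on 32 processors.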

--joshua

On Oct 6, 2006, at 1:36 PM, Stephen Guerin wrote:

> Laszlo sent the same request out to the NAACSOS list, too. Here's a
> response
> that may be interesting to FRIAM-folk.
>
> -Steve
>
>> -----Original Message-----
>> From: Les Gasser [mailto:[EMAIL PROTECTED]]
>> Sent: Friday, October 06, 2006 1:14 PM
>> To: Laszlo Gulyas
>> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>> Subject: Re: Distribution / Parallelization of ABM's
>>
>> NAACSOS - http://www.casos.cs.cmu.edu/naacsos/
>> Laszlo, below are links to five papers that address various
>> aspects of these issues, part of a stream of work over about
>> a 20-year period.
>> These cover conceptualizations, requirements, approaches,
>> scaling issues, etc. (Also available through
>> http://www.isrl.uiuc.edu/~gasser/papers/).
>>
>> Others have also worked in these areas, going back to Lesser
>> et al.'s work distributing HEARSAY (papers of Lesser &
>> Fennel; Lesser & Erman); Ed Durfee's MS thesis at UMASS in
>> the early 1980s on distributing a distributed problem solving
>> simulator, Dan Corkill's work on parallelizing blackboard
>> systems at UMASS, early 1990s (others worked on this too).
>> References to all this are available via
>> http://mas.cs.umass.edu/pub/ and it has been quite inspiring
>> to me personally.  More recently there is also Brian Logan
>> and Georgios Theodoropoulos' work on distributing MAS,
>> concerning especially dealing with environment models as
>> points of serialization.
>>
>> Hope this helps,
>>
>> -- Les
>>
>> Les Gasser, Kelvin Kakugawa, Brant Chee and Marc Esteva
>> "Smooth Scaling Ahead: Progressive MAS Simulation from Single
>> PCs to Grids"
>> in Paul Davidsson, Brian Logan, and Keiki Takadama (Eds.)
>> Multi-Agent and Multi-Agent-Based Simulation.
>> Lecture Notes in Computer Science 3415, Springer, 2005
>> http://www.isrl.uiuc.edu/~gasser/papers/gasser-etal-mamabs04-final.pdf
>>
>> Les Gasser and Kelvin Kakugawa.
>> "MACE3J: Fast Flexible Distributed Simulation of Large,
>> Large-Grain Multi-Agent Systems."
>> In Proceedings of AAMAS-2002.
>> [Finalist for Best Paper Award at this conference.]
>> http://www.isrl.uiuc.edu/~gasser/papers/mace3j-aamas02-pap.pdf
>>
>> Les Gasser.
>> "MAS Infrastructure Definitions, Needs, Prospects,"
>> in Thomas Wagner and Omer Rana, editors, Infrastructure for
>> Agents, Multi-Agent Systems, and Scalable Multi-Agent
>> Systems, Springer-Verlag, 2001 Also appears in ICFAI Journal
>> of Managerial Economics, 11:2, May, 2004, pp 35-45.
>> http://www.isrl.uiuc.edu/~gasser/papers/masidnp-08-with-table.pdf
>>
>> Les Gasser.
>> "Agents and Concurrent Objects."
>> IEEE Concurrency, 6(4) pp. 74-77&81, October-December, 1998.
>> http://www.isrl.uiuc.edu/~gasser/papers/AgentsAndObjects-07.html
>>
>> Les Gasser, Carl Braganza, and Nava Herman.
>> "MACE: A Flexible Testbed for Distributed AI Research"
>> in Michael N. Huhns, ed.
>> Distributed Artificial Intelligence
>> Pitman Publishers, 1987, 119-152.
>> http://www.isrl.uiuc.edu/~gasser/papers/gasser-braganza-herman-mace-a-flexible-testbed-for-dai-research-1987.ps
>> http://www.isrl.uiuc.edu/~gasser/papers/gasser-braganza-herman-mace-a-flexible-testbed-for-dai-research-1987.pdf
>>
>>
>> Laszlo Gulyas wrote:
>>> [**** Apologies for cross-postings. ****]
>>>
>>> Dear Colleagues,
>>>
>>> We are compiling a survey on techniques to parallelize agent-based
>>> simulations. We are interested in both in-run and inter-run
>>> parallelizations (i.e., when one distributes the agents and when one
>>> distributes individual runs in a parameter sweep), although I think
>>> the more challenging part is the former.
>>>
>>> We are aware that in-run parallelization is a non-trivial task and,
>>> what's more, it is likely that it cannot be done in general. Our
>>> approach is to collect 'communication templates' that may make
>>> distribution / parallelization feasible. E.g., when the model is
>>> spatial and communication is (mostly) local, there is already work
>>> that does the job. However, we foresee other cases where the problem
>>> can be solved.
>>>
>>> As I said, we are now compiling a survey. We are aware of a few
>>> publications and threads on various lists, but I'd like to ask you all
>>> to send me references to such works if you know about them. (If you do
>>> not have references, but have ideas that you are ready to share,
>>> please do not hesitate either.) Thank you all in advance!
>>>
>>> For your information, our ultimate goal is to be able to run ABM's on
>>> the grid -- which adds another layer of complication, namely the
>>> uncertainty of resources and slower communication. But we will deal
>>> with that later!
>>> ;-)
>>>
>>> Best regards,
>>>
>>> Laszlo Gulyas (aka Gulya)
>> The NAACSOS mailing list is a service of NAACSOS, the North
>> American Association for Computational and Organizational
>> Science (http://www.casos.cs.cmu.edu/naacsos/).
>> To remove yourself from this mailing list, send an email to
>> <[EMAIL PROTECTED]> with the following command
>> in the body of your email message:
>> unsubscribe naacsos-list
>> -
>>
>>
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org

