OK, I will try it.

Thank you very much.



On 10/20/08, Reuti <re...@staff.uni-marburg.de> wrote:
>
> On 20.10.2008 at 14:17, Pedro G wrote:
>
> On 10/20/08, Reuti <re...@staff.uni-marburg.de> wrote:
>
>> Hi,
>>
>> On 20.10.2008 at 12:18, Pedro G wrote:
>>
>> I would like to know if MSC Nastran supports Open MPI.
>>
>> I have been searching on Google for Nastran and Open MPI, but I couldn't
>> find out whether it works or not.
>>
>> Currently I'm using LAM/MPI for Nastran parallel jobs, but I have some problems
>> with LAM, Nastran, and SGE, so I'd like to upgrade to Open MPI.
>>
>> Do you have the source code of the application? If you have only the
>> binary compiled for LAM, then you can't change anything. Have you checked
>> the LAM/MPI Howto on the SGE website?
>>
>> No, I don't have the source code. The application seems to be able to work
>> with Open MPI, since it has an openmpi=yes option on the command line, but I
>> couldn't make it work. Anyway, I think it is not fully supported yet,
>> since there is nothing about that option in the user manual.
>>
>> About LAM/MPI, I have already read the Howto and set up a tight integration.
>> The problem is that in parallel jobs MSC Nastran starts a new LAM environment,
>> getting out of SGE's control.
>>
>> I contacted Nastran and they told me it was a LAM/MPI or SGE problem.
>>
>
> No, IMO it's not. When they start a new LAM/MPI environment, they are
> violating the granted slot allocation. What about the following (although it
> would be more a discussion for the [GE users] list):
>
> - Suppose you have a tight LAM/MPI integration for other MPI programs.
> - In your jobscript, change $PATH so that the "lamboot" which is found points
> to e.g. /bin/true, i.e. it does nothing.
> - When they then call "mpirun C", they should get the already started LAM
> daemons.
> - If mpiexec is not working, the mpiexec which is found may also have to point
> to a script that supplies the proper "-np ..." value.
> - When they call "mpiexec" in one-shot mode, it must also be mapped to a
> script that just executes the program, but does not do a "lamboot".
>
> -- Reuti
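
A minimal sketch of the PATH-override idea described above, assuming a working tight LAM/MPI parallel environment is already in place. The PE name, the wrapper directory, and the final Nastran command line are illustrative assumptions, not taken from the Nastran documentation:

#!/bin/sh
#$ -pe lam_tight 8          # hypothetical name of your tight LAM/MPI PE
#$ -cwd

# Directory for wrapper commands that shadow the real LAM tools.
WRAPPERS=$TMPDIR/lam-wrappers
mkdir -p "$WRAPPERS"

# lamboot/lamhalt become no-ops: the start/stop procedures of the tight
# integration already boot and halt the LAM daemons for this job.
ln -s /bin/true "$WRAPPERS/lamboot"
ln -s /bin/true "$WRAPPERS/lamhalt"

# If Nastran runs "mpiexec" in one-shot mode, map it to a script that only
# starts the program on the already running daemons with the granted slot
# count. If Nastran passes its own -n/-np options, they would have to be
# filtered out here as well.
cat > "$WRAPPERS/mpiexec" <<'EOF'
#!/bin/sh
exec mpirun -np $NSLOTS "$@"
EOF
chmod +x "$WRAPPERS/mpiexec"

# Put the wrappers first, so Nastran finds them instead of the real commands.
PATH=$WRAPPERS:$PATH
export PATH

# Hypothetical Nastran invocation; replace with your site's actual command line.
nastran job.dat dmp=$NSLOTS

The point is that the LAM daemons are already booted on the granted nodes by the tight integration, so neutralizing Nastran's own "lamboot" keeps its MPI processes inside the slot allocation that SGE granted.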
>
>
>
>>
>>
>>
>> -- Reuti
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
