You might want to look at something like the mpi-selector project that
is part of OFED (but is easily separable; it's a small package); it
might be helpful to you...?
http://www.openfabrics.org/git/?p=~jsquyres/mpi-selector.git;a=summary
On Oct 14, 2008, at 5:18 PM, Craig Tierney wrote:
Ralph Castain wrote:
You might consider using something like "module" - we use that system
for exactly this reason. Works quite well and solves the multiple
compiler issue.
This is the problem. We use modules to switch compilers/MPI stacks.
When a job is launched, whatever LD_LIBRARY_PATH the loaded module set
locally is not what the processes on the remote nodes end up with ...
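A quick way to see what the launched processes actually receive; the
module name and host names here are only placeholders:
  module load openmpi-intel                                   # placeholder module name
  echo $LD_LIBRARY_PATH                                       # what the login shell sees
  mpirun -np 2 --host node1,node2 printenv LD_LIBRARY_PATH    # what the remote processes see
If the mpirun line prints nothing, the variable is not being forwarded.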
In Torque/PBS, using the #PBS -V directive pushes the environment
variables out to the nodes. I don't know if that is what was
happening with SLURM.
Doug Reeder
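A minimal Torque/PBS script along those lines; the resource request and
program name are only placeholders:
  #!/bin/bash
  #PBS -V                      # export the submitting shell's environment to the job
  #PBS -l nodes=2:ppn=4        # placeholder resource request
  cd $PBS_O_WORKDIR
  mpirun ./my_mpi_program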
On Oct 14, 2008, at 12:33 PM, Ralph Castain wrote:
I -think- there is...at least here, it does seem to behave that way on
our systems. Not sure if there is something done locally to make it
work.
Also, though, I have noted that LD_LIBRARY_PATH does seem to be
getting forwarded on the 1.3 branch in some environments. OMPI isn't
doing it directly ...
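When the forwarding does not happen on its own, mpirun can be asked to
export the variable explicitly; a minimal sketch (the program name is a
placeholder):
  mpirun -x LD_LIBRARY_PATH -np 8 ./my_mpi_program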
I use modules too, but they only work locally. Or is there a feature
in "module" to automatically load the list of currently loaded local
modules remotely?
george.
On Oct 14, 2008, at 3:03 PM, Ralph Castain wrote:
You might consider using something like "module" - we use that system
for exactly this reason. Works quite well and solves the multiple
compiler issue.
Ralph
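For reference, the kind of usage being suggested, assuming Environment
Modules is installed (the module names are placeholders):
  module avail                        # list the installed compiler/MPI stacks
  module load intel openmpi-intel     # switch to a non-default stack
  which mpif90                        # confirm the wrapper now comes from that stack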
The option to expand the remote LD_LIBRARY_PATH, in such a way that
Open MPI related applications have their dependencies satisfied, is in
the trunk. The fact that the compiler requires some LD_LIBRARY_PATH is
out of the scope of an MPI implementation, and I don't think we should
take care of it ...
Hi Craig, George, list
Here is a quick and dirty solution I used before for a similar problem.
Link the Intel libraries statically, using the "-static-intel" flag.
Other shared libraries continue to be dynamically linked.
For instance:
mpif90 -static-intel my_mpi_program.f90
What is not clear ...
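One way to confirm what is still dynamically linked after building with
-static-intel (assuming the default a.out output name from the example
above):
  ldd ./a.out     # the Intel runtime libraries (libimf, libifcore, ...) should no longer be listed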
Craig,
This is a problem with the Intel libraries and not the Open MPI ones.
You have to somehow make these libraries available on the compute nodes.
What I usually do (but it's not the best way to solve this problem) is
to copy these libraries somewhere in my home area and to add that
directory to my LD_LIBRARY_PATH ...
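A sketch of that workaround, assuming the Intel install path shown
(adjust to the real location) and a home area that is visible on the
compute nodes:
  mkdir -p $HOME/intel_libs
  cp /opt/intel/fce/10.1/lib/lib*.so* $HOME/intel_libs/     # install path is an assumption
  export LD_LIBRARY_PATH=$HOME/intel_libs:$LD_LIBRARY_PATH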
I am having problems launching Open MPI jobs on my system. I support
multiple versions of MPI and compilers using GNU Modules. For the
default compiler, everything is fine. For non-default compilers, I am
having problems.
I built Openmpi-1.2.6 (and 1.2.7) with the following configure options:
# module ...