OMPI_MCA_orte_leave_session_attached=1
Note: this does set limits on scale, though, if the system uses an ssh
launcher. There are system limits on the number of open ssh sessions
you can have at any one time.
For all other launchers, no limit issues exist that I know about.
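For example, the same switch can also be given per run on the mpirun command line instead of via the environment (the -np count and application name here are placeholders):
--start code--
mpirun --mca orte_leave_session_attached 1 -np 16 ./my_app
--end code--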
HTH
Ralph
On Friday 23 October 2009 00:50:00 Ralph Castain wrote:
> Why not just
>
> setenv OMPI_MCA_orte_default_hostfile $PBS_NODEFILE
>
> assuming you are using 1.3.x, of course.
>
> If not, then you can use the equivalent for 1.2 - ompi_info would tell
> you the name of it.
THANKS!
Just what I was looking for.
Why not just
setenv OMPI_MCA_orte_default_hostfile $PBS_NODEFILE
assuming you are using 1.3.x, of course.
If not, then you can use the equivalent for 1.2 - ompi_info would tell
you the name of it.
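For example, something like this should surface the right parameter name (ompi_info's --param takes a framework and component; "all all" lists everything):
--start code--
ompi_info --param all all | grep -i hostfile
--end code--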
On Oct 22, 2009, at 4:29 PM, Roy Dragseth wrote:
Hi all.
I'm trying to create a tight integration between torque and openmpi for cases
where the tm ras and plm isn't compiled into openmpi. This scenario is
common for linux distros that ship openmpi. Of course the ideal solution is
to recompile openmpi with torque support, but this isn't always possible.
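For reference, a rough fallback when the tm modules are missing is to hand the PBS node list straight to mpirun over the ssh launcher (just a sketch, not a full tight integration; ./my_app is a placeholder):
--start code--
mpirun --hostfile $PBS_NODEFILE -np $(wc -l < $PBS_NODEFILE) ./my_app
--end code--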
SGE might want to be aware that PLPA has now been deprecated -- we're
doing all future work on "hwloc" (hardware locality). That is, hwloc
represents the merger of PLPA and libtopology from INRIA. The
majority of the initial code base came from libtopology; more PLPA-like features will come over time.
Typically, errors like this mean that you've got a mismatch between the Open MPI you compiled your application with, the mpirun you're using to launch the application, and/or the LD_LIBRARY_PATH used to load the dynamic library libmpi.so (as Ralph stated/alluded).
Ensure all three come from the same Open MPI installation.
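A quick consistency check along those lines (./my_app stands in for your binary):
--start code--
which mpicc mpirun          # both should come from the same installation
ldd ./my_app | grep libmpi  # libmpi.so should resolve to that same tree
--end code--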
If you google this list for entries about ubuntu, you will find a
bunch of threads discussing problems on that platform. This sounds
like one of the more common ones - I forget all the issues, but most
dealt with ubuntu coming with a very old OMPI version on it, and
issues with ensuring your environment actually picks up the Open MPI installation you intend.
Hello everybody,
I have just installed openmpi v. 1.2.5 under ubuntu 8.04 and I have compiled
the following "hello world" program:
--start code--
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[]) {
    int rank, size, len;
    char hostname[256] = "";
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(hostname, &len);
    printf("Hello from rank %d of %d on %s\n", rank, size, hostname);
    MPI_Finalize();
    return 0;
}
--end code--
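For completeness, the usual build-and-run sequence for such a program (file name and process count are placeholders):
--start code--
mpicc hello.c -o hello
mpirun -np 4 ./hello
--end code--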
On Thu, Aug 27, 2009 at 09:23:20AM +0800, Changsheng Jiang wrote:
> Hi List,
>
> I am learning MPI.
Welcome! Sorry for the several-months lateness of my reply: I check in on OpenMPI only occasionally, looking for MPI-IO questions.
> A small code snippet that tries to open a file with MPI_File_open gets an error.
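Without seeing the snippet I can only guess, but a very common cause is opening a file that does not exist yet without MPI_MODE_CREATE. A minimal sketch of a working open ("data.out" is a made-up name):
--start code--
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_File fh;
    MPI_Init(&argc, &argv);
    /* Unlike communicators, files default to MPI_ERRORS_RETURN,
       so the return code is worth checking. MPI_MODE_CREATE is
       required when the file may not exist yet. */
    int rc = MPI_File_open(MPI_COMM_WORLD, "data.out",
                           MPI_MODE_CREATE | MPI_MODE_WRONLY,
                           MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS)
        fprintf(stderr, "MPI_File_open failed, rc=%d\n", rc);
    else
        MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
--end code--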
Hi:
Surely there is a howto for this somewhere, though I have not found it. On Debian Linux amd64 (lenny) I have a working installation of openmpi-1.2.6, Intel-compiled, with mpif90 etc. in /usr/local/bin. For the program OCTOPUS I need a gfortran-compiled openmpi. I did so with openmpi-1.3.3, mpif90 etc. (as a symlink to /opt/bin/
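One common way to keep two installations apart is to prepend the one you want for the current shell (the prefix below is hypothetical):
--start code--
export PATH=/opt/openmpi-1.3.3-gfortran/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-1.3.3-gfortran/lib:$LD_LIBRARY_PATH
--end code--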
You can also see the FAQ entry:
http://www.open-mpi.org/faq/?category=tuning#setting-mca-params
It shows all the ways to set MCA parameters.
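For instance, the parameter from this thread can be set on the command line, in the environment, or in the per-user parameter file (there is also a system-wide etc/openmpi-mca-params.conf under the installation prefix):
--start code--
mpirun --mca btl_tcp_if_exclude lo,eth1 ...
export OMPI_MCA_btl_tcp_if_exclude=lo,eth1
echo "btl_tcp_if_exclude = lo,eth1" >> ~/.openmpi/mca-params.conf
--end code--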
On Oct 22, 2009, at 11:31 AM, Mike Hanby wrote:
Thanks for the link to Sun HPC ClusterTools manual. I'll read through that.
I'll have to consider which approach is best. Our users are 'supposed' to load
the environment module for OpenMPI to properly configure their environment. The
module file would be an easy location to add the variable.
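A minimal sketch of that modulefile addition (Tcl environment-modules syntax, using the OMPI_MCA_<param> naming convention shown above):
--start code--
# in the OpenMPI modulefile
setenv OMPI_MCA_btl_tcp_if_exclude lo,eth1
--end code--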
Yes, on page 14 of the presentation: "Support for OpenMPI and OpenMP
Through -binding [pe|env] linear|striding" -- SGE performs no binding,
but instead it outputs the binding decision to OpenMPI.
Support for OpenMPI's binding is part of the "Job to Core Binding" project.
Rayson
Hi Rayson
You're probably aware: starting with 1.3.4, OMPI will detect and abide
by external bindings. So if grid engine sets a binding, we'll follow it.
Ralph
On Oct 22, 2009, at 9:03 AM, Rayson Ho wrote:
The code for the Job to Core Binding (aka. thread binding, or CPU
binding) feature was checked into the Grid Engine project cvs. It uses
OpenMPI's Portable Linux Processor Affinity (PLPA) library, and is
topology and NUMA aware.
The presentation from HPC Software Workshop '09:
http://wikis.sun.com
Howdy,
My users are having to use this option with mpirun, otherwise the jobs will
normally fail with a 111 communication error:
--mca btl_tcp_if_exclude lo,eth1
Is there a way for me to set that MCA option system wide, perhaps via an
environment variable, so that they don't have to remember to set it themselves?