nation of OpenMPI 1.2.3, ATLAS,
BLACS, ScaLAPACK, and MUMPS using the Intel Fortran compiler on two
different Debian Linux systems (3.0r3 on AMD Opterons and 4.0r0 on
Intel Woodcrest/MacPro).
Michael
eMpi2.
Fortunately documented on the web FAQ but not in the BLACS
documentation.
Michael
Thanks,
george.
On Jul 12, 2007, at 2:41 PM, Jeff Squyres wrote:
On Jul 12, 2007, at 2:28 PM, Michael wrote:
In the FAQ <http://www.open-mpi.org/faq/?category=mpi-apps>, section
labele
and the calling code. The only
promotion in Fortran 90 is inline, i.e. x = i * y. Fortran 90 is a
strongly typed language if you use interfaces. Unfortunately I have
yet to see a Fortran 90 compiler that gives an obvious error message
pointing to the specific error for these interfacing errors.
Michael
:
./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs --with-mpi-f90-
size=small ; make all
I'm not aware whether special flags are needed with ifort on OS X, but
-lSystemStubs is required for g95 and might be for ifort as well on OS X.
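For reference, a sketch of the corresponding ifort invocation one could try (untested; whether ifort actually needs -lSystemStubs on OS X is an assumption based on the g95 case above):
./configure F77=ifort FC=ifort LDFLAGS=-lSystemStubs --with-mpi-f90-size=small ; make all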
Michael
single interface.
I'm a little surprised there is any problem at all with OpenMPI &
your configuration as my configuration is more complicated.
Michael
, it's a lot easier
than what it sounds like you are trying to do and it speeds up all
ethernet traffic on the computer. What OS are you trying to do this on?
Michael
How would I test if I have doubled my bandwidth?
Michael
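One way to check, until someone has a better suggestion (a sketch, assuming the OSU micro-benchmarks are built against this Open MPI and that node1/node2 are placeholder hostnames): run a point-to-point bandwidth test between two nodes before and after enabling bonding and compare the reported MB/s.
mpirun -np 2 -host node1,node2 ./osu_bw
mpirun -np 2 -host node1,node2 ./osu_bibw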
with a very complicated code.
Michael
Quick answer, until you get a complete answer: yes, OpenMPI has long
supported most of the MPI-2 features.
Michael
On Mar 7, 2008, at 7:44 AM, Jeff Pummill wrote:
Just a quick question...
Does Open MPI 1.2.5 support most or all of the MPI-2 directives and
features?
I have a user who
beled R.
It might be best to get a PowerMac G4 and build OpenMPI on it, but
you'd probably have better luck if you install Linux on the G4 instead
of building OpenMPI on OS X as your final platform is Linux.
Michael
in Advance,
Michael.
mv -f $depbase.Tpo $depbase.Plo
libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../../../../opal/include
-I../../../../orte/include -I../../../../ompi/include -I../../../../op
al/mca/paffinity/linux/plpa/src/libplpa -I../../../.. -D_REENTRANT -O3
-DNDEBUG -finline
with Intel's OS X
Fortran compiler against their Linux compiler to see if there is a
difference there.
Michael
understand this before
I start configuring a new cluster; I was planning for it to run OS X
instead of Linux. At the moment I don't have an OS X system with
enough RAM to test this.
Michael
The problem discussed here is with the MPICH2 version of MPI, not OpenMPI.
Michael
On Nov 18, 2006, at 9:22 AM, Jeff Squyres wrote:
We do not appear to have the token "save" anywhere in our mpif.h file.
Can you send a copy of the mpif.h file that your compiler is finding
(and ensu
", also I had uninstalled the
previous version prior to installing this version.
Michael
size=small.
Michael
site.
Is there a known problem with MPI_PACK/UNPACK in OpenMPI?
Michael
UNPACK on the structure, but calling those lots of times can't be
more efficient.
Previously I was equivalencing the structure to an integer array and
sending the integer array as a quick and dirty solution to get started, and
it worked. Not completely portable, no doubt.
Michael
ps. I don
f OpenMPI with
LSF since version 1.1.2?
Does anyone in the OpenMPI team have access to a system using the LSF
batch queueing system? Is a machine with GM and LSF different yet?
Michael
sense?
Is there a flag in this compile line that permits linking an
executable even when the person doing the linking does not have
access to all the libraries, i.e. export-dynamic?
Michael
On Mar 15, 2007, at 12:18 PM, Michael wrote:
I'm having trouble with the portability of executables compiled with
OpenMPI. I suspect the sysadms on the HPC system I'm using changed
something because I think it worked previously.
Apparently there was a misconfiguration, i.
On Mar 22, 2007, at 7:55 AM, Jeff Squyres wrote:
On Mar 15, 2007, at 12:18 PM, Michael wrote:
Situation: I'm compiling my code locally on a machine with just
ethernet interfaces and OpenMPI 1.1.2 that I built.
When I attempt to run that executable on a HPC machine with OpenMPI
1.1.
ibmpi.so's
that I do not have a problem.
I have to periodically check the second system to see if it has been
updated, in which case I have to update my system.
Michael
of using MPI_PACKED inside MPI_BSEND, I was wondering if this could
be a problem, i.e. packing packed data?
Michael
ps. I have to use OpenMPI 1.1.4 to maintain compatibility with a
major HPC center.
used a lot).
Was there a known problem with OpenMPI 1.2 (r14027) and ethernet
communication that got fixed later?
The same executable run at the major center seems fine, but they have
Myrinet.
Michael
assumes the two switches have more ports than you have
nodes.
I have no experience with IEEE 802.3ab, someone else would have to
speak to that.
The question is also which bonding configuration you choose, which
choices would work, and which gives the best performance.
Michael
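If it helps, one quick way to confirm which bonding mode the kernel is actually using once the bond is up (a sketch; bond0 is the usual default interface name on Linux):
cat /proc/net/bonding/bond0
This shows the bonding mode and the state of each slave interface.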
Building Open MPI 1.0.1 on a PowerMac running OS X 10.4.4 using
1) Apple gnu compilers from Xcode 2.2.1
2) fink-installed g77
3) latest g95 "G95 (GCC 4.0.1 (g95!) Jan 23 2006)"
(the binary from G95 Home)
setenv F77 g77
setenv FC g95
./configure
In the G95 section of the configure I get
checkin
perly.
Can you verify that everything is installed properly, and that g95 is
able to link to C libraries?
On Jan 24, 2006, at 3:11 PM, Michael Kluskens wrote:
Building Open MPI 1.0.1 on a PowerMac running OS X 10.4.4 using
1) Apple gnu compilers from Xcode 2.2.1
2) fink-installed g77
3) lates
Question regarding f90 compiling
Using:
USE MPI
instead of
include 'mpif.h'
makes the compilation take an extra two minutes using g95 under OS X
10.4.4 (simple test program 115 seconds versus 0.2 seconds)
Is this normal?
Michael
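A sketch of how to reproduce the comparison (hello_usempi.f90 and hello_mpifh.f90 are hypothetical one-line test programs, one with USE MPI and one with include 'mpif.h', compiled through the wrapper):
time mpif90 -c hello_usempi.f90
time mpif90 -c hello_mpifh.f90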
Thx. John Cary
--
Michael A. Raymond
SGI MPT Team Leader
1 (651) 683-7523
, Michael Raymond wrote:
Another option is SGI PerfBoost. It will let you run apps compiled
against other ABIs with SGI MPT with practically no performance loss.
$ module load openmpi
$ make
$ module unload openmpi
$ module load mpt perfboost
$ mpiexec_mpt -np 2 perfboost -ompi a.out
On 09/11/2014
m getting
"
[H1:33580] [[41149,0],0] ORTE_ERROR_LOG: Address family not supported by
protocol in file oob_tcp_listener.c at line 120
[h2:33580] [[41149,0],0] ORTE_ERROR_LOG: Address family not supported by
protocol in file oob_tcp_component.c at line 584
"
Any suggestions ?
Thanks !
Michael
Hi Howard,
We have NOT defined IPv6 on the nodes.
Actually I was looking at the location of the code that complains and I
also saw references to IPv6 sockets.
Thanks a lot for the suggestion! I'll try this out tomorrow.
Regards
Michael
On Mon, Oct 6, 2014 at 11:07 PM, Howard Pritchard
OpenMPI compiler wrappers to use the Intel
compiler set? Would there be any issues with compiling C++ / Fortran or
corresponding OMP codes ?
In general, what is a clean way to build OpenMPI with a GNU compiler set but
then instruct the wrappers to use the Intel compiler set?
Thanks!
Michael
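One approach that should work (a sketch; icc/icpc/ifort are the Intel compiler names assumed to be in your PATH, and the Fortran compatibility caveats discussed below still apply): build Open MPI with GNU as usual, then point the wrappers at the Intel compilers at application-build time via the wrapper environment variables.
export OMPI_CC=icc
export OMPI_CXX=icpc
export OMPI_FC=ifort
mpicc --showme    # confirm which underlying compiler and flags the wrapper now uses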
Thanks for the note. How about OMP runtimes though?
Michael
On Mon, Sep 18, 2017 at 3:21 PM, n8tm via users
wrote:
> On Linux and Mac, Intel c and c++ are sufficiently compatible with gcc and
> g++ that this should be possible. This is not so for Fortran libraries or
>
different
compilation environments.
Thank you,
Michael
On Mon, Sep 18, 2017 at 7:35 PM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> Even if i do not fully understand the question, keep in mind Open MPI
> does not use OpenMP, so from that point of view, Open MPI is
>
OMP is yet another source of incompatibility between GNU and Intel
environments. So compiling, say, Fortran OMP code into a library and trying
to link it with Intel Fortran code just aggravates the problem.
Michael
On Mon, Sep 18, 2017 at 7:35 PM, Gilles Gouaillardet <
gilles.gouail
o the one that I launched the MPI job.
However, if I force SLURM to allocate only the local node (i.e., the one on which
salloc was called), everything works fine.
Failing case:
michael@ipc ~ $ salloc -n8 mpirun --display-map ./mpi
JOB MAP
Dat
On 27/01/2011, at 4:51 PM, Michael Curtis wrote:
Some more debugging information:
> Failing case:
> michael@ipc ~ $ salloc -n8 mpirun --display-map ./mpi
> JOB MAP
Backtrace with debugging symbols
#0 0x77bb5c1e in ?? ()
On 28/01/2011, at 8:16 PM, Michael Curtis wrote:
>
> On 27/01/2011, at 4:51 PM, Michael Curtis wrote:
>
> Some more debugging information:
Is anyone able to help with this problem? As far as I can tell it's a
stock-standard recently installed SLURM installation.
I can try 1
> I'll dig a bit further.
Interesting. I'll try a local, vanilla (i.e., non-Debian) build and report back.
Michael
based whereas the LANL
test environment is not?
Michael
On 07/02/2011, at 12:36 PM, Michael Curtis wrote:
>
> On 04/02/2011, at 9:35 AM, Samuel K. Gutierrez wrote:
>
> Hi,
>
>> I just tried to reproduce the problem that you are experiencing and was
>> unable to.
>>
>> SLURM 2.1.15
>> Open MPI 1.4.3 c
On 09/02/2011, at 2:38 AM, Ralph Castain wrote:
> Another possibility to check - are you sure you are getting the same OMPI
> version on the backend nodes? When I see it work on local node, but fail
> multi-node, the most common problem is that you are picking up a different
> OMPI version due
On 09/02/2011, at 2:17 AM, Samuel K. Gutierrez wrote:
> Hi Michael,
>
> You may have tried to send some debug information to the list, but it appears
> to have been blocked. Compressed text output of the backtrace text is
> sufficient.
Odd, I thought I sent it to you directl
On 09/02/2011, at 9:16 AM, Ralph Castain wrote:
> See below
>
>
> On Feb 8, 2011, at 2:44 PM, Michael Curtis wrote:
>
>>
>> On 09/02/2011, at 2:17 AM, Samuel K. Gutierrez wrote:
>>
>>> Hi Michael,
>>>
>>> You may have tried to sen
I've been looking into OpenMPI's support for RoCE (Mellanox's recent
Infiniband-over-Ethernet) lately. While it's promising, I've hit a
snag: RoCE requires lossless ethernet, and on my switches the only way
to guarantee this is with CoS. RoCE adapters cannot emit CoS priority
tags unless the clie
essing you need to do)
> should be somewhere in the root-level setup for RoCE. Once you set a
> different subnet ID, Open MPI should just use it.
>
>
> On Feb 18, 2011, at 8:17 AM, Michael Shuey wrote:
>
>> I've been looking into OpenMPI's support for RoCE (M
Fri, Feb 18, 2011 at 3:44 PM, Jeff Squyres wrote:
> On Feb 18, 2011, at 1:39 PM, Michael Shuey wrote:
>
>> RoCE HCAs keep a GID table, like normal HCAs. Every time you bring up
>> a vlan interface, another entry gets automatically added to the table.
>> If I select one o
means that in order to select network X or Y, you
>> may use ip/netmask (btl_openib_ipaddr_include) .
>>
>> Pavel (Pasha) Shamis
>> ---
>> Application Performance Tools Group
>> Computer Science and Math Division
>> Oak Ridge National Laboratory
>>
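For the archive, a sketch of what that looks like on an mpirun command line (the subnet is a placeholder for whichever VLAN interface should carry the RoCE traffic):
mpirun --mca btl openib,sm,self --mca btl_openib_ipaddr_include 192.168.10.0/24 -np 16 ./a.out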
Late yesterday I did have a chance to test the patch Jeff provided
(against 1.4.3 - testing 1.5.x is on the docket for today). While it
works, in that I can specify a gid_index, it doesn't do everything
required - my traffic won't match a lossless CoS on the ethernet
switch. Specifying a GID is o
squy...@cisco.com]
> Sent: Thursday, February 24, 2011 3:45 PM
> To: Michael Shuey
> Cc: Open MPI Users , Mike Dubman
> Subject: Re: [OMPI users] RoCE (IBoE) & OpenMPI
>
> On Feb 24, 2011, at 8:00 AM, Michael Shuey wrote:
>
>> Late yesterday I did have a chance to test t
?
>
>
> On Mar 1, 2011, at 7:35 AM, Michael Shuey wrote:
>
>> So, since RoCE has no SM, and setting an SL is required to get
>> lossless ethernet on Cisco switches (and possibly others), does this
>> mean that RoCE will never work correctly with OpenMPI on Cisco
>> ha
Alternatively, if OpenMPI is really trying to use both ports, you
could force it to use just one port with --mca btl_openib_if_include
mlx4_0:1 (probably)
--
Mike Shuey
On Tue, Mar 1, 2011 at 1:02 PM, Jeff Squyres wrote:
> On Feb 28, 2011, at 12:49 PM, Jagga Soorma wrote:
>
>> -bash-3.2$ mpiex
two versions of libgfortran,
which aren't compatible.
I'm not familiar with OpenMPI myself, but the people using it would like
to know how these warnings can be dealt with.
--
Michael Cugley
School of Engineering IT Support
m.cug...@eng.gla.ac.uk
Please direct IT support queries to itsupp...@eng.gla.ac.uk
shows references to
libgfortran.so.3 and .so.1, but the warnings are gone and the user is
happy, so I'm counting it as a victory.
--
Michael Cugley
School of Engineering IT Support
m.cug...@eng.gla.ac.uk
Please direct IT support queries to itsupp...@eng.gla.ac.uk
I'm using RoCE (or rather, attempting to) and need to select a
non-default GID to get my traffic properly classified. Both 1.4.4rc2
and 1.5.4 support the btl_openib_ipaddr_include option, but only 1.5.4
causes my traffic to use the proper GID and VLAN.
Is there something broken with ipaddr_includ
sar
--
Michael A. Raymond
SGI MPT Team Leader
(651) 683-3434
This is for reference and suggestions as this took me several hours to track
down and the previous discussion on "mpivars.sh" failed to cover this point
(nothing in the FAQ):
I successfully built and installed OpenMPI 1.6.3 using the following on Debian
Linux:
./configure --prefix=/opt/openmpi
The Intel Fortran 2013 compiler comes with support for Intel's MPI runtime and
you are getting that instead of OpenMPI. You need to fix your path for all
the shells you use.
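A minimal way to check and fix this (a sketch; /opt/openmpi matches the --prefix used above, and which startup file to edit depends on the shells you use):
which mpirun    # currently resolves to the Intel mpirt copy if the path is wrong
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
which mpirun    # should now point at /opt/openmpi/bin/mpirun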
On Apr 1, 2013, at 5:12 AM, Pradeep Jha wrote:
> /opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpirun: line 96:
Hello OpenMPI
We are seriously considering deploying OpenMPI 1.6.5 for production (and
1.7.2 for testing) on HPC clusters which consist of nodes with *different
types of networking interfaces*.
1) Interface selection
We are using OpenMPI 1.6.5 and was wondering how one would go about
selectin
lk to each other at FDR speeds and QDR link pairs at QDR).
I guess if we use the RC connection types then it does not matter to
OpenMPI.
thanks ....
Michael
On Fri, Jul 5, 2013 at 4:59 PM, Ralph Castain wrote:
> I can't speak for MVAPICH - you probably need to ask them about this
>
Great ... thanks. We will try it out as soon as the common backbone IB is
in place.
cheers
Michael
On Fri, Jul 5, 2013 at 6:10 PM, Ralph Castain wrote:
> As long as the IB interfaces can communicate to each other, you should be
> fine.
>
> On Jul 5, 2013, at 3:26 PM, Michae
directly without
copying to host memory first?
Or in general, what level of CUDA support is there on 1.6.5 and 1.7.2 ? Do
you support SDK 5.0 and above?
Cheers ...
Michael
you do anything special when the read/write buffers map to physical memory
belonging to Socket 2? Or do you avoid using buffers mapping to memory
that belongs to (is accessible via) the other socket?
Has this situation improved with Ivy Bridge systems or Haswell?
Cheers
Michael
thanks,
Do you guys have any plan to support Intel Phi in the future? That is,
running MPI code on the Phi cards or across the multicore and Phi, as Intel
MPI does?
thanks...
Michael
On Sat, Jul 6, 2013 at 2:36 PM, Ralph Castain wrote:
> Rolf will have to answer the question on level
does anything special with memory mapping to work
around this. And if with Ivy Bridge (or Haswell) the situation has improved.
thanks
Mike
On Mon, Jul 8, 2013 at 9:57 AM, Jeff Squyres (jsquyres)
wrote:
> On Jul 6, 2013, at 4:59 PM, Michael Thomadakis
> wrote:
>
> > When you stack runs
Thanks ...
Michael
On Mon, Jul 8, 2013 at 8:50 AM, Rolf vandeVaart wrote:
> With respect to the CUDA-aware support, Ralph is correct. The ability to
> send and receive GPU buffers is in the Open MPI 1.7 series. And
> incremental improvements will be added to the Open MPI 1.7 seri
l 8, 2013, at 11:35 AM, Michael Thomadakis
> wrote:
>
> > The issue is that when you read or write PCIe gen3 data to non-local
> NUMA memory, Sandy Bridge will use the inter-socket QPI to get this data
> across to the other socket. I think there is considerable limitation in
Thanks Tom, that sounds good. I will give it a try as soon as our Phi host
here gets installed.
I assume that all the prerequisite libs and bins on the Phi side are
available when we download the Phi s/w stack from Intel's site, right?
Cheers
Michael
On Mon, Jul 8, 2013 at 12:
something done by and at the HCA device driver level.
Anyways, as long as the memory performance difference is at the levels you
mentioned, then there is no "big" issue. Most likely the device driver gets
space from the same NUMA domain as that of the socket the HCA is attached to.
Thanks
Thanks Tom, I will test it out...
regards
Michael
On Mon, Jul 8, 2013 at 1:16 PM, Elken, Tom wrote:
>
> Thanks Tom, that sounds good. I will give it a try as soon as our Phi host
> here gets installed.
>
> I assume that all the prerequis
e data alongside the regular inter-NUMA memory traffic. It may
be the case that Intel has re-provisioned QPI to be able to accommodate
more PCIe traffic.
Thanks again
Michael
On Mon, Jul 8, 2013 at 1:01 PM, Brice Goglin wrote:
> The driver doesn't allocate much memory here. Maybe
t was not clear to me who and with what logic was
allocating memory. But definitely for IB it makes sense that the user
provides pointers to their memory.
thanks
Michael
On Mon, Jul 8, 2013 at 1:07 PM, Jeff Squyres (jsquyres)
wrote:
> On Jul 8, 2013, at 2:01 PM, Brice Goglin wrote:
>
>
Michael
On Mon, Jul 8, 2013 at 4:30 PM, Tim Carlson wrote:
> On Mon, 8 Jul 2013, Elken, Tom wrote:
>
> It isn't quite so easy.
>
> Out of the box, there is no gcc on the Phi card. You can use the cross
> compiler on the host, but you don't get gcc on the Phi b
guess you should be able to directly do this from the same OpenMPI mpirun
command line ...
thanks
Michael
On Tue, Jul 9, 2013 at 12:18 PM, Tim Carlson wrote:
> On Mon, 8 Jul 2013, Tim Carlson wrote:
>
> Now that I have gone through this process, I'll report that it works with
>
nvironment.
Also, is it based on OFED 1.5.4.1, or on which OFED?
Best regards
Michael
If the machine is multi-processor you might want to add the sm btl. That
cleared up some similar problems for me, though I don't use MX so your
mileage may vary.
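Something like this is what I mean (a sketch; adjust the btl list to the interconnects you actually have, and a.out is a placeholder):
mpirun --mca btl mx,sm,self -np 4 ./a.out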
On 7/5/07, SLIM H.A. wrote:
Hello
I have compiled openmpi-1.2.3 with the --with-mx=
configuration and gcc compiler. On testing wi
If you are having difficulty getting OpenMPI set up yourself, you
might look into OSCAR or Rocks; they make setting up your cluster much
easier and include various MPI packages as well as other utilities for
reducing your management overhead.
I can help you (off list) get set up with OSCAR if you
On 7/17/07, Bill Johnstone wrote:
Thanks for the help. I've replied below.
--- "G.O." wrote:
> 1- Check to make sure that there are no firewalls blocking
> traffic between the nodes.
There is no firewall in-between the nodes. If I run jobs directly via
ssh, e.g. "ssh node4 env" they wo
0 uses some lame MPI library instead of OpenMPI! Any ideas on where
the problem could be?
Michael
********
Mgr. Michael Komm
Tokamak Department
Institute of Plasma Physics of Academy of Sciences of Czech
Thanks Christian, it works just fine now!
I altered LIBRARY_PATH and LD_PATH but not this one :)
Michael
__
> From: christian.bec...@math.uni-dortmund.de
> To: Open MPI Users
> Date: 07.08.2
wline inside
substitute pattern
Michael Clover
mclo...@san.rr.com
e
is trying to run gcc, in a way that doesn't look correct
OMPI_AS_GLOBAL =
OMPI_AS_LABEL_SUFFIX =
OMPI_CC_ABSOLUTE = DISPLAY known
/usr/bin/gcc
OMPI_CONFIGURE_DATE = Sat Oct 6 16:05:59 PDT 2007
OMPI_CONFIGURE_HOST = michael-clovers-computer.local
OMPI_CONFIGURE_USER = mrc
OMPI_CXX_ABSO
nfstatA1BhUF/subs-3.sed line 33: unterminated `s' command
sed: file ./confstatA1BhUF/subs-4.sed line 4: unterminated `s' command
config.status: creating orte/include/orte/version.h
Michael Clover
mclo...@san.rr.com
arwin8
Thread model: posix
gcc version 4.0.1 (Apple Computer, Inc. build 5367)
I thought the " DISPLAY known" might have been some result of
my .tcshrc file, so I started up sh in a terminal window before
running configure and make, but I still get the same error
Michael Clover
mclo...@
e unterminated newlines for sed either, and also makes
correctly. I must have mistyped something when I grep'ed for
"display" or "known" before my reply to Reuti, since I didn't find it
until your question. thanks for all the help.
Michael Clover
mclo...@san.rr.com
pc, F77=ifort, F90=ifort
Intel-Compiler: both, C and Fortran 10.0.025
Is there any known solution?
Thanks,
Michael
On 06.11.2007, at 10:42, Åke Sandgren wrote:
Hi,
On Tue, 2007-11-06 at 10:28 +0100, Michael Schulz wrote:
Hi,
I have the same problem described by some other users: I can't
compile anything if I'm using Open MPI compiled with the Intel
compiler.
ompi_info --all
Seg
aper.php?c=6123&;
IMPORTANT DATES
Abstract submissions due: February 4, 2008
Full paper submission due: April 14, 2008
Acceptance notification: May 3, 2008
Camera-ready due: May 26, 2008
Conference: August 26-29, 2008
CHAIR
Michael Alexander (chair), WU Vienna, Austria
Stephen Childs (co-chai
who are truly bothered by them can always
add %exclude directives if they so choose.
Michael
--
Michael Jennings
Linux Systems and Cluster Admin
UNIX and Cluster Computing Group
u use %exclude in only one of the locations where the file is
listed (presumably the "less correct" one), it will solve the problem.
Michael
--
Michael Jennings
Linux Systems and Cluster Admin
UNIX and Cluster Computing Group
rol, since Perceus would need to take
> over DHCP services to do its magic.
At the risk of being slightly off-topic, Perceus actually has no
problem working with a separate DHCP server. It has to be properly
configured to hand out the payload, of course, but it works fine.
Michael
--
Mich
r'
Hints on how to build on this machine are greatly welcome. I had the
same problems when using openmpi-1.3.3.tar.gz and my normal development
environment (less recent m4 and autotools, and gcc-4.1.2)
Thanks,
Michael
Hello.
Thanks!
On Wed, 2009-09-02 at 10:51 +0300, Jeff Squyres wrote:
> On Aug 27, 2009, at 8:34 PM, Michael Hines wrote:
...
> > ltdl.c:(.text+0x10d3): undefined reference to
> > `lt_libltdlc_LTX_preloaded_symbols'
> >
>
> Hmm. This feels like a mismatch o
On Wed, 2009-09-02 at 10:51 +0300, Jeff Squyres wrote:
> On Aug 27, 2009, at 8:34 PM, Michael Hines wrote:
> > libtool: link: gcc -O3 -DNDEBUG -finline-functions -fno-strict-
> > aliasing
> > -pthread -fvisibility=hidden -o opal_wrapper
> > opal_wrapper.o ../../../op
care of a lot, but is not
really general.
Is there a recommended way?
regards,
Michael
A user could set it in
the job file (or even qalter it post submission):
#PBS -v VARNAME=foo:bar:baz
For VARNAME, I think simply "MODULES" or "EXTRAMODULES" could do.
With best regards,
Michael
On Nov 17, 2009, at 4:29 , David Singleton wrote:
>
> I'm n
i.e., "forward all envars
> -except- the specified one(s)".
The issue with -x is that modules may set any random variable. The reverse
option to -x would be great of course. MPICH2 and Intel MPI pass all but a few
(known to be host-specific) variables by default, and counter that
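For comparison, the explicit -x form looks like this (a sketch; MODULEPATH and LOADEDMODULES are just examples of the module-related variables one might need to name):
mpirun -x MODULEPATH -x LOADEDMODULES -np 16 ./a.out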
On Nov 17, 2009, at 10:17 , Michael Sternberg wrote:
On Nov 17, 2009, at 9:10 , Ralph Castain wrote:
Not exactly. It completely depends on how Torque was setup - OMPI
isn't forwarding the environment. Torque is.
I actually tried compiling OMPI with the tm interface a couple of
ver