At FASRC Harvard we generally keep up with the latest, so we are on 22.05.2.
-Paul Edmon-
On 8/16/2022 9:51 AM, Jeff Squyres (jsquyres) via users wrote:
I have a curiosity question for the Open MPI user community: what
version of SLURM are you using?
I ask because we're honestly cu
not state that about gfortran and intel,
by the way.)
So these guys may be snarky, but they can Fortran, definitely. And if Open MPI
bindings can be compiled by this compiler, they are likely to be very
standard-conforming.
Have a nice day and a nice year 2022,
Paul Kapinos
On 12/30/21
ank 3 with PID 0 on node jp1 exited on signal 9
(Killed).
It seems I should set this MCA parameter "orte_base_help_aggregate" to 0 in
order to see the error messages.
How can I do this? I suppose I should do it before running the code. Is this
correct?
Thank you,
Paul
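For reference, a minimal sketch of how an MCA parameter like orte_base_help_aggregate can be set before a run (the application name and rank count below are just placeholders); it can go on the mpirun command line or into the environment as an OMPI_MCA_* variable:

$ mpirun --mca orte_base_help_aggregate 0 -np 4 ./a.out
# or, equivalently, export it before launching (Bourne shell):
$ export OMPI_MCA_orte_base_help_aggregate=0
$ mpirun -np 4 ./a.out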
the “slot” although the message
lists four options - four options but zero examples.
Thank you,
Paul
> On Nov 7, 2020, at 8:23 PM, Gilles Gouaillardet via users
> wrote:
>
> Paul,
>
> a "slot" is explicitly defined in the error message you copy/pasted:
>
>
s, cores and threads, but not slots.
What shall I specify instead of "-np 12"?
Thank you,
Paul
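A hedged sketch of the usual way to provide slots (hostnames, slot counts and the application name are placeholders): declare them in a hostfile and pass it to mpirun, or explicitly allow oversubscription when deliberately running more ranks than detected cores:

$ cat myhostfile
node01 slots=6
node02 slots=6
$ mpirun --hostfile myhostfile -np 12 ./a.out
$ mpirun --oversubscribe -np 12 ./a.out     # alternative on a single small machine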
As a coda to this, I managed to get UCX 1.6.0 built with threading and
Open MPI 4.0.1 built against it, using this:
https://github.com/openucx/ucx/issues/4020
That appears to be working.
-Paul Edmon-
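A rough sketch of that kind of build for anyone following along (install prefixes are placeholders; the exact flags that made it work are in the linked issue): UCX is configured with multi-threading support, and Open MPI is then configured against that UCX installation:

$ cd ucx-1.6.0
$ ./configure --prefix=$HOME/sw/ucx --enable-mt && make -j && make install
$ cd ../openmpi-4.0.1
$ ./configure --prefix=$HOME/sw/openmpi --with-ucx=$HOME/sw/ucx && make -j && make install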
On 8/26/19 9:20 PM, Joshua Ladd wrote:
**apropos :-)
On Mon, Aug 26, 2019 at 9:19 PM Joshua Ladd
It's the public source. The one I'm testing with is the latest internal
version. I'm going to cc Pete Mendygral and Julius Donnert on this as
they may be able to provide you the version I'm using (as it is not
ready for public use).
-Paul Edmon-
On 8/26/19 9:20 P
UCX to get MPI_THREAD_MULTIPLE to work at all).
-Paul Edmon-
On 8/23/2019 9:31 PM, Paul Edmon wrote:
Sure. The code I'm using is the latest version of Wombat
(https://bitbucket.org/pmendygral/wombat-public/wiki/Home , I'm using
an unreleased updated version as I know the devs).
reason not to build with MT enabled.
Anyways that's the deeper context.
-Paul Edmon-
On 8/23/2019 5:49 PM, Joshua Ladd via users wrote:
Paul,
Can you provide a repro and command line, please. Also, what network
hardware are you using?
Josh
On Fri, Aug 23, 2019 at 3:35 PM Paul Edmon vi
ly we have just used the regular IB Verbs with no problem. My
guess is that there is either some option in OpenMPI I am missing or
some variable in UCX I am not setting. Any insight on what could be
causing the stalls?
-Paul Edmon-
On 10/20/2017 12:24 PM, Dave Love wrote:
> Paul Kapinos writes:
>
>> Hi all,
>> sorry for the long long latency - this message was buried in my mailbox for
>> months
>>
>>
>>
>> On 03/16/2017 10:35 AM, Alfio Lazzaro wrote:
>>> Hel
nfiniBand is not prohibited, the MPI_Free_mem() takes ages.
(I'm not familiar with CCachegrind, so forgive me if I'm wrong.)
Have a nice day,
Paul Kapinos
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 52074 Aach
In the 1.10.x series there were 'memory hooks' - Open MPI did take some care about
the alignment. This was removed in the 2.x series, cf. the whole thread at my link.
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D
seems that something changed starting from version
2.x, and the FDR system performs much worse than with the prior 1.10.x release.
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
+#endif
OPAL_CR_EXIT_LIBRARY();
return MPI_SUCCESS;
```
This will at least tell us if the innards of our ALLOC_MEM/FREE_MEM (i.e.,
likely the registration/deregistration) are causing the issue.
On Mar 15, 2017, at 1:27 PM, Dave Love wrote:
Paul Kapinos writes:
Nathan,
unfortunat
Hi,
On 03/16/17 10:35, Alfio Lazzaro wrote:
We would like to ask you which version of CP2K you are using in your tests
Release 4.1
and
if you can share with us your input file and output log.
The question goes to Mr Mathias Schumacher, on CC:
Best
Paul Kapinos
(Our internal ticketing
).
On 03/07/17 20:22, Nathan Hjelm wrote:
If this is with 1.10.x or older run with --mca memory_linux_disable 1. There is
a bad interaction between ptmalloc2 and psm2 support. This problem is not
present in v2.0.x and newer.
-Nathan
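In practice that looks like the following (application name is a placeholder); since mpirun forwards --mca settings to the launched processes as OMPI_MCA_* environment variables, the parameter can also be exported by hand:

$ mpirun --mca memory_linux_disable 1 -np 4 ./a.out
$ export OMPI_MCA_memory_linux_disable=1    # alternative: set it in the environment
$ mpirun -np 4 ./a.out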
On Mar 7, 2017, at 10:30 AM, Paul Kapinos wrote:
Hi Dav
for multi-node jobs, and that
doesn't show the pathological behaviour iff openib is suppressed.
However, it requires ompi 1.10, not 1.8, which I was trying to use.
ntion. We have a (nasty)
workaround, cf.
https://www.mail-archive.com/devel@lists.open-mpi.org/msg00052.html
As far as I can see this issue is on InfiniBand only.
Best
Paul
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 5207
nce bug in MPI_Free_mem your application can be horribly slow (seen:
CP2K) if the InfiniBand fallback of OPA is not disabled manually, see
https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html
Best,
Paul Kapinos
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
R
!
Paul Kapinos
On 12/14/16 13:29, Paul Kapinos wrote:
Hello all,
we seem to run into the same issue: 'mpif90' sigsegvs immediately for Open MPI
1.10.4 compiled using Intel compilers 16.0.4.258 and 16.0.3.210, while it works
fine when compiled with 16.0.2.181.
It seems to be a compiler i
ntel libs (as
said, changing out these solves/raises the issue) we will fall back to the
16.0.2.181 compiler version. We will try to open a case with Intel - let's see...
Have a nice day,
Paul Kapinos
On 05/06/16 14:10, Jeff Squyres (jsquyres) wrote:
Ok, good.
I asked that question beca
look at the core dump of 'ompi_info' like the one below.
(Yes, we know that "^tcp,^ib" is a bad idea.)
Have a nice day,
Paul Kapinos
P.S. Open MPI: 1.10.4 and 2.0.1 have the same behaviour
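For reference, a sketch of the usual BTL selection syntax (component names are only examples): the leading caret negates the entire comma-separated list, so it is written once at the front, and inclusive and exclusive lists must not be mixed:

$ mpirun --mca btl ^tcp,openib -np 4 ./a.out     # exclude the tcp and openib components
$ mpirun --mca btl self,sm,openib -np 4 ./a.out  # or name exactly the components to use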
--
[lnm001:39
lated one of the rules of the
Open MPI release series). Anyway, if there is a simple fix for your
test case for the 1.10 series, I am happy to provide a patch. It might
take me a day or two however.
Edgar
On 12/9/2015 6:24 AM, Paul Kapinos wrote:
Sorry, forgot to mention: 1.10.1
Open MPI: 1.10.1
OPAL repo revision: v1.10.0-178-gb80f802
OPAL release date: Nov 03, 2015
MPI API: 3.0.0
Ident string: 1.10.1
On 12/09/15 11:26, Gilles Gouaillardet wrote:
Paul,
which OpenMPI version are you using ?
thanks for providing a simple reproducer
amples of divergent behaviour but this one is quite handy.
Is that a bug in OMPIO or did we miss something?
Best
Paul Kapinos
1) http://www.open-mpi.org/faq/?category=ompio
2) http://www.open-mpi.org/community/lists/devel/2015/12/18405.php
3) (ROMIO is default; on local hard drive
lls like an error in the
'hwloc' library.
Is there a way to disable hwloc or to debug it somehow?
(besides building a debug version of hwloc and OpenMPI)
Best
Paul
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center f
performance without being verbose.
Best
Paul
Is there no bug in the MPI_THREAD_MULTIPLE implementation in 1.7.2 and 1.7.3? My
test program just hangs now
On 10/23/13 19:47, Jeff Hammond wrote:
On Wed, Oct 23, 2013 at 12:02 PM, Barrett, Brian W wrote:
On 10/22/13 10:23 AM, "Jai Dayal"
Vanilla Linux ofed from RPM's for Scientific Linux release 6.4 (Carbon) (= RHEL
6.4).
No ofed_info available :-(
On 07/31/13 16:59, Mike Dubman wrote:
Hi,
What OFED vendor and version do you use?
Regards
M
On Tue, Jul 30, 2013 at 8:42 PM, Paul Kapinos <kapi...@rz.rwth-aachen.de>
Best,
Paul Kapinos
P.S. There should be no connection problem between the nodes; a test
job with one process on each node was run successfully just before starting
the actual job, which also ran OK for a while - until calling MPI_Allt
isturb the production on these nodes (and different MPI
versions for different nodes are doofy).
Best
Paul
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 24
pilers) or -lmpi_mt instead of -lmpi (other compilers). However, Intel
MPI is not free.
Best,
Paul Kapinos
Also, I recommend to _always_ check what kind of threading level you ordered
and what you got:
print *, 'hello, world!', MPI_THREAD_MULTIPLE, provided
On 05/31/
usually a bit
dusty.
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
tight integration with LSF 8.0 now =)
For the future, if you need a testbed, I can grant you user access...
best
Paul
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +4
g.
Any suggestions?
Thanks
Tim Dunn
oducer and send it to the compiler developer team :o)
Best
Paul Kapinos
On 04/05/13 17:56, Siegmar Gross wrote:
PPFC mpi-f08.lo
"../../../../../openmpi-1.7/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Line = 1,
Column = 1: INTERNAL: Interrupt: Segmentation fault
--
Dipl.-
ations. Otherwise we will ignore it, probably...
Best
Paul Kapinos
(*) we have a kind of internal test suite in order to check our MPIs...
P.S. $ mpicc -O0 -m32 -o ./mpiIOC32.exe ctest.c -lm
P.S.2 an example configure line:
./configure --with-openib --with-lsf --with-devel-headers
--enable-con
tible from 11.x through 13.x
versions.
So, the recommended solution is to build your own version of Open MPI with
whatever compiler you use.
Greetings,
Paul
P.S. As Hristo said, changing the Fortran compiler vendor and using the
precompiled Fortran header would never work: the syntax of these .mo
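A rough sketch of that build-your-own approach (compiler names and install prefix are placeholders; pick the same compilers your applications use):

$ ./configure CC=icc CXX=icpc FC=ifort --prefix=/opt/openmpi-intel
$ make -j && make install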
nough-registred-mem" computation for Mellanox HCAs? Any
other idea/hint?
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
can run
the same program from each machine in the hostfile. I would still be very
interested to know what kind of MPI situations are likely to cause these kinds
of seg faults….
-Paul
On Feb 11, 2013, at 8:27 AM, Jeff Squyres (jsquyres) wrote:
> Can you provide any more detail?
>
>
> Hello,
> I am getting the following stacktrace when running a simple hello world MPI
> C++ program on 2 machines:
>
>
> mini:mpi_cw paul$ mpirun --prefix /usr/local/Cellar/open-mpi/1.6.3 --hostfile
> hosts_home -np 2 ./pi 100
> rank and name: 0 aka mini.
. Just needed (Bourne shell) to
export LD_RUN_PATH=/gpfs/apps/gcc/v4.7.2/lib64:$LD_RUN_PATH
before configuring OpenMPI with the new gcc on the PATH. Thanks to all who
responded to this and pointed me in the right direction.
--
Paul Hatton
High Performance Computing and Visualisation
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +
We 'tune' our Open MPI by setting environment variables....
Best
Paul Kapinos
On 12/19/12 11:44, Number Cruncher wrote:
Having run some more benchmarks, the new default is *really* bad for our
application (2-10x slower), so I've been looking at the source to try and figure
out
Thanks for your help.
--
Paul Hatton
High Performance Computing and Visualisation Specialist
IT Services, The University of Birmingham
Ph: 0121-414-3994 Mob: 07785-977340 Skype: P.S.Hatton
[Service Manager, Birmingham Environment for Academic Research]
[Also Technical Director, IBM Visual and Spati
, where is your libgfortran.so.3?
Does your system have one in /usr/lib64 (assuming you're on a 64-bit system) or
in /usr/projects/hpcsoft/moonlight/gcc/4.7.2/somewhere?
I'll have a play with my setup as well. Should have spotted this myself.
Thanks for your help
--
Paul Hatton
High
failed one attached.
--
Paul Hatton
High Performance Computing and Visualisation Specialist
IT Services, The University of Birmingham
Ph: 0121-414-3994 Mob: 07785-977340 Skype: P.S.Hatton
[Service Manager, Birmingham Environment for Academic Research]
[Also Technical Director, IBM Visual and
Oh, sorry - I tried a build with the system gcc and it worked. I'll repeat the
failed one and get it to you. Sorry about that.
--
Paul Hatton
High Performance Computing and Visualisation Specialist
IT Services, The University of Birmingham
Ph: 0121-414-3994 Mob: 07785-977340 Skype: P.S.H
@bb2login04 openmpi-1.6.3]$ module unload apps/gcc
[appmaint@bb2login04 openmpi-1.6.3]$ which gcc
/usr/bin/gcc
clutching at straws a bit here ... but I have built it with Intel 2013.0.079
which is also installed in the applications area and loaded with a module.
--
Paul Hatton
High Performance Computing
Thanks. zipped config.log attached
--
Paul Hatton
High Performance Computing and Visualisation Specialist
IT Services, The University of Birmingham
Ph: 0121-414-3994 Mob: 07785-977340 Skype: P.S.Hatton
[Service Manager, Birmingham Environment for Academic Research]
[Also Technical Director
13.0.079 and
also the system (Scientific Linux 6.3) gcc which is 4.4.6
I've attached the output from the configure command.
Thanks
--
Paul Hatton
High Performance Computing and Visualisation Specialist
IT Services, The University of Birmingham
Ph: 0121-414-3994 Mob: 07785-977340 Skype: P.
unning#mpi-preconnect) there are no such
huge latency outliers for the first sample.
Well, we know about the warm-up and lazy connections.
But 200x?!
Any comments on whether that is OK?
Best,
Paul Kapinos
(*) E.g. HPCC explicitly says in http://icl.cs.utk.edu/hpcc/faq/index.html#132
> Addit
rmance/stability?
Daniel
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter
2 -H linuxbdc01,linuxbdc02 /home/pk224850/bin/ulimit_high.sh MPI_FastTest.exe
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
ulimit_high
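The contents of the ulimit_high.sh wrapper are not shown above; a hypothetical minimal version of such a wrapper (which limits it raises is an assumption) would just lift the limit and exec the real program passed as its arguments:

#!/bin/sh
# hypothetical sketch of a ulimit-raising wrapper
ulimit -l unlimited    # raise the locked-memory limit for this process
exec "$@"              # then replace the shell with the actual MPI program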
Resolution to this. Upgrading to OpenMPI 1.6.2 and getting Intel
Cluster Studio 2013 did the trick.
-Paul Edmon-
On 9/8/2012 4:59 PM, Paul Edmon wrote:
Interesting. I figured that might be the case. I will have to contact
Intel and find out if we can get a newer version.
Thanks.
-Paul Edmon-
On 9/8/2012 3:18 PM, Jeff Squyres wrote:
Did this ever get a followup? If not...
We've seen problems with specific versions of the Intel com
Yevgeny,
we at RZ Aachen also see problems very similar to those described in the initial posting
of Yong Qin, on VASP with Open MPI 1.5.3.
We're currently looking for a data set able to reproduce this. I'll write an
email if we catch one.
Best,
Paul
On 09/05/12 13:52, Yevgeny Kliteynik w
even tried compiling the Intel MPI Benchmark, which failed in a
similar way, which indicates that it's a problem specifically with the
interaction of MPI and the Intel compiler and not the code I was working
with.
Thanks.
-Paul Edmon-
#ib-low-reg-mem
"Waiting forever" for a single operation is one of symptoms of the problem
especially in 1.5.3.
best,
Paul
P.S. the lower performance with 'big' chunks is a known phenomenon, cf.
http://www.scl.ameslab.gov/netpipe/
(image at the bottom of the page). But the
:-) It's basically trying to tell you "I couldn't
> find a version of MPI_FILE_READ_AT that matches the parameters you passed."
>
>
>
> On Aug 6, 2012, at 4:09 PM, Paul Romano wrote:
>
> > When I try to use parallel I/O routines like MPI_File_write_at
specific subroutine for the generic
'mpi_file_read_at' at (1)
I'm using Open MPI 1.6 compiled with --with-mpi-f90-size=medium. I've also
tried both gfortran and ifort, and both give the same compilation error.
Has anyone else seen this behavior?
Best regards,
Paul
ouble
over our infiniband network. I'm running a fairly large problem (uses about
18GB), and part way in, I get the following errors:
You say "big footprint"? I hear a bell ringing...
http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
--
Dipl.-Inform. Paul Kapin
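As a sketch of the kind of check that FAQ entry leads to (assuming a Mellanox mlx4 HCA; the module parameters named here are the ones that FAQ entry discusses, and the chosen values are only examples): the amount of registerable memory is governed by mlx4_core module parameters, which can be inspected and raised:

$ cat /sys/module/mlx4_core/parameters/log_num_mtt
$ cat /sys/module/mlx4_core/parameters/log_mtts_per_seg
$ echo "options mlx4_core log_num_mtt=24 log_mtts_per_seg=3" | sudo tee /etc/modprobe.d/mlx4_core.conf
# then reload the driver (or reboot) for the new values to take effect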
--
[1]+ Exit 231 /usr/local/bin/mpirun -np 4 ./cpmd.x
1-h2-wave.inp > 1-h2-wave.out
======
I am unable to find out the reason for that error. Please help. My Open-MPI
version is 1.6.
With regards
Abhra Paul
ue? Are you interested in reproducing this?
Best,
Paul Kapinos
P.S: The same test with Intel MPI cannot run using DAPL, but runs very fine over
'ofa' (= native verbs as Open MPI uses them). So I believe the problem is rooted in
the communication pattern of the program; it sends very LARGE messag
Hi,
I'm running Open MPI on the Rackspace cloud over the Internet using MPI_Spawn. It means
I run the parent on my PC and the children on Rackspace cloud machines.
Rackspace provides direct IP addresses of the machines (no NAT), that is why it
is possible.
Now, there is a communicator involving only the
e or both of these values and try again.
--
$ ssh linuxbdc01 cat /proc/cpuinfo | grep processor | wc -l
24
$ cat /proc/cpuinfo | grep processor | wc -l
4
Best,
Paul
P.S. Using Open MPI 1.5.3, waiting for 1.5.5 :o)
P.S.2. any u
able].
Ralph, Jeff, anybody - any interest in reproducing this issue?
Best wishes,
Paul Kapinos
P.S. Open MPI 1.5.3 used - still waiting for 1.5.5 ;-)
Some error messages:
with 6 procs over 6 Nodes:
--
mlx4:
RANK is? (This would make
sense as it looks like the OMPI_COMM_WORLD_SIZE, OMPI_COMM_WORLD_RANK pair.)
If yes, maybe it should also be documented in the Wiki page.
2) OMPI_COMM_WORLD_NODE_RANK - is that just a duplicate of
OMPI_COMM_WORLD_LOCAL_RANK?
Best wishes,
Paul Kapinos
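For anyone who wants to see what these variables contain on their own installation, a quick sketch (the exact set of OMPI_* variables can differ between Open MPI versions):

$ mpirun -np 4 sh -c 'echo "rank=$OMPI_COMM_WORLD_RANK local=$OMPI_COMM_WORLD_LOCAL_RANK node=$OMPI_COMM_WORLD_NODE_RANK size=$OMPI_COMM_WORLD_SIZE"'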
--
Dipl.-
Try out the attached wrapper:
$ mpiexec -np 2 masterstdout
mpirun -n 2
Is there a way to have mpirun just merge the STDOUT of one process into its
STDOUT stream?
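The attached wrapper itself is not reproduced here; a hypothetical minimal script in the same spirit (name and details are assumptions) would use the OMPI_COMM_WORLD_RANK variable that Open MPI sets for each launched process and discard the stdout of every rank except rank 0:

#!/bin/sh
# hypothetical masterstdout-style wrapper: mpiexec -np 2 ./wrapper ./a.out
if [ "$OMPI_COMM_WORLD_RANK" = "0" ]; then
    exec "$@"                  # rank 0: keep stdout
else
    exec "$@" > /dev/null      # all other ranks: drop stdout
fi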
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
r 1.5.x is a good idea; but it is always a bit
tedious... Will 1.5.5 arrive any time soon?
Best wishes,
Paul Kapinos
Ralph Castain wrote:
I don't see anything in the code that limits the number of procs in a rankfile.
> Are the attached rankfiles the ones you are trying to use?
I
s computer dimension is a bit too big for the pinning
infrastructure now. A bug?
Best wishes,
Paul Kapinos
P.S. see the attached .tgz for some logzz
--
Rankfiles
Rankfiles provide a means for specifying detailed i
precise answers are impossible
without seeing any code snippets and/or logs.
Best,
Paul
Anas Al-Trad wrote:
Dear people,
In my application, I get a segmentation fault (integer divide-by-zero)
when calling the MPI_Cart_sub routine. My program is
as follows: I have 128 ranks, I make
.conf file."
Well. Any suggestions? Is OpenMPI able to use DAPL 2.0 on Linux at all?
Merry Christmas from westernmost Germany,
Paul
Paul Kapinos wrote:
Good morning,
We've never recommended the use of dapl on Linux. I think it might
have worked at one time, but I don't thi
We have reported this before. We are still not able to do it fully.
However, we are partially successful now. We have used a machine with a static IP address
and modified the router settings by opening all ssh ports. The master runs on this
machine and the slaves on EC2.
Now we can run the "Hello world" ove
ould wear sackcloth and
ashes... :-/
Best,
Paul
Anyway, since 1.2.8 here I build 5, sometimes more versions,
all from the same tarball, but in separate build directories,
as Jeff suggests.
[VPATH] Works for me.
My two cents.
Gus Correa
Jeff Squyres wrote:
Ah -- Ralph pointed out the rel
---
Because of the anticipated performance gain we would be very keen on
using DAPL with Open MPI. Does somebody have any idea what could be
wrong and what to check?
On Dec 2, 2011, at 1:21 PM, Paul Kapinos wrote:
Dear Open MPI developer,
OFED 1.5.4 will
FOBA -x BAFO -x RR -x ZZ"
Well, these are my user's dreams; but maybe this gives some inspiration to the
Open MPI programmers. As said, the situation where a [long] list of
envvars has to be provided is quite common, and typing everything on the
command line is tedious and error-prone.
Best wi
Hello Jeff, Ralph, all!
Meaning that per my output from above, what Paul was trying should have worked, no?
I.e., setenv'ing OMPI_, and those env vars should magically show up
in the launched process.
In the -launched process- yes. However, his problem was that they do not show
up fo
on as
possible)
Best wishes and a nice weekend,
Paul
http://www.openfabrics.org/downloads/OFED/release_notes/OFED_1.5.4_release_notes
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23
Ralph Castain open-mpi.org> writes:
>
> This has come up before - I would suggest doing a quick search of "ec2" on our
user list. Here is one solution:
> On Jun 14, 2011, at 10:50 AM, Barnet Wagman wrote:I've put together a simple
system for running OMPI on EC2 (Amazon's cloud computing service)
Jeff Squyres cisco.com> writes:
>
> On Nov 30, 2011, at 6:03 AM, Jaison Paul wrote:
>
> > Yes, we have set up .ssh file on remote EC2 hosts. Is there anything else
that we should be taking care of when
> dealing with EC2?
>
> I have heard that Open MPI's
that but failed. Would try again.
Yes, we have set up .ssh file on remote EC2 hosts. Is there anything else that
we should be taking care of when dealing with EC2?
Jaison
> > Hi,
> >
> > On 24.11.2011 at 05:26, Jaison Paul wrote:
> >
> >> I am trying to access O
ovided and thus treated *differently* than other envvars:
$ man mpiexec
Exported Environment Variables
All environment variables that are named in the form OMPI_* will
automatically be exported to new processes on the local and remote
nodes.
So, does the man page tell lies, or does this
command line options. This should not be so?
(I also tried advising it to provide the envvars via -x
OMPI_MCA_oob_tcp_if_include -x OMPI_MCA_btl_tcp_if_include - nothing
changed. Well, they are OMPI_ variables and should be provided in any case.)
Best wishes and many thanks for all,
Paul K
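For reference, the two forwarding mechanisms under discussion look like this in practice (variable names are only examples): -x forwards an arbitrary environment variable by name, while anything already named OMPI_* is supposed to be exported to the launched processes automatically:

$ mpirun -x MY_DEBUG_LEVEL -x MY_DATA_DIR -np 4 ./a.out
$ export OMPI_MCA_btl_tcp_if_include=ib0    # OMPI_* variables should need no -x
$ mpirun -np 4 ./a.out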
Hi all,
I am trying to access OpenMPI processes over the Internet using ssh, and am not
quite successful yet. I believe that I should be able to do it.
I have to run one process on my PC and the rest on a remote cluster over the
Internet. I have set the public keys (at .ssh/authorized_keys) to access
r
above command should disable
the usage of eth0 for the MPI communication itself, but it hangs just before
MPI is started, doesn't it? (Because one process is missing, MPI_INIT
cannot be passed.)
Now a question: is there a way to forbid mpiexec from using some
interfaces at all?
Best wishes,
Pau
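For the record, the usual knobs for this are the interface include/exclude MCA parameters (interface names are examples); the TCP BTL used for MPI traffic and the out-of-band (oob) channel used for launch are controlled separately:

$ mpirun --mca btl_tcp_if_exclude lo,eth0 --mca oob_tcp_if_exclude lo,eth0 -np 2 ./a.out
$ mpirun --mca btl_tcp_if_include ib0 --mca oob_tcp_if_include ib0 -np 2 ./a.out   # or list only what to use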
constellation. The next thing I will try will be the installation of
1.5.4 :o)
Best,
Paul
P.S. started:
$ /opt/MPI/openmpi-1.5.3/linux/intel/bin/mpiexec --hostfile
hostfile-mini -mca odls_base_verbose 5 --leave-session-attached
--display-map helloworld 2>&1 | tee helloworld.txt
ea what is going on?
Best,
Paul Kapinos
P.S: no alias names used, all names are real ones
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
l
[long] list of variables.
Is there some envvar which, when set to a list of names of other
envvars, would achieve the same effect as setting -x on the command
line of mpirun?
Best wishes
Paul Kapinos
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen Univers
worked around the problem by switching our production to
1.5.3, this issue is not a "burning" one; but I decided still to post
this because any issue with such fundamental things may be interesting for
developers.
Best wishes,
Paul Kapinos
(*) http://www.netlib.org/ben
will trigger our admins...
Best wishes,
Paul
m4 (GNU M4) 1.4.13 (OK)
autoconf (GNU Autoconf) 2.63 (Need: 2.65, NOK)
automake (GNU automake) 1.11.1 (OK)
ltmain.sh (GNU libtool) 2.2.6b (OK)
On Jul 22, 2011, at 9:12 AM, Paul Kapinos wrote:
Dear OpenMPI folks,
currently I have a probl
re string below. With the
Intel, gcc and Studio compilers, the very same installations went through
happily.
Maybe someone can give me a hint about whether this is an issue with openmpi, pgi
or something else...
Best wishes,
Paul
P.S.
again, more logs downloadable:
https://gigamove.rz.rwth-aachen.de/d/id
UT, in the configure line (below) I get the -m32 flag!! So, where does
the -m32 flag get lost? Did I do something wrong?
Best wishes and a nice weekend,
Paul Kapinos
P.S. again, the some more logs downloadable from here:
https://gigamove.rz.rwth-aachen.de/d/id/xoQ2
warnings:
pgCC-Warning-prelink_objects switch is deprecated
pgCC-Warning-instantiation_dir switch is deprecated
coming from the call noted below.
I do not know whether this is a Libtool issue or a libtool-usage (= OpenMPI)
issue, but I do not want to keep this secret...
Best wishes
Paul Kapinos
/OFF). The same error arises in all 16 versions.
Can someone give a hint on how to avoid this issue? Thanks!
Best wishes,
Paul
Some logs and configure are downloadable here:
https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD
The configure line is in RUNME.sh, the
logs of configure and b
tification modes. The 32-bit version
works only with the NIS-authenticated part of our cluster.
Thanks for your help!
Best wishes
Paul Kapinos
Reuti wrote:
Hi,
On 15.07.2011 at 21:14, Terry Dontje wrote:
On 7/15/2011 1:46 PM, Paul Kapinos wrote:
Hi OpenMPI folks (and Oracle/Sun experts),
is Sun's MPI compatible with the LDAP authentication method at all?
Best wishes,
Paul
P.S. in both parts of the cluster, I (login marked as x here) can
log in to any node by ssh without needing to type the password.
--
The u