Re: [OMPI users] Oldest version of SLURM in use?

2022-08-16 Thread Paul Edmon via users
At FASRC Harvard we generally keep up with the latest so we are on 22.05.2. -Paul Edmon- On 8/16/2022 9:51 AM, Jeff Squyres (jsquyres) via users wrote: I have a curiosity question for the Open MPI user community: what version of SLURM are you using? I ask because we're honestly cu

Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI 4.1.2

2022-01-04 Thread Paul Kapinos via users
not state that about gfortran and intel, by the way.) So these guys may be snarky, but they can Fortran, definitely. And if the Open MPI bindings can be compiled by this compiler, they are likely very standard-conforming. Have a nice day and a nice year 2022, Paul Kapinos On 12/30/21

[OMPI users] MCA parameter "orte_base_help_aggregate"

2021-01-25 Thread Paul Cizmas via users
ank 3 with PID 0 on node jp1 exited on signal 9 (Killed). It seems I should set this MCA parameter "orte_base_help_aggregate" to 0 in order to see the error messages. How can I do this? I suppose I should do it before running the code. Is this correct? Thank you, Paul
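
An MCA parameter like this can be set either on the mpirun command line or through an OMPI_MCA_* environment variable before launch; a minimal sketch (the application name is a placeholder):

```
# Show every rank's error message instead of the aggregated summary
mpirun --mca orte_base_help_aggregate 0 -np 4 ./my_app

# Equivalent: export it as an environment variable before launching
export OMPI_MCA_orte_base_help_aggregate=0
mpirun -np 4 ./my_app
```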

Re: [OMPI users] 4.0.5 on Linux Pop!_OS

2020-11-07 Thread Paul Cizmas via users
the “slot” although the message lists four options - four options but zero examples. Thank you, Paul > On Nov 7, 2020, at 8:23 PM, Gilles Gouaillardet via users > wrote: > > Paul, > > a "slot" is explicitly defined in the error message you copy/pasted: > >

[OMPI users] 4.0.5 on Linux Pop!_OS

2020-11-07 Thread Paul Cizmas via users
s, cores and threads, but not slots. What shall I specify instead of "-np 12”? Thank you, Paul
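
In Open MPI terms a "slot" is simply how many processes a host is allowed to run; one common way to get 12 of them is to declare slots in a hostfile, or to oversubscribe a single machine (hostnames and the binary name below are placeholders):

```
# hostfile: one line per node, stating how many processes it may host
cat > myhosts <<EOF
node01 slots=6
node02 slots=6
EOF
mpirun --hostfile myhosts -np 12 ./my_app

# Or, on one machine, allow more processes than detected cores/slots
mpirun --oversubscribe -np 12 ./my_app
```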

Re: [OMPI users] UCX and MPI_THREAD_MULTIPLE

2019-09-06 Thread Paul Edmon via users
As a coda to this I managed to get UCX 1.6.0 built with threading and OpenMPI 4.0.1 to build using this: https://github.com/openucx/ucx/issues/4020 That appears to be working. -Paul Edmon- On 8/26/19 9:20 PM, Joshua Ladd wrote: **apropos  :-) On Mon, Aug 26, 2019 at 9:19 PM Joshua Ladd
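
For reference, a build along the lines of that issue might look roughly like the sketch below (paths and version numbers are illustrative; --enable-mt builds UCX with thread support):

```
# Build UCX with multi-thread support
cd ucx-1.6.0
./contrib/configure-release --prefix=$HOME/ucx-mt --enable-mt
make -j8 install

# Build Open MPI against that UCX installation
cd ../openmpi-4.0.1
./configure --prefix=$HOME/openmpi-4.0.1-ucx --with-ucx=$HOME/ucx-mt
make -j8 install
```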

Re: [OMPI users] UCX and MPI_THREAD_MULTIPLE

2019-08-27 Thread Paul Edmon via users
It's the public source.  The one I'm testing with is the latest internal version.  I'm going to cc Pete Mendygral and Julius Donnert on this as they may be able to provide you the version I'm using (as it is not ready for public use). -Paul Edmon- On 8/26/19 9:20 P

Re: [OMPI users] UCX and MPI_THREAD_MULTIPLE

2019-08-23 Thread Paul Edmon via users
UCX to get MPI_THREAD_MULTIPLE to work at all). -Paul Edmon- On 8/23/2019 9:31 PM, Paul Edmon wrote: Sure.  The code I'm using is the latest version of Wombat (https://bitbucket.org/pmendygral/wombat-public/wiki/Home , I'm using an unreleased updated version as I know the devs).

Re: [OMPI users] UCX and MPI_THREAD_MULTIPLE

2019-08-23 Thread Paul Edmon via users
reason not to build with MT enabled.  Anyways that's the deeper context. -Paul Edmon- On 8/23/2019 5:49 PM, Joshua Ladd via users wrote: Paul, Can you provide a repro and command line, please. Also, what network hardware are you using? Josh On Fri, Aug 23, 2019 at 3:35 PM Paul Edmon vi

[OMPI users] UCX and MPI_THREAD_MULTIPLE

2019-08-23 Thread Paul Edmon via users
ly we have just used the regular IB Verbs with no problem.  My guess is that there is either some option in OpenMPI I am missing or some variable in UCX I am not setting.  Any insight on what could be causing the stalls? -Paul Edmon-

Re: [OMPI users] openib/mpi_alloc_mem pathology [#20160912-1315]

2017-10-20 Thread Paul Kapinos
On 10/20/2017 12:24 PM, Dave Love wrote: > Paul Kapinos writes: > >> Hi all, >> sorry for the long long latency - this message was buried in my mailbox for >> months >> >> >> >> On 03/16/2017 10:35 AM, Alfio Lazzaro wrote: >>> Hel

Re: [OMPI users] openib/mpi_alloc_mem pathology [#20160912-1315]

2017-10-19 Thread Paul Kapinos
nfiniBand is not prohibited, the MPI_Free_mem() take ages. (I'm not familiar with CCachegrind so forgive me if I'm not true). Have a nice day, Paul Kapinos -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, IT Center Seffenter Weg 23, D 52074 Aach

Re: [OMPI users] Performance issues: 1.10.x vs 2.x

2017-05-05 Thread Paul Kapinos
In 1.10.x series there were 'memory hooks' - Open MPI did take some care about the alignment. This was removed in 2.x series, cf. the whole thread on my link. -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, IT Center Seffenter Weg 23, D

Re: [OMPI users] Performance issues: 1.10.x vs 2.x

2017-05-04 Thread Paul Kapinos
seems that something changed starting from version 2.x, and the FDR system performs much worse than with the prior 1.10.x release. -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, IT Center Seffenter Weg 23, D 52074 Aachen (Germany) Tel: +49 241/80-24915

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-16 Thread Paul Kapinos
+#endif OPAL_CR_EXIT_LIBRARY(); return MPI_SUCCESS; ``` This will at least tell us if the innards of our ALLOC_MEM/FREE_MEM (i.e., likely the registration/deregistration) are causing the issue. On Mar 15, 2017, at 1:27 PM, Dave Love wrote: Paul Kapinos writes: Nathan, unfortunat

Re: [OMPI users] openib/mpi_alloc_mem pathology [#20160912-1315]

2017-03-16 Thread Paul Kapinos
Hi, On 03/16/17 10:35, Alfio Lazzaro wrote: We would like to ask you which version of CP2K you are using in your tests Release 4.1 and if you can share with us your input file and output log. The question goes to Mr Mathias Schumacher, on CC: Best Paul Kapinos (Our internal ticketing

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-13 Thread Paul Kapinos
). On 03/07/17 20:22, Nathan Hjelm wrote: If this is with 1.10.x or older run with --mca memory_linux_disable 1. There is a bad interaction between ptmalloc2 and psm2 support. This problem is not present in v2.0.x and newer. -Nathan On Mar 7, 2017, at 10:30 AM, Paul Kapinos wrote: Hi Dav
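
The workaround quoted above is an ordinary MCA parameter, so for the 1.10.x and older series it would be passed at launch time, e.g. (application name is a placeholder):

```
# Work around the ptmalloc2/psm2 interaction in Open MPI 1.10.x and older
mpirun --mca memory_linux_disable 1 -np 16 ./my_app
```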

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-07 Thread Paul Kapinos
for multi-node jobs, and that doesn't show the pathological behaviour iff openib is suppressed. However, it requires ompi 1.10, not 1.8, which I was trying to use.

Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-03-03 Thread Paul Kapinos
ntion. We have a (nasty) workaround, cf. https://www.mail-archive.com/devel@lists.open-mpi.org/msg00052.html As far as I can see this issue is on InfiniBand only. Best Paul -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, IT Center Seffenter Weg 23, D 5207

Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-03-03 Thread Paul Kapinos
nce bug in MPI_Free_mem your application can be horribly slow (seen: CP2K) if the InfiniBand failback of OPA not disabled manually, see https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html Best, Paul Kapinos -- Dipl.-Inform. Paul Kapinos - High Performance Computing, R
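
For the Open MPI generations discussed in this thread, MPI_THREAD_MULTIPLE support had to be requested when the library was built; a sketch of such a configure line (the prefix is a placeholder):

```
./configure --prefix=/opt/openmpi-mt --enable-mpi-thread-multiple
make -j8 install
```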

Re: [OMPI users] Segmentation Fault (Core Dumped) on mpif90 -v

2016-12-23 Thread Paul Kapinos
! Paul Kapinos On 12/14/16 13:29, Paul Kapinos wrote: Hello all, we seem to run into the same issue: 'mpif90' sigsegvs immediately for Open MPI 1.10.4 compiled using Intel compilers 16.0.4.258 and 16.0.3.210, while it works fine when compiled with 16.0.2.181. It seems to be a compiler i

Re: [OMPI users] Segmentation Fault (Core Dumped) on mpif90 -v

2016-12-14 Thread Paul Kapinos
ntel libs (as said changing out these solves/raises the issue) we will do a failback to 16.0.2.181 compiler version. We will try to open a case by Intel - let's see... Have a nice day, Paul Kapinos On 05/06/16 14:10, Jeff Squyres (jsquyres) wrote: Ok, good. I asked that question beca

[OMPI users] funny SIGSEGV in 'ompi_info'

2016-11-14 Thread Paul Kapinos
look at the below core dump of 'ompi_info' like below one. (yes we know that "^tcp,^ib" is a bad idea). Have a nice day, Paul Kapinos P.S. Open MPI: 1.10.4 and 2.0.1 have the same behaviour -- [lnm001:39

Re: [OMPI users] OMPIO correctnes issues

2015-12-09 Thread Paul Kapinos
lated one of rules of Open MPI release series) . Anyway, if there is a simple fix for your test case for the 1.10 series, I am happy to provide a patch. It might take me a day or two however. Edgar On 12/9/2015 6:24 AM, Paul Kapinos wrote: Sorry, forgot to mention: 1.10.1 Open

Re: [OMPI users] OMPIO correctnes issues

2015-12-09 Thread Paul Kapinos
: 1.10.1 OPAL repo revision: v1.10.0-178-gb80f802 OPAL release date: Nov 03, 2015 MPI API: 3.0.0 Ident string: 1.10.1 On 12/09/15 11:26, Gilles Gouaillardet wrote: Paul, which OpenMPI version are you using ? thanks for providing a simple reproducer

[OMPI users] OMPIO correctnes issues

2015-12-09 Thread Paul Kapinos
amples of divergent behaviour but this one is quite handy. Is that a bug in OMPIO or did we miss something? Best Paul Kapinos 1) http://www.open-mpi.org/faq/?category=ompio 2) http://www.open-mpi.org/community/lists/devel/2015/12/18405.php 3) (ROMIO is default; on local hard drive
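
For comparison runs like the one described, the MPI-IO implementation can be chosen at run time through the io framework; a hedged sketch for the 1.10 series (component names may differ in later releases, and the test binary is a placeholder):

```
# Force the OMPIO implementation ...
mpirun --mca io ompio -np 4 ./mpiio_test
# ... or force ROMIO for comparison
mpirun --mca io romio -np 4 ./mpiio_test
```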

[OMPI users] SIGSEGV in opal_hwlock152_hwlock_bitmap_or.A // Bug in 'hwlock" ?

2013-10-31 Thread Paul Kapinos
lls like an error in the 'hwlock' library. Is there a way to disable hwlock or to debug it in somehow way? (besides to build a debug version of hwlock and OpenMPI) Best Paul -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center f

Re: [OMPI users] MPI_Init_thread hangs in OpenMPI 1.7.1 when using --enable-mpi-thread-multiple

2013-10-23 Thread Paul Kapinos

Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Paul Kapinos
performance without being verbose. Best Paul Is there no bug in MPI_THREAD_MULTIPLE implementation in 1.7.2 and 1.7.3? My test program just hangs now On 10/23/13 19:47, Jeff Hammond wrote: On Wed, Oct 23, 2013 at 12:02 PM, Barrett, Brian W wrote: On 10/22/13 10:23 AM, "Jai Dayal"

Re: [OMPI users] Big job, InfiniBand, MPI_Alltoallv and ibv_create_qp failed

2013-08-01 Thread Paul Kapinos
Vanilla Linux ofed from RPM's for Scientific Linux release 6.4 (Carbon) (= RHEL 6.4). No ofed_info available :-( On 07/31/13 16:59, Mike Dubman wrote: Hi, What OFED vendor and version do you use? Regards M On Tue, Jul 30, 2013 at 8:42 PM, Paul Kapinos <kapi...@rz.rwth-aachen.de>

[OMPI users] Big job, InfiniBand, MPI_Alltoallv and ibv_create_qp failed

2013-07-30 Thread Paul Kapinos
Best, Paul Kapinos P.S. There should be no connection problem somewhere between the nodes; a test job with 1x process on each node has been run successfully just before starting the actual job, which also ran OK for a while - until calling MPI_Allt

Re: [OMPI users] knem/openmpi performance?

2013-07-15 Thread Paul Kapinos
isturb the production on these nodes (and different MPI versions for different nodes are doofy). Best Paul -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter Weg 23, D 52074 Aachen (Germany) Tel: +49 24

Re: [OMPI users] 1.7.1 Hang with MPI_THREAD_MULTIPLE set

2013-06-03 Thread Paul Kapinos
pilers) or -lmpi_mt instead of -lmpi (other compilers). However, Intel MPI is not free. Best, Paul Kapinos Also, I recommend to _always_ check what kinda of threading lievel you ordered and what did you get: print *, 'hello, world!', MPI_THREAD_MULTIPLE, provided On 05/31/
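
Paul's advice to always check the granted threading level looks like this in C (a minimal sketch, not taken from the thread):

```
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* Ask for the highest level and check what the library actually grants */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        printf("requested MPI_THREAD_MULTIPLE, got level %d\n", provided);
    MPI_Finalize();
    return 0;
}
```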

Re: [OMPI users] basic questions about compiling OpenMPI

2013-05-22 Thread Paul Kapinos
usually a bit dusty. -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter Weg 23, D 52074 Aachen (Germany) Tel: +49 241/80-24915

Re: [OMPI users] Building Open MPI with LSF

2013-05-07 Thread Paul Kapinos
tight integration to LSF 8.0 now =) For future, if you need a testbed, I can grant an user access to you... best Paul -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter Weg 23, D 52074 Aachen (Germany) Tel: +4

Re: [OMPI users] OMPI v1.7.1 fails to build on RHEL 5 and RHEL 6

2013-04-18 Thread Paul Kapinos
g. Any suggestions? Thanks Tim Dunn

Re: [OMPI users] cannot build 32-bit openmpi-1.7 on Linux

2013-04-05 Thread Paul Kapinos
oducer and send it to the compiler developer team :o) Best Paul Kapinos On 04/05/13 17:56, Siegmar Gross wrote: PPFC mpi-f08.lo "../../../../../openmpi-1.7/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Line = 1, Column = 1: INTERNAL: Interrupt: Segmentation fault -- Dipl.-

[OMPI users] OpenMPI 1.6.4, MPI I/O on Lustre, 32bit: bug?

2013-03-25 Thread Paul Kapinos
ations. Otherwise we will ignore it, probably... Best Paul Kapinos (*) we've kinda internal test suite in order to check our MPIs... P.S. $ mpicc -O0 -m32 -o ./mpiIOC32.exe ctest.c -lm P.S.2 an example configure line: ./configure --with-openib --with-lsf --with-devel-headers --enable-con

Re: [OMPI users] bug in mpif90? OMPI_FC envvar does not work with 'use mpi'

2013-03-13 Thread Paul Kapinos
tible from 11.x through 13.x versions. So, the recommended solution is to build an own version of Open MPI with any compiler you use. Greetings, Paul P.S. As Hristo said, changing the Fortran compiler vendor and using the precompiled Fortran header would never work: the syntax of these .mo

Re: [OMPI users] openmpi, 1.6.3, mlx4_core, log_num_mtt and Debian/vanilla kernel

2013-02-21 Thread Paul Kapinos
nough-registred-mem" computation for Mellanox HCAs? Any other idea/hint? -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter Weg 23, D 52074 Aachen (Germany) Tel: +49 241/80-24915
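
For context, the Open MPI FAQ computes the registerable memory for mlx4 HCAs roughly as (2^log_num_mtt) * (2^log_mtts_per_seg) * page_size and suggests raising the module parameters; a hedged example (values are illustrative, the module has to be reloaded afterwards):

```
# /etc/modprobe.d/mlx4_core.conf
options mlx4_core log_num_mtt=24 log_mtts_per_seg=3
```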

Re: [OMPI users] Fwd: an error when running MPI on 2 machines

2013-02-11 Thread Paul Gribelyuk
can run the same program from each machine in the hostfile. I would still be very interested to know what kind of MPI situations are likely to cause these kinds of seg faults…. -Paul On Feb 11, 2013, at 8:27 AM, Jeff Squyres (jsquyres) wrote: > Can you provide any more detail? > >

[OMPI users] Fwd: an error when running MPI on 2 machines

2013-02-09 Thread Paul Gribelyuk
> Hello, > I am getting the following stacktrace when running a simple hello world MPI > C++ program on 2 machines: > > > mini:mpi_cw paul$ mpirun --prefix /usr/local/Cellar/open-mpi/1.6.3 --hostfile > hosts_home -np 2 ./pi 100 > rank and name: 0 aka mini.

[OMPI users] FW: error configuring OpenMPI 1.6.3 with gcc 4.7.2

2013-02-04 Thread Paul Hatton
. Just needed (Bourne shell) to export LD_RUN_PATH=/gpfs/apps/gcc/v4.7.2/lib64:$LD_RUN_PATH before configure-ing OpenMPI with the new gcc on the PATH. Thanks to all who responded to this and pointed me in the right direction. -- Paul Hatton High Performance Computing and Visualisation

Re: [OMPI users] Initializing OMPI with invoking the array constructor on Fortran derived types causes the executable to crash

2013-01-11 Thread Paul Kapinos

Re: [OMPI users] MPI_Alltoallv performance regression 1.6.0 to 1.6.1

2012-12-19 Thread Paul Kapinos
We 'tune' our Open MPI by setting environment variables.... Best Paul Kapinos On 12/19/12 11:44, Number Cruncher wrote: Having run some more benchmarks, the new default is *really* bad for our application (2-10x slower), so I've been looking at the source to try and figure out

Re: [OMPI users] error configuring OpenMPI 1.6.3 with gcc 4.7.2

2012-12-06 Thread Paul Hatton
Thanks for your help. -- Paul Hatton High Performance Computing and Visualisation Specialist IT Services, The University of Birmingham Ph: 0121-414-3994  Mob: 07785-977340  Skype: P.S.Hatton [Service Manager, Birmingham Environment for Academic Research] [Also Technical Director, IBM Visual and Spati

Re: [OMPI users] error configuring OpenMPI 1.6.3 with gcc 4.7.2

2012-12-06 Thread Paul Hatton
, where is your libgfortran.so.3? Does your system have one in /usr/lib64 (assuming you're on a 64-bit system) or in /usr/projects/hpcsoft/moonlight/gcc/4.7.2/somewhere? I'll have a play with my setup as well. Should have spotted this myself. Thanks for your help -- Paul Hatton High

Re: [OMPI users] error configuring OpenMPI 1.6.3 with gcc 4.7.2

2012-12-06 Thread Paul Hatton
failed one attached. -- Paul Hatton High Performance Computing and Visualisation Specialist IT Services, The University of Birmingham Ph: 0121-414-3994  Mob: 07785-977340  Skype: P.S.Hatton [Service Manager, Birmingham Environment for Academic Research] [Also Technical Director, IBM Visual and

Re: [OMPI users] error configuring OpenMPI 1.6.3 with gcc 4.7.2

2012-12-06 Thread Paul Hatton
Oh, sorry - I tried a build with the system gcc and it worked. I'll repeat the failed one and get it to you. Sorry about that. -- Paul Hatton High Performance Computing and Visualisation Specialist IT Services, The University of Birmingham Ph: 0121-414-3994  Mob: 07785-977340  Skype: P.S.H

Re: [OMPI users] error configuring OpenMPI 1.6.3 with gcc 4.7.2

2012-12-06 Thread Paul Hatton
@bb2login04 openmpi-1.6.3]$ module unload apps/gcc [appmaint@bb2login04 openmpi-1.6.3]$ which gcc /usr/bin/gcc clutching at straws a bit here ... but I have built it with Intel 2013.0.079 which is also installed in the applications area and loaded with a module. -- Paul Hatton High Performance Computing

Re: [OMPI users] error configuring OpenMPI 1.6.3 with gcc 4.7.2

2012-12-06 Thread Paul Hatton
Thanks. zip-ed config.log attached -- Paul Hatton High Performance Computing and Visualisation Specialist IT Services, The University of Birmingham Ph: 0121-414-3994  Mob: 07785-977340  Skype: P.S.Hatton [Service Manager, Birmingham Environment for Academic Research] [Also Technical Director

[OMPI users] error configuring OpenMPI 1.6.3 with gcc 4.7.2

2012-12-06 Thread Paul Hatton
13.0.079 and also the system (Scientific Linux 6.3) gcc which is 4.4.6 I've attached the output from the configure command. Thanks -- Paul Hatton High Performance Computing and Visualisation Specialist IT Services, The University of Birmingham Ph: 0121-414-3994  Mob: 07785-977340  Skype: P.

[OMPI users] Multirail + Open MPI 1.6.1 = very big latency for the first communication

2012-10-31 Thread Paul Kapinos
unning#mpi-preconnect) there is no such huge latency outliers for the first sample. Well, we know about the warm-up and lazy connections. But 200x ?! Any comments about that is OK so? Best, Paul Kapinos (*) E.g. HPCC explicitely say in http://icl.cs.utk.edu/hpcc/faq/index.html#132 > Addit
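
The FAQ entry referenced in the link forces connection setup during MPI_Init instead of at first use; a hedged sketch of that warm-up, which trades start-up time for predictable first-message latency (application name is a placeholder):

```
# Establish all connections eagerly during MPI_Init
mpirun --mca mpi_preconnect_mpi 1 -np 64 ./my_app
```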

Re: [OMPI users] Performance/stability impact of thread support

2012-10-30 Thread Paul Kapinos
rmance/stability? Daniel -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter

[OMPI users] too much stack size: _silently_ failback to IPoIB

2012-10-05 Thread Paul Kapinos
2 -H linuxbdc01,linuxbdc02 /home/pk224850/bin/ulimit_high.sh MPI_FastTest.exe -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter Weg 23, D 52074 Aachen (Germany) Tel: +49 241/80-24915 ulimit_high
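
The ulimit_high.sh wrapper named in that command line is not shown in the preview; a hypothetical equivalent that caps the stack size before exec'ing the real MPI binary might be as small as:

```
#!/bin/sh
# ulimit_high.sh (sketch): set a bounded stack limit, then run the actual program
ulimit -s 10240   # 10 MB, an illustrative value
exec "$@"
```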

Re: [OMPI users] OpenMPI 1.6.1 with Intel Cluster Studio 2012

2012-09-28 Thread Paul Edmon
Resolution to this. Upgrading to OpenMPI 1.6.2 and getting Intel Cluster Studio 2013 did the trick. -Paul Edmon- On 9/8/2012 4:59 PM, Paul Edmon wrote: Interesting. I figured that might be the case. I will have to contact Intel and find out if we can get a newer version. Thanks. -Paul

Re: [OMPI users] OpenMPI 1.6.1 with Intel Cluster Studio 2012

2012-09-08 Thread Paul Edmon
Interesting. I figured that might be the case. I will have to contact Intel and find out if we can get a newer version. Thanks. -Paul Edmon- On 9/8/2012 3:18 PM, Jeff Squyres wrote: Did this ever get a followup? If not... We've seen problems with specific versions of the Intel com

Re: [OMPI users] OMPI 1.6.x Hang on khugepaged 100% CPU time

2012-09-05 Thread Paul Kapinos
Yevgeny, we at RZ Aachen also see problems very similar to described in initial posting of Yong Qin, on VASP with Open MPI 1.5.3. We're currently looking for a data set able to reproduce this. I'll write an email if we gotcha it. Best, Paul On 09/05/12 13:52, Yevgeny Kliteynik w

[OMPI users] OpenMPI 1.6.1 with Intel Cluster Studio 2012

2012-08-29 Thread Paul Edmon
even tried compiling Intel MPI Benchmark, which failed in a similar way, which indicates that it's a problem specifically with the interaction of MPI and the intel compiler and not the code I was working with. Thanks. -Paul Edmon-

Re: [OMPI users] Infiniband performance Problem and stalling

2012-08-28 Thread Paul Kapinos
#ib-low-reg-mem "Waiting forever" for a single operation is one of the symptoms of the problem, especially in 1.5.3. best, Paul P.S. the lower performance with 'big' chunks is a known phenomenon, cf. http://www.scl.ameslab.gov/netpipe/ (image on bottom of the page). But the

Re: [OMPI users] Parallel I/O doesn't work for derived datatypes with Fortran 90 interface

2012-08-07 Thread Paul Romano
:-) It's basically trying to tell you "I couldn't > find a version of MPI_FILE_READ_AT that matches the parameters you passed." > > > > On Aug 6, 2012, at 4:09 PM, Paul Romano wrote: > > > When I try to use parallel I/O routines like MPI_File_write_at

[OMPI users] Parallel I/O doesn't work for derived datatypes with Fortran 90 interface

2012-08-06 Thread Paul Romano
specific subroutine for the generic 'mpi_file_read_at' at (1) I'm using Open MPI 1.6 compiled with --with-mpi-f90-size=medium. I've also tried both gfortran and ifort, and both give the same compilation error. Has anyone else seen this behavior? Best regards, Paul

Re: [OMPI users] Re :Re: OpenMP and OpenMPI Issue

2012-07-23 Thread Paul Kapinos
ouble over our infiniband network. I'm running a fairly large problem (uses about 18GB), and part way in, I get the following errors: You say "big footprint"? I hear a bell ringing... http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem -- Dipl.-Inform. Paul Kapin

[OMPI users] mpirun command gives ERROR

2012-07-19 Thread Abhra Paul
-- [1]+  Exit 231    /usr/local/bin/mpirun -np 4 ./cpmd.x 1-h2-wave.inp > 1-h2-wave.out ====== I am unable to find out the reason of that error. Please help. My Open-MPI version is 1.6. With regards Abhra Paul

[OMPI users] Still bothered / cannot run an application

2012-07-12 Thread Paul Kapinos
ue? Are you interested in reproducing this? Best, Paul Kapinos P.S: The same test with Intel MPI cannot run using DAPL, but runs very fine over 'ofa' (= native verbs as Open MPI uses it). So I believe the problem is rooted in the communication pattern of the program; it sends very LARGE messag

[OMPI users] Naming MPI_Spawn children

2012-06-18 Thread Jaison Paul Mulerikkal
HI, I'm running openmpi on Rackspace cloud over Internet using MPI_Spawn. IT means, I run the parent on my PC and the children on Rackspace cloud machines. Rackspace provides direct IP addresses of the machines (no NAT), that is why it is possible. Now, there is a communicator involving only the

Re: [OMPI users] Hybrid OpenMPI / OpenMP programming

2012-03-02 Thread Paul Kapinos
e or both of these values and try again. -- $ ssh linuxbdc01 cat /proc/cpuinfo | grep processor | wc -l 24 $ cat /proc/cpuinfo | grep processor | wc -l 4 Best, Paul P.S. Using Open MPI 1.5.3, waiting for 1.5.5 :o) P.S.2. any u

[OMPI users] Problem running over IB with huge data set

2012-02-27 Thread Paul Kapinos
able]. Ralph, Jeff, anybody - any interest in reproducing this issue? Best wishes, Paul Kapinos P.S. Open MPI 1.5.3 used - still waiting for 1.5.5 ;-) Some error messages: with 6 procs over 6 Nodes: -- mlx4:

[OMPI users] Environment variables [documentation]

2012-02-27 Thread Paul Kapinos
RANK is? (This would make sense as it looks like the OMPI_COMM_WORLD_SIZE, OMPI_COMM_WORLD_RANK pair.) If yes, maybe it also should be documented in the Wiki page. 2) OMPI_COMM_WORLD_NODE_RANK - is that just a double of OMPI_COMM_WORLD_LOCAL_RANK ? Best wishes, Paul Kapinos -- Dipl.-
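
These launcher-provided variables can be inspected without writing any MPI code, for example:

```
# Print the rank/size/local-rank information mpirun exports to each process
mpirun -np 4 sh -c 'echo "global $OMPI_COMM_WORLD_RANK of $OMPI_COMM_WORLD_SIZE, local $OMPI_COMM_WORLD_LOCAL_RANK on $(hostname)"'
```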

Re: [OMPI users] Mpirun: How to print STDOUT of just one process?

2012-02-01 Thread Paul Kapinos
Try out the attached wrapper: $ mpiexec -np 2 masterstdout mpirun -n 2 Is there a way to have mpirun just merger STDOUT of one process to its STDOUT stream? -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication
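
The masterstdout wrapper attached to that message is not reproduced in the preview; a hypothetical version that forwards stdout only for rank 0 could look like:

```
#!/bin/sh
# masterstdout (sketch): keep stdout of rank 0, silence all other ranks
if [ "${OMPI_COMM_WORLD_RANK:-0}" -eq 0 ]; then
    exec "$@"
else
    exec "$@" > /dev/null
fi
```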

Re: [OMPI users] rankfiles on really big nodes broken?

2012-01-23 Thread Paul Kapinos
r 1.5.x is a good idea; but it is always a bit tedious... Would 1.5.5 arrive the next time? Best wishes, Paul Kapinos Ralph Castain wrote: I don't see anything in the code that limits the number of procs in a rankfile. > Are the attached rankfiles the ones you are trying to use? I'

[OMPI users] rankfiles on really big nodes broken?

2012-01-20 Thread Paul Kapinos
s computer dimension is a bit too big for the pinning infrastructure now. A bug? Best wishes, Paul Kapinos P.S. see the attached .tgz for some logzz -- Rankfiles Rankfiles provide a means for specifying detailed i

Re: [OMPI users] SIGV at MPI_Cart_sub

2012-01-10 Thread Paul Kapinos
precisely answer are impossible without seeing any codes snippet and/or logs. Best, Paul Anas Al-Trad wrote: Dear people, In my application, I have the segmentation fault of Integer Divide-by-zero when calling MPI_cart_sub routine. My program is as follows, I have 128 ranks, I make

Re: [OMPI users] Open MPI and DAPL 2.0.34 are incompatible?

2011-12-22 Thread Paul Kapinos
.conf file." Well. Any suggestions? Does OpenMPI ever able to use DAPL 2.0 on Linux? Merry Christmas from westernest Germany, Paul Paul Kapinos wrote: Good morning, We've never recommended the use of dapl on Linux. I think it might have worked at one time, but I don't thi

[OMPI users] Accessing OpenMPI processes on EC2 machine over Internet using ssh

2011-12-18 Thread Jaison Paul
We have reported this before. We are still not able to do it, fully. However partially successful, now. We have used a machine with static IP address and modified the router settings by opening all ssh ports. Master runs on this machine and the slaves on EC2. Now we can run the "Hello world" ove

Re: [OMPI users] Cofigure(?) problem building /1.5.3 on ScientificLinux6.0

2011-12-09 Thread Paul Kapinos
ould wear sackcloth and ashes... :-/ Best, Paul Anyway, since 1.2.8 here I build 5, sometimes more versions, all from the same tarball, but on separate build directories, as Jeff suggests. [VPATH] Works for me. My two cents. Gus Correa Jeff Squyres wrote: Ah -- Ralph pointed out the rel

Re: [OMPI users] Open MPI and DAPL 2.0.34 are incompatible?

2011-12-06 Thread Paul Kapinos
--- Because of the anticipated performance gain we would be very keen on using DAPL with Open MPI. Does somebody have any idea what could be wrong and what to check? On Dec 2, 2011, at 1:21 PM, Paul Kapinos wrote: Dear Open MPI developer, OFED 1.5.4 will

Re: [OMPI users] wiki and "man mpirun" odds, and a question

2011-12-06 Thread Paul Kapinos
FOBA -x BAFO -x RR -x ZZ" Well, this are my user's dreams; but maybe this give an inspiration for Open MPI programmers. As said, the situation when a [long] list of envvars is to be provided is quite common, and typing everything on the command line is tedious and error-prone. Best wi

Re: [OMPI users] How are the Open MPI processes spawned?

2011-12-06 Thread Paul Kapinos
Hello Jeff, Ralph, all! Meaning that per my output from above, what Paul was trying should have worked, no? I.e., setenv'ing OMPI_, and those env vars should magically show up in the launched process. In the -launched process- yes. However, his problem was that they do not show up fo

[OMPI users] Open MPI and DAPL 2.0.34 are incompatible?

2011-12-02 Thread Paul Kapinos
on as possible) Best wishes and an nice weekend, Paul http://www.openfabrics.org/downloads/OFED/release_notes/OFED_1.5.4_release_notes -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter Weg 23

Re: [OMPI users] Accessing OpenMPI processes over Internet using ssh

2011-11-30 Thread Jaison Paul
Ralph Castain open-mpi.org> writes: > > This has come up before - I would suggest doing a quick search of "ec2" on our user list. Here is one solution: > On Jun 14, 2011, at 10:50 AM, Barnet Wagman wrote:I've put together a simple system for running OMPI on EC2 (Amazon's cloud computing service)

Re: [OMPI users] Accessing OpenMPI processes over Internet using ssh

2011-11-30 Thread Jaison Paul
Jeff Squyres cisco.com> writes: > > On Nov 30, 2011, at 6:03 AM, Jaison Paul wrote: > > > Yes, we have set up .ssh file on remote EC2 hosts. Is there anything else that we should be taking care of when > dealing with EC2? > > I have heard that Open MPI's

Re: [OMPI users] Accessing OpenMPI processes over Internet using ssh

2011-11-30 Thread Jaison Paul
that but failed. Would try again. Yes, we have set up .ssh file on remote EC2 hosts. Is there anything else that we should be taking care of when dealing with EC2? Jaison > > Hi, > > > > Am 24.11.2011 um 05:26 schrieb Jaison Paul: > > > >> I am trying to access O

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-25 Thread Paul Kapinos
ovided and thus treated *differently* than other envvars: $ man mpiexec Exported Environment Variables All environment variables that are named in the form OMPI_* will automatically be exported to new processes on the local and remote nodes. So, tells the man page lies, or this
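
In practice the two mechanisms quoted above look like this (variable names are illustrative):

```
# OMPI_MCA_* variables are picked up automatically on local and remote nodes
export OMPI_MCA_btl_tcp_if_include=eth1
mpirun -np 8 ./my_app

# Arbitrary variables must be forwarded explicitly with -x
MY_SETTING=42 mpirun -x MY_SETTING -np 8 ./my_app
```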

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-24 Thread Paul Kapinos
command line options. This should not be so? (I also tried to advise to provide the envvars by -x OMPI_MCA_oob_tcp_if_include -x OMPI_MCA_btl_tcp_if_include - nothing changed. Well, they are OMPI_ variables and should be provided in any case). Best wishes and many thanks for all, Paul K

[OMPI users] Accessing OpenMPI processes over Internet using ssh

2011-11-23 Thread Jaison Paul
Hi all, I am trying to access OpenMPI processes over Internet using ssh and not quite successful, yet. I believe that I should be able to do it. I have to run one process on my PC and the rest on a remote cluster over internet. I have set the public keys (at .ssh/authorized_keys) to access r

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-23 Thread Paul Kapinos
above command should disable the usage of eth0 for MPI communication itself, but it hangs just before the MPI is started, isn't it? (because one process lacks, the MPI_INIT cannot be passed) Now a question: is there a way to forbid the mpiexec to use some interfaces at all? Best wishes, Pau

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-22 Thread Paul Kapinos
constellation. The next thing I will try will be the installation of 1.5.4 :o) Best, Paul P.S. started: $ /opt/MPI/openmpi-1.5.3/linux/intel/bin/mpiexec --hostfile hostfile-mini -mca odls_base_verbose 5 --leave-session-attached --display-map helloworld 2>&1 | tee helloworld.txt

[OMPI users] How are the Open MPI processes spawned?

2011-11-21 Thread Paul Kapinos
ea what is gonna on? Best, Paul Kapinos P.S: no alias names used, all names are real ones -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen University, Center for Computing and Communication Seffenter Weg 23, D 52074 Aachen (Germany) Tel: +49 241/80-24915 l

[OMPI users] wiki and "man mpirun" odds, and a question

2011-11-10 Thread Paul Kapinos
[long] list of variables. Is there someone envvar, by setting which to a list of names of other envvars the same effect could be achieved as by setting -x on command line of mpirun? Best wishes Paul Kapinos -- Dipl.-Inform. Paul Kapinos - High Performance Computing, RWTH Aachen Univers

[OMPI users] problems with Intel 12.x compilers and OpenMPI (1.4.3)

2011-09-23 Thread Paul Kapinos
worked around the problem by switching our production to 1.5.3 this issue is not a "burning" one; but I decided still to post this because any issue on such fundamental things may be interesting for developers. Best wishes, Paul Kapinos (*) http://www.netlib.org/ben

Re: [OMPI users] Cofigure(?) problem building /1.5.3 on ScientificLinux6.0

2011-07-22 Thread Paul Kapinos
will trigger our admins... Best wishes, Paul m4 (GNU M4) 1.4.13 (OK) autoconf (GNU Autoconf) 2.63 (Need: 2.65, NOK) automake (GNU automake) 1.11.1 (OK) ltmain.sh (GNU libtool) 2.2.6b (OK) On Jul 22, 2011, at 9:12 AM, Paul Kapinos wrote: Dear OpenMPI volks, currently I have a probl

[OMPI users] and the next one (3th today!) PGI+OpenMPI issue

2011-07-22 Thread Paul Kapinos
re string below. With the Intel, gcc and Studio compilers, the very same installations went through happily. Maybe someone can give me a hint about whether this is an issue with openmpi, pgi or something else... Best wishes, Paul P.S. again, more logs downloadable: https://gigamove.rz.rwth-aachen.de/d/id

[OMPI users] One more pgi+libtool issue

2011-07-22 Thread Paul Kapinos
UT, in the configure line (below) I get the -m32 flag!! So, where is the -m32 thing lost? Did I do something in a wrong way? Best wishes and a nice weekend, Paul Kapinos P.S. again, the some more logs downloadable from here: https://gigamove.rz.rwth-aachen.de/d/id/xoQ2

[OMPI users] Usage of PGI compilers (Libtool or OpenMPI issue?)

2011-07-22 Thread Paul Kapinos
warnings: pgCC-Warning-prelink_objects switch is deprecated pgCC-Warning-instantiation_dir switch is deprecated coming from the below-noted call. I do not know whether this is a Libtool issue or a libtool-usage (= OpenMPI) issue, but I do not want to keep this secret... Best wishes Paul Kapinos

[OMPI users] Cofigure(?) problem building /1.5.3 on ScientificLinux6.0

2011-07-22 Thread Paul Kapinos
/OFF). The same error arises in all 16 versions. Can someone give a hint about how to avoid this issue? Thanks! Best wishes, Paul Some logs and configure are downloadable here: https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD The configure line is in RUNME.sh, the logs of configure and b

Re: [OMPI users] Does Oracle Cluster Tools aka Sun's MPI work with LDAP?

2011-07-20 Thread Paul Kapinos
tification modi. The 32bit version works with the NIS-authenticated part of our cluster only. Thanks for your help! Best wishes Paul Kapinos Reuti wrote: Hi, Am 15.07.2011 um 21:14 schrieb Terry Dontje: On 7/15/2011 1:46 PM, Paul Kapinos wrote: Hi OpenMPI volks (and Oracle/Sun experts),

[OMPI users] Does Oracle Cluster Tools aka Sun's MPI work with LDAP?

2011-07-15 Thread Paul Kapinos
Sun's MPI compatible with LDAP authentication method at all? Best wishes, Paul P.S. in both parts of the cluster, me (login marked as x here) can login to any node by ssh without need to type the password. -- The u
