[OMPI users] funny SIGSEGV in 'ompi_info'

2016-11-14 Thread Paul Kapinos
look at the core dump of 'ompi_info' like the one below. (Yes, we know that "^tcp,^ib" is a bad idea.) Have a nice day, Paul Kapinos P.S. Open MPI 1.10.4 and 2.0.1 show the same behaviour -- [lnm001:39
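For reference: the '^' prefix negates the entire comma-separated component list, so a valid exclusion of both the TCP and the InfiniBand BTL looks like the sketch below (the IB BTL is called openib in these release series; the app is a placeholder):

    $ export OMPI_MCA_btl=^tcp,openib   # '^' applies to the whole list: exclude tcp AND openib
    $ ompi_info | grep btl              # inspect which BTL components remain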

Re: [OMPI users] Segmentation Fault (Core Dumped) on mpif90 -v

2016-12-14 Thread Paul Kapinos
Intel libs (as said, changing out these solves/raises the issue); we will fall back to compiler version 16.0.2.181. We will try to open a case with Intel - let's see... Have a nice day, Paul Kapinos On 05/06/16 14:10, Jeff Squyres (jsquyres) wrote: Ok, good. I asked that question beca

Re: [OMPI users] Segmentation Fault (Core Dumped) on mpif90 -v

2016-12-23 Thread Paul Kapinos
! Paul Kapinos On 12/14/16 13:29, Paul Kapinos wrote: Hello all, we seem to run into the same issue: 'mpif90' sigsegvs immediately for Open MPI 1.10.4 compiled using Intel compilers 16.0.4.258 and 16.0.3.210, while it works fine when compiled with 16.0.2.181. It seems to be a compiler i

Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-03-03 Thread Paul Kapinos
performance bug in MPI_Free_mem your application can be horribly slow (seen: CP2K) if the InfiniBand failback of OPA is not disabled manually, see https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html Best, Paul Kapinos

Re: [OMPI users] Is building with "--enable-mpi-thread-multiple" recommended?

2017-03-03 Thread Paul Kapinos
ntion. We have a (nasty) workaround, cf. https://www.mail-archive.com/devel@lists.open-mpi.org/msg00052.html As far as I can see this issue is on InfiniBand only. Best Paul

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-07 Thread Paul Kapinos
for multi-node jobs, and that doesn't show the pathological behaviour iff openib is suppressed. However, it requires ompi 1.10, not 1.8, which I was trying to use.
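For the archives, suppressing the openib BTL for a run is a one-liner (a sketch; process count and hosts are placeholders):

    $ mpiexec --mca btl ^openib -np 8 -H node1,node2 ./app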

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-13 Thread Paul Kapinos
). On 03/07/17 20:22, Nathan Hjelm wrote: If this is with 1.10.x or older run with --mca memory_linux_disable 1. There is a bad interaction between ptmalloc2 and psm2 support. This problem is not present in v2.0.x and newer. -Nathan On Mar 7, 2017, at 10:30 AM, Paul Kapinos wrote: Hi Dav
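Nathan's suggested workaround from the quoted message, spelled out for 1.10.x and older (both forms shown; the environment form is the safer one since the memory hooks are installed very early):

    $ mpiexec --mca memory_linux_disable 1 -np 4 ./app
    # or, equivalently:
    $ export OMPI_MCA_memory_linux_disable=1
    $ mpiexec -np 4 ./app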

Re: [OMPI users] openib/mpi_alloc_mem pathology [#20160912-1315]

2017-03-16 Thread Paul Kapinos
Hi, On 03/16/17 10:35, Alfio Lazzaro wrote: We would like to ask you which version of CP2K you are using in your tests Release 4.1 and if you can share with us your input file and output log. The question goes to Mr Mathias Schumacher, on CC: Best Paul Kapinos (Our internal ticketing

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-16 Thread Paul Kapinos
+#endif OPAL_CR_EXIT_LIBRARY(); return MPI_SUCCESS; ``` This will at least tell us if the innards of our ALLOC_MEM/FREE_MEM (i.e., likely the registration/deregistration) are causing the issue. On Mar 15, 2017, at 1:27 PM, Dave Love wrote: Paul Kapinos writes: Nathan, unfortunat

Re: [OMPI users] Performance issues: 1.10.x vs 2.x

2017-05-04 Thread Paul Kapinos
seems that something changed starting from version 2.x, and the FDR system performs much worse than with the prior 1.10.x release.

Re: [OMPI users] Performance issues: 1.10.x vs 2.x

2017-05-05 Thread Paul Kapinos
In the 1.10.x series there were 'memory hooks' - Open MPI took some care about the alignment. This was removed in the 2.x series, cf. the whole thread at my link.

[OMPI users] is there a way to bring to light _all_ configure options in a ready installation?

2010-08-24 Thread Paul Kapinos
can I see whether these flags were set or not? In other words: is it possible to get _all_ flags of configure from a "ready" installation, without having the compilation dirs (with the configure logs) any more? Many thanks Paul

Re: [OMPI users] is there a way to bring to light _all_ configure options in a ready installation?

2010-08-24 Thread Paul Kapinos
what the configure options were for a given installation! "./configure --help" helps, but guessing which of the options were used in a release is a hard job.. --td On Aug 24, 2010, at 7:40 AM, Paul Kapinos wrote: Hello OpenMPI developers, I am searching for a way to discover _all
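The answer that later became standard practice: ompi_info records the build configuration, and recent releases print the full configure invocation (a sketch, assuming a reasonably modern Open MPI):

    $ ompi_info | grep -i configure       # newer releases show 'Configure command line: ...'
    $ ompi_info --all --parsable | less   # all recorded build and MCA settings, machine-readable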

[OMPI users] a question about [MPI]IO on systems without network filesystem

2010-09-29 Thread Paul Kapinos
wishes Paul

[OMPI users] v1.5.1 build failed with PGI compiler

2011-01-04 Thread Paul Kapinos
-32/10.9/lib -tp -Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-64/10.9/libso -Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-64/10.9/lib -Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-32/10.9/lib -Wl,-soname -Wl,libopen-pal.so.1 -o .libs/libopen-pal.so.1.0.0 Best wishes, Paul

[OMPI users] v1.5.1: configuration failed if compiling on CentOS 5.5 with default GCC

2011-01-04 Thread Paul Kapinos
CentOS 5.5 is still a problem; other versions of GCC seem not to have the same issue. Best wishes, Paul

[OMPI users] Configure fail: OpenMPI/1.5.3 with Support for LSF using Sun Studio compilers

2011-04-07 Thread Paul Kapinos
for the availability of `ceil' for the C compiler (see config.log.ceil). This check says `ceil' is *available* for the "cc" compiler, which is *wrong*, cf. (4). So, is there an error in the configure stage? Or else the checks in config.log.ceil do not rely on the av

Re: [OMPI users] Configure fail: OpenMPI/1.5.3 with Support for LSF using Sun Studio compilers

2011-04-07 Thread Paul Kapinos
the "cc" compiler without the need for the -lm flag - and this is *wrong*: "cc" needs -lm. It seems to me to be a configure issue. Greetings Paul
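The suspicion is easy to test by hand with the same kind of program configure generates; if the compiler folds ceil(constant) at compile time, linking succeeds even without -lm and the check draws the wrong conclusion. A sketch of the idea:

    $ cat > conftest.c <<'EOF'
    #include <math.h>
    int main(void) { return (int)ceil(1.5); }   /* constant argument may be folded at compile time */
    EOF
    $ cc conftest.c && echo "configure would conclude: no -lm needed"
    $ cc conftest.c -lm   # the portable link line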

[OMPI users] --enable-progress-threads broken in 1.5.3?

2011-04-28 Thread Paul Kapinos
same way. Best wishes, Paul

[OMPI users] How to use a wrapper for ssh?

2011-07-12 Thread Paul Kapinos
'ssh' stat64("/opt/lsf/8.0/linux2.6-glibc2.3-x86_64/bin/ssh", 0x8324) = -1 ENOENT (No such file or directory) ===> OMPI_MCA_orte_rsh_agent does not work?!

Re: [OMPI users] How to use a wrapper for ssh?

2011-07-13 Thread Paul Kapinos
orrect. Maybe someone can correct it? This would save some time for people like me... Best wishes Paul Kapinos
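A working pattern for the archives (the wrapper path is a placeholder; the parameter is plm_rsh_agent in the 1.3/1.5 series, with orte_rsh_agent as a later synonym):

    $ cat ssh_wrapper.sh
    #!/bin/sh
    # hypothetical wrapper: log the call, then exec the real ssh
    echo "ssh $*" >> $HOME/ssh_calls.log
    exec /usr/bin/ssh "$@"
    $ mpiexec --mca plm_rsh_agent $PWD/ssh_wrapper.sh -np 4 -H node1,node2 ./app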

[OMPI users] Does Oracle Cluster Tools aka Sun's MPI work with LDAP?

2011-07-15 Thread Paul Kapinos
in file plm_rsh_module.c at line 1058

Re: [OMPI users] Does Oracle Cluster Tools aka Sun's MPI work with LDAP?

2011-07-20 Thread Paul Kapinos
tification modes. The 32bit version works only with the NIS-authenticated part of our cluster. Thanks for your help! Best wishes Paul Kapinos Reuti wrote: Hi, Am 15.07.2011 um 21:14 schrieb Terry Dontje: On 7/15/2011 1:46 PM, Paul Kapinos wrote: Hi OpenMPI folks (and Oracle/Sun experts),

[OMPI users] Configure(?) problem building 1.5.3 on ScientificLinux6.0

2011-07-22 Thread Paul Kapinos
M4_CONFIG_COMPONENT is expanded from... config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from... config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from... configure.ac:953: warning: AC_RUN_IFELSE was called before AC_USE_SYSTEM_EXTENSIONS

[OMPI users] Usage of PGI compilers (Libtool or OpenMPI issue?)

2011-07-22 Thread Paul Kapinos
warnings: pgCC-Warning-prelink_objects switch is deprecated pgCC-Warning-instantiation_dir switch is deprecated coming from the below-noted call. I do not know whether this is a Libtool issue or a libtool-usage (=OpenMPI) issue, but I do not want to keep this secret... Best wishes Paul Kapinos

[OMPI users] One more pgi+libtool issue

2011-07-22 Thread Paul Kapinos
BUT, in the configure line (below) I get the -m32 flag!! So, where does the -m32 flag get lost? Did I do something wrong? Best wishes and a nice weekend, Paul Kapinos P.S. again, some more logs downloadable from here: https://gigamove.rz.rwth-aachen.de/d/id/xoQ2

[OMPI users] and the next one (3rd today!) PGI+OpenMPI issue

2011-07-22 Thread Paul Kapinos
/WNk69nPr4w7svT

Re: [OMPI users] Configure(?) problem building 1.5.3 on ScientificLinux6.0

2011-07-22 Thread Paul Kapinos
will trigger our admins... Best wishes, Paul m4 (GNU M4) 1.4.13 (OK) autoconf (GNU Autoconf) 2.63 (Need: 2.65, NOK) automake (GNU automake) 1.11.1 (OK) ltmain.sh (GNU libtool) 2.2.6b (OK) On Jul 22, 2011, at 9:12 AM, Paul Kapinos wrote: Dear OpenMPI volks, currently I have a probl

[OMPI users] problems with Intel 12.x compilers and OpenMPI (1.4.3)

2011-09-23 Thread Paul Kapinos
worked around the problem by switching our production to 1.5.3, this issue is not a "burning" one; but I still decided to post it because any issue in such fundamental things may be interesting for the developers. Best wishes, Paul Kapinos (*) http://www.netlib.org/ben

[OMPI users] wiki and "man mpirun" odds, and a question

2011-11-10 Thread Paul Kapinos
[long] list of variables. Is there some envvar, by setting which to a list of names of other envvars, the same effect could be achieved as by setting -x on the command line of mpirun? Best wishes Paul Kapinos

[OMPI users] How are the Open MPI processes spawned?

2011-11-21 Thread Paul Kapinos
ea what is going on? Best, Paul Kapinos P.S: no alias names used, all names are real ones

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-22 Thread Paul Kapinos
constellation. The next thing I will try will be the installation of 1.5.4 :o) Best, Paul P.S. started: $ /opt/MPI/openmpi-1.5.3/linux/intel/bin/mpiexec --hostfile hostfile-mini -mca odls_base_verbose 5 --leave-session-attached --display-map helloworld 2>&1 | tee helloworld.txt

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-23 Thread Paul Kapinos
above command should disable the usage of eth0 for the MPI communication itself, but it hangs just before MPI is started, doesn't it? (because one process is missing, MPI_INIT cannot be passed) Now a question: is there a way to forbid mpiexec to use some interfaces at all? Best wishes, Paul
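The knobs that answer this question, as a sketch (interface names are examples):

    $ mpiexec --mca oob_tcp_if_exclude lo,eth0 --mca btl_tcp_if_exclude lo,eth0 -np 4 ./app
    # or positively: pin both the OOB channel and the TCP BTL to one interface
    $ mpiexec --mca oob_tcp_if_include ib0 --mca btl_tcp_if_include ib0 -np 4 ./app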

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-24 Thread Paul Kapinos
command line options. This should not be so? (I also tried to provide the envvars by -x OMPI_MCA_oob_tcp_if_include -x OMPI_MCA_btl_tcp_if_include - nothing changed. Well, they are OMPI_ variables and should be provided in any case). Best wishes and many thanks for all, Paul K

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-25 Thread Paul Kapinos
ovided and thus treated *differently* than other envvars: $ man mpiexec Exported Environment Variables All environment variables that are named in the form OMPI_* will automatically be exported to new processes on the local and remote nodes. So, does the man page lie, or this
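The man-page behaviour quoted here means MCA parameters can simply be set once in the launching shell, e.g.:

    $ export OMPI_MCA_oob_tcp_if_include=ib0   # OMPI_* variables are forwarded to local and remote ranks
    $ export OMPI_MCA_btl_tcp_if_include=ib0
    $ mpiexec -np 4 -H node1,node2 ./app       # no -x needed for these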

[OMPI users] Open MPI and DAPL 2.0.34 are incompatible?

2011-12-02 Thread Paul Kapinos
on as possible) Best wishes and a nice weekend, Paul http://www.openfabrics.org/downloads/OFED/release_notes/OFED_1.5.4_release_notes

Re: [OMPI users] How are the Open MPI processes spawned?

2011-12-06 Thread Paul Kapinos
s! Best wishes and a nice evening/day/whatever time you have! Paul Kapinos

Re: [OMPI users] wiki and "man mpirun" odds, and a question

2011-12-06 Thread Paul Kapinos
FOBA -x BAFO -x RR -x ZZ" Well, these are my user's dreams; but maybe this gives some inspiration to the Open MPI programmers. As said, the situation where a [long] list of envvars has to be provided is quite common, and typing everything on the command line is tedious and error-prone. Best wishes
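For the record: this wish was later addressed by the mca_base_env_list parameter (added in the 1.8 series; the variable names below are the placeholders from the mail):

    $ mpiexec -x FOBA -x BAFO -x RR -x ZZ -np 4 ./app                # one -x per variable
    $ mpiexec --mca mca_base_env_list "FOBA;BAFO;RR;ZZ" -np 4 ./app  # later releases: one list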

Re: [OMPI users] Open MPI and DAPL 2.0.34 are incompatible?

2011-12-06 Thread Paul Kapinos
--- Because of the anticipated performance gain we would be very keen on using DAPL with Open MPI. Does somebody have any idea what could be wrong and what to check? On Dec 2, 2011, at 1:21 PM, Paul Kapinos wrote: Dear Open MPI developer, OFED 1.5.4 will

Re: [OMPI users] Configure(?) problem building 1.5.3 on ScientificLinux6.0

2011-12-09 Thread Paul Kapinos
totools). I suspect that if you do this: - tar xf openmpi-1.5.3.tar.bz2 cd openmpi-1.5.3 ./configure etc. - everything will work just fine. On Jul 22, 2011, at 11:12 AM, Paul Kapinos wrote: Dear OpenMPI volks, currently I have a problem by building the version 1.5.3 of OpenMPI on

Re: [OMPI users] Open MPI and DAPL 2.0.34 are incompatible?

2011-12-22 Thread Paul Kapinos
.conf file." Well. Any suggestions? Will OpenMPI ever be able to use DAPL 2.0 on Linux? Merry Christmas from westernmost Germany, Paul Paul Kapinos wrote: Good morning, We've never recommended the use of dapl on Linux. I think it might have worked at one time, but I don't thi

Re: [OMPI users] SIGV at MPI_Cart_sub

2012-01-10 Thread Paul Kapinos
help me in solving this? Regards, Anas

[OMPI users] rankfiles on really big nodes broken?

2012-01-20 Thread Paul Kapinos
s computer dimension is a bit too big for the pinning infrastructure now. A bug? Best wishes, Paul Kapinos P.S. see the attached .tgz for some logs -- Rankfiles Rankfiles provide a means for specifying detailed i
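For context, the rankfile format from mpirun(1) that the pinning infrastructure has to digest; a minimal sketch for two ranks (hostname and slots are examples):

    $ cat rankfile
    rank 0=host1 slot=0:0
    rank 1=host1 slot=0:1
    $ mpirun -np 2 --rankfile rankfile ./app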

Re: [OMPI users] rankfiles on really big nodes broken?

2012-01-23 Thread Paul Kapinos
r 1.5.x is a good idea; but it is always a bit tedious... Will 1.5.5 arrive soon? Best wishes, Paul Kapinos Ralph Castain wrote: I don't see anything in the code that limits the number of procs in a rankfile. > Are the attached rankfiles the ones you are trying to use? I

Re: [OMPI users] Mpirun: How to print STDOUT of just one process?

2012-02-01 Thread Paul Kapinos
Try out the attached wrapper: $ mpiexec -np 2 masterstdout mpirun -n 2 Is there a way to have mpirun just merge the STDOUT of one process into its STDOUT stream?
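The attachment itself is not in the archive, but the idea can be reconstructed from Open MPI's per-process environment variables (a sketch, not the original wrapper):

    $ cat masterstdout
    #!/bin/sh
    # pass stdout through only for rank 0; silence all other ranks
    if [ "$OMPI_COMM_WORLD_RANK" = "0" ]; then
        exec "$@"
    else
        exec "$@" > /dev/null
    fi
    $ mpiexec -np 2 ./masterstdout ./app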

[OMPI users] Environment variables [documentation]

2012-02-27 Thread Paul Kapinos
RANK is? (This would make sense as it looks like the OMPI_COMM_WORLD_SIZE, OMPI_COMM_WORLD_RANK pair.) If yes, maybe it should also be documented in the Wiki page. 2) OMPI_COMM_WORLD_NODE_RANK - is that just a duplicate of OMPI_COMM_WORLD_LOCAL_RANK? Best wishes, Paul Kapinos
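These variables are easy to inspect without writing any MPI code (a sketch; hosts are placeholders):

    $ mpiexec -np 4 -H node1,node2 sh -c \
        'echo "size=$OMPI_COMM_WORLD_SIZE rank=$OMPI_COMM_WORLD_RANK local=$OMPI_COMM_WORLD_LOCAL_RANK node=$OMPI_COMM_WORLD_NODE_RANK"'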

[OMPI users] Problem running over IB with huge data set

2012-02-27 Thread Paul Kapinos
able]. Ralph, Jeff, anybody - any interest in reproducing this issue? Best wishes, Paul Kapinos P.S. Open MPI 1.5.3 used - still waiting for 1.5.5 ;-) Some error messages: with 6 procs over 6 Nodes: -- mlx4:

Re: [OMPI users] Hybrid OpenMPI / OpenMP programming

2012-03-02 Thread Paul Kapinos

[OMPI users] Still bothered / cannot run an application

2012-07-12 Thread Paul Kapinos
ue? Are you interested in reproducing this? Best, Paul Kapinos P.S: The same test with Intel MPI cannot run using DAPL, but runs very fine over 'ofa' (= native verbs as Open MPI uses it). So I believe the problem is rooted in the communication pattern of the program; it sends very LARGE messag

Re: [OMPI users] Re :Re: OpenMP and OpenMPI Issue

2012-07-23 Thread Paul Kapinos
ouble over our infiniband network. I'm running a fairly large problem (uses about 18GB), and part way in, I get the following errors: You say "big footprint"? I hear a bell ringing... http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem

Re: [OMPI users] Infiniband performance Problem and stalling

2012-08-28 Thread Paul Kapinos
chunk size of 64k is fairly small

Re: [OMPI users] OMPI 1.6.x Hang on khugepaged 100% CPU time

2012-09-05 Thread Paul Kapinos
rote: I'm checking it with OFED folks, but I doubt that there are some dedicated tests for THP. So do you see it only with a specific application and only on a specific data set? Wonder if I can somehow reproduce it in-house...

[OMPI users] too much stack size: _silently_ failback to IPoIB

2012-10-05 Thread Paul Kapinos
2 -H linuxbdc01,linuxbdc02 /home/pk224850/bin/ulimit_high.sh MPI_FastTest.exe
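The wrapper in the command line above presumably raises the stack limit before exec'ing the real program; a minimal sketch of such a ulimit_high.sh (the actual attachment is not reproduced here):

    #!/bin/sh
    # raise the stack size limit for this process, then replace it with the real program
    ulimit -s unlimited
    exec "$@"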

Re: [OMPI users] Performance/stability impact of thread support

2012-10-30 Thread Paul Kapinos
rmance/stability? Daniel

[OMPI users] Multirail + Open MPI 1.6.1 = very big latency for the first communication

2012-10-31 Thread Paul Kapinos
unning#mpi-preconnect) there are no such huge latency outliers for the first sample. Well, we know about warm-up and lazy connections. But 200x?! Any comments on whether that is OK? Best, Paul Kapinos (*) E.g. HPCC explicitly says in http://icl.cs.utk.edu/hpcc/faq/index.html#132 > Addit
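The FAQ entry referenced above boils down to one MCA switch that establishes all connections during MPI_Init (parameter name as in the 1.5/1.6-era documentation):

    $ mpiexec --mca mpi_preconnect_mpi 1 -np 4 ./app   # pay the warm-up cost up front, not in the first sample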

Re: [OMPI users] MPI_Alltoallv performance regression 1.6.0 to 1.6.1

2012-12-19 Thread Paul Kapinos
We 'tune' our Open MPI by setting environment variables.... Best Paul Kapinos On 12/19/12 11:44, Number Cruncher wrote: Having run some more benchmarks, the new default is *really* bad for our application (2-10x slower), so I've been looking at the source to try and figure out
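'Tuning by environment variables' means the usual OMPI_MCA_* form; an illustrative sketch for pinning the tuned component's Alltoallv algorithm (the value is an example, not a recommendation):

    $ export OMPI_MCA_coll_tuned_use_dynamic_rules=1
    $ export OMPI_MCA_coll_tuned_alltoallv_algorithm=1   # 0 = let the component decide
    $ mpiexec -np 32 ./app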

Re: [OMPI users] Initializing OMPI with invoking the array constructor on Fortran derived types causes the executable to crash

2013-01-11 Thread Paul Kapinos

Re: [OMPI users] openmpi, 1.6.3, mlx4_core, log_num_mtt and Debian/vanilla kernel

2013-02-21 Thread Paul Kapinos
"enough-registered-mem" computation for Mellanox HCAs? Any other idea/hint?
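For reference, the computation in question is driven by the mlx4 module parameters from the FAQ; a sketch of the usual fix (values are illustrative and must be sized to the node's RAM):

    # /etc/modprobe.d/mlx4_core.conf
    options mlx4_core log_num_mtt=24 log_mtts_per_seg=1
    # registerable memory ~ 2^log_num_mtt * 2^log_mtts_per_seg * page_size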

Re: [OMPI users] bug in mpif90? OMPI_FC envvar does not work with 'use mpi'

2013-03-13 Thread Paul Kapinos
e meantime).

[OMPI users] OpenMPI 1.6.4, MPI I/O on Lustre, 32bit: bug?

2013-03-25 Thread Paul Kapinos
ations. Otherwise we will probably ignore it... Best Paul Kapinos (*) we have a kind of internal test suite to check our MPIs... P.S. $ mpicc -O0 -m32 -o ./mpiIOC32.exe ctest.c -lm P.S.2 an example configure line: ./configure --with-openib --with-lsf --with-devel-headers --enable-con

Re: [OMPI users] cannot build 32-bit openmpi-1.7 on Linux

2013-04-05 Thread Paul Kapinos
reproducer and send it to the compiler developer team :o) Best Paul Kapinos On 04/05/13 17:56, Siegmar Gross wrote: PPFC mpi-f08.lo "../../../../../openmpi-1.7/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Line = 1, Column = 1: INTERNAL: Interrupt: Segmentation fault

Re: [OMPI users] OMPI v1.7.1 fails to build on RHEL 5 and RHEL 6

2013-04-18 Thread Paul Kapinos
g. Any suggestions? Thanks Tim Dunn

Re: [OMPI users] Building Open MPI with LSF

2013-05-07 Thread Paul Kapinos
tight integration with LSF 8.0 now =) For the future, if you need a testbed, I can grant you user access... best Paul

Re: [OMPI users] basic questions about compiling OpenMPI

2013-05-22 Thread Paul Kapinos
usually a bit dusty.

Re: [OMPI users] 1.7.1 Hang with MPI_THREAD_MULTIPLE set

2013-06-03 Thread Paul Kapinos
pilers) or -lmpi_mt instead of -lmpi (other compilers). However, Intel MPI is not free. Best, Paul Kapinos Also, I recommend to _always_ check what kind of threading level you ordered and what you got: print *, 'hello, world!', MPI_THREAD_MULTIPLE, provided On 05/31/

Re: [OMPI users] knem/openmpi performance?

2013-07-15 Thread Paul Kapinos
isturb the production on these nodes (and different MPI versions for different nodes are doofy). Best Paul

[OMPI users] Big job, InfiniBand, MPI_Alltoallv and ibv_create_qp failed

2013-07-30 Thread Paul Kapinos
Best, Paul Kapinos P.S. There should be no connection problem between the nodes; a test job with 1x process on each node ran successfully just before starting the actual job, which also ran OK for a while - until calling MPI_Allt

Re: [OMPI users] Big job, InfiniBand, MPI_Alltoallv and ibv_create_qp failed

2013-08-01 Thread Paul Kapinos
Vanilla Linux OFED from RPMs for Scientific Linux release 6.4 (Carbon) (= RHEL 6.4). No ofed_info available :-( On 07/31/13 16:59, Mike Dubman wrote: Hi, What OFED vendor and version do you use? Regards M On Tue, Jul 30, 2013 at 8:42 PM, Paul Kapinos wrote:

Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Paul Kapinos
y reasonable for me to be wrong about all of this. Jeff

Re: [OMPI users] MPI_Init_thread hangs in OpenMPI 1.7.1 when using --enable-mpi-thread-multiple

2013-10-23 Thread Paul Kapinos

[OMPI users] SIGSEGV in opal_hwloc152_hwloc_bitmap_or // Bug in 'hwloc'?

2013-10-31 Thread Paul Kapinos
lls like an error in the 'hwloc' library. Is there a way to disable hwloc or to debug it somehow? (besides building a debug version of hwloc and Open MPI) Best Paul

[OMPI users] Problems with compiling OpenMPI 1.2.7

2008-08-29 Thread Paul Kapinos
terminate declarations. "mpicxx.cc", line 293: Error: A declaration was expected instead of "0x01". 3 Error(s) detected ... + So, it seems to me there is something nasty with one or more declarations somewhere... Does somebody have any idea what I

[OMPI users] Why compiling in global paths (only) for configuration files?

2008-09-08 Thread Paul Kapinos
t I mean, is that the paths for the configuration files, which opal_wrapper needs, may be set locally like ../share/openmpi/*** without affecting the integrity of OpenMPI. Maybe there were more places where the usage of local paths may be needed to allow a movable (relocatable) OpenMPI. What do
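The mechanism that later resolved this: builds that honor OPAL_PREFIX can be moved and used without recompiling (a sketch; the path is a placeholder):

    $ export OPAL_PREFIX=/new/home/of/openmpi
    $ export PATH=$OPAL_PREFIX/bin:$PATH
    $ export LD_LIBRARY_PATH=$OPAL_PREFIX/lib:$LD_LIBRARY_PATH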

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-09 Thread Paul Kapinos
Hi, First, consider updating to a newer OpenMPI. Second, look at your environment on the box where you start OpenMPI (run mpirun ...). Type ulimit -n to see how many file descriptors your environment has (ulimit -a for all limits). Note, every process on older versions of OpenMPI (prior 1
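The suggested checks, as commands:

    $ ulimit -n        # current soft limit on open file descriptors
    $ ulimit -a        # all limits
    $ ulimit -n 4096   # raise the soft limit for this shell (up to the hard limit)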

Re: [OMPI users] Why compiling in global paths (only) for configuration files?

2008-09-15 Thread Paul Kapinos
upgraded? Best regards Paul Kapinos On Sep 8, 2008, at 5:33 AM, Paul Kapinos wrote: Hi all! We are using OpenMPI on a variety of machines (running Linux, Solaris/Sparc and /Opteron) using a couple of compilers (GCC, Sun Studio, Intel, PGI, 32 and 64 bit...) so we have at least 15

Re: [OMPI users] Why compiling in global paths (only) for configuration files?

2008-09-17 Thread Paul Kapinos
this makes the package really not relocatable without parsing the configure files. Did you (or anyone reading this message) have any contact with the SUN developers to point out this circumstance? *Why* do they use hard-coded paths? :o) best regards, Paul Kapinos # # Default word-size (used

Re: [OMPI users] Why compiling in global paths (only) for configuration files?

2008-09-17 Thread Paul Kapinos
ry/Rolf -- can you comment? I will write a separate eMail to ct-feedb...@sun.com Best regards, Paul Kapinos

Re: [OMPI users] Why compiling in global paths (only) for configuration files?

2008-09-17 Thread Paul Kapinos
envvars, and parsing configuration files I think installing everything to hard-coded paths is somewhat inflexible. Maybe you can provide relocatable RPMs somewhere in the future? But as mentioned above, our main goal is to have both versions of CT working on the same system. Best regards, Paul K

[OMPI users] Errors compiling OpenMPI 1.2.8 with SUN Studio express (2008/07/10) in 32bit mode

2008-10-16 Thread Paul Kapinos
We use Scientific Linux 5.1 which is a Red Hat Enterprise 5 Linux. $ uname -a Linux linuxhtc01.rz.RWTH-Aachen.DE 2.6.18-53.1.14.el5_lustre.1.6.5custom #1 SMP Wed Jun 25 12:17:09 CEST 2008 x86_64 x86_64 x86_64 GNU/Linux configured with: ./configure --enable-static --with-devel-headers CF

[OMPI users] an MPI process using about 12 file descriptors per neighbour process - isn't that a bit too much?

2009-08-14 Thread Paul Kapinos
"-mca opal_set_max_sys_limits 1" to the command line), but we do not see any change of behaviour. What is your opinion? Best regards, Paul Kapinos RZ RWTH Aachen # /opt/SUNWhpc/HPC8.2/intel/bin/mpiexec -mca opal_set_max_sys_limits 1 -np
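A back-of-the-envelope check of why the limit bites, using the ~12 descriptors per neighbour from the subject line (numbers are illustrative):

    $ echo $((12 * 255))   # one process in a fully connected 256-rank job
    3060
    $ ulimit -n            # a common default soft limit
    1024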

[OMPI users] an environment variable with the same meaning as the -x option of mpiexec

2009-11-06 Thread Paul Kapinos
meaning? Writing environment variables on the command line is ugly and tedious... I've searched for this info on the OpenMPI web pages for about an hour and didn't find the answer :-/ Thanking you in anticipation, Paul

Re: [OMPI users] an environment variable with the same meaning as the -x option of mpiexec

2009-11-10 Thread Paul Kapinos
not? I can add it to the "to-do" list for a rainy day :-) That would be great :-) Thanks for your help! Paul Kapinos with the -x option of mpiexec there is a way to distribute environment variables: -x Export the specified environment variables to the remote

Re: [OMPI users] an environment variable with the same meaning as the -x option of mpiexec

2009-11-10 Thread Paul Kapinos
is. This is an ugly but working workaround. What I wanted to achieve with my mail was a less ugly solution :o) Thanks for your help, Paul Kapinos Not at the moment - though I imagine we could create one. It is a tad tricky in that we allow multiple -x options on the cmd line, but we obviousl
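A sketch of the kind of workaround being described: generate the -x options from a file listing the variable names (varnames.txt is a hypothetical helper, not from the original mail):

    $ XOPTS=$(for v in $(cat varnames.txt); do printf -- '-x %s ' "$v"; done)
    $ mpiexec $XOPTS -np 4 ./app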

[OMPI users] exceeding virtual memory consumption of MPI environment if setting "ulimit -s" higher

2009-11-19 Thread Paul Kapinos
unt of stack size for each process? And why consume the virtual memory at all? We guess this virtual memory is allocated for the stack (why else would it be related to the stack size ulimit). But is such an allocation really needed? Is there a way to avoid the waste of virtual memory? best regards,
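The observation can be reproduced roughly like this (a sketch; the process name is an example):

    $ ulimit -s 1024000          # large stack limit, in kB
    $ mpiexec -np 2 ./app &
    $ ps -o vsz,rss,comm -C app  # the reported virtual size (VSZ) grows with the stack ulimit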

Re: [OMPI users] exceeding virtual memory consumption of MPI environment if setting "ulimit -s" higher

2009-12-03 Thread Paul Kapinos
? Though I am not sure why it would expand based on stack size. --td > Date: Thu, 19 Nov 2009 19:21:46 +0100 > From: Paul Kapinos > Subject: [OMPI users] exceeding virtual memory consumption of MPI environment if setting "ulimit -s

[OMPI users] MPI_Comm_set_errhandler: error in Fortran90 Interface mpi.mod

2010-05-03 Thread Paul Kapinos
RLD should be possible, which it currently is not. Best wishes, Paul Kapinos PROGRAM sunerr USE MPI

Re: [OMPI users] Fortran derived types

2010-05-06 Thread Paul Kapinos
er after the receive. -- Prentice

[OMPI users] OMPIO correctnes issues

2015-12-09 Thread Paul Kapinos
amples of divergent behaviour but this one is quite handy. Is that a bug in OMPIO or did we miss something? Best Paul Kapinos 1) http://www.open-mpi.org/faq/?category=ompio 2) http://www.open-mpi.org/community/lists/devel/2015/12/18405.php 3) (ROMIO is default; on local hard drive
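For anyone reproducing this, the MPI-IO implementation is selectable per run via the io framework (component names as in the 1.10 series, where ROMIO is the default):

    $ mpiexec --mca io ompio -np 4 ./mpi_io_test   # force OMPIO
    $ mpiexec --mca io romio -np 4 ./mpi_io_test   # force ROMIO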

Re: [OMPI users] OMPIO correctnes issues

2015-12-09 Thread Paul Kapinos
, that will make things much easier from now on. (and at first glance, that might not be a very tricky bug) Cheers, Gilles On Wednesday, December 9, 2015, Paul Kapinos wrote: Dear Open MPI developers, did OMPIO (1) reach 'usable-stable' s

Re: [OMPI users] OMPIO correctnes issues

2015-12-09 Thread Paul Kapinos
lated one of the rules of the Open MPI release series). Anyway, if there is a simple fix for your test case for the 1.10 series, I am happy to provide a patch. It might take me a day or two however. Edgar On 12/9/2015 6:24 AM, Paul Kapinos wrote: Sorry, forgot to mention: 1.10.1 Open

Re: [OMPI users] openib/mpi_alloc_mem pathology [#20160912-1315]

2017-10-19 Thread Paul Kapinos
InfiniBand is not prohibited, the MPI_Free_mem() takes ages. (I'm not familiar with CCachegrind so forgive me if I'm wrong.) Have a nice day, Paul Kapinos

Re: [OMPI users] openib/mpi_alloc_mem pathology [#20160912-1315]

2017-10-20 Thread Paul Kapinos
On 10/20/2017 12:24 PM, Dave Love wrote: > Paul Kapinos writes: >> Hi all, sorry for the long long latency - this message was buried in my mailbox for months >> On 03/16/2017 10:35 AM, Alfio Lazzaro wrote: >>> Hel

[OMPI users] OpenMPI 1.2.8 on Solaris: configure problems

2008-10-17 Thread Paul Kapinos
ment) So, we think that something is not OK with the ./configure script. Note that we were able to install 1.2.5 and 1.2.6 some time ago on the same boxes without problems. Or maybe we did something wrong? best regards, Paul Kapinos HPC Group RZ RWTH Aachen P.S. Folks, does some

Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI 4.1.2

2022-01-04 Thread Paul Kapinos via users
not state that about gfortran and intel, by the way.) So these guys may be snarky, but they know their Fortran, definitely. And if the Open MPI bindings can be compiled by this compiler - they are likely very standard-conforming. Have a nice day and a nice year 2022, Paul Kapinos On 12/30/21