look at the core dump of 'ompi_info' below.
(yes we know that "^tcp,^ib" is a bad idea).
Have a nice day,
Paul Kapinos
P.S. Open MPI: 1.10.4 and 2.0.1 have the same behaviour
--
[lnm001:39
ntel libs (as
said, changing these out solves/raises the issue) we will fall back to the
16.0.2.181 compiler version. We will try to open a case with Intel - let's see...
Have a nice day,
Paul Kapinos
On 05/06/16 14:10, Jeff Squyres (jsquyres) wrote:
Ok, good.
I asked that question beca
!
Paul Kapinos
On 12/14/16 13:29, Paul Kapinos wrote:
Hello all,
we seem to run into the same issue: 'mpif90' sigsegvs immediately for Open MPI
1.10.4 compiled using Intel compilers 16.0.4.258 and 16.0.3.210, while it works
fine when compiled with 16.0.2.181.
It seems to be a compiler i
nce bug in MPI_Free_mem, your application can be horribly slow (seen with
CP2K) if the InfiniBand fallback of OPA is not disabled manually, see
https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html
Best,
Paul Kapinos
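For illustration only - the concrete workaround is described in the linked thread; this is merely the generic way the openib fallback is usually suppressed, with ./a.out as a placeholder executable:
$ mpirun --mca btl ^openib -np 4 ./a.out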
ntion. We have a (nasty)
workaround, cf.
https://www.mail-archive.com/devel@lists.open-mpi.org/msg00052.html
As far as I can see this issue is on InfiniBand only.
Best
Paul
for multi-node jobs, and that
doesn't show the pathological behaviour iff openib is suppressed.
However, it requires ompi 1.10, not 1.8, which I was trying to use.
).
On 03/07/17 20:22, Nathan Hjelm wrote:
If this is with 1.10.x or older run with --mca memory_linux_disable 1. There is
a bad interaction between ptmalloc2 and psm2 support. This problem is not
present in v2.0.x and newer.
-Nathan
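A hedged example of the suggested invocation (process count and the executable ./a.out are placeholders):
$ mpirun --mca memory_linux_disable 1 -np 4 ./a.out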
On Mar 7, 2017, at 10:30 AM, Paul Kapinos wrote:
Hi Dav
Hi,
On 03/16/17 10:35, Alfio Lazzaro wrote:
We would like to ask you which version of CP2K you are using in your tests
Release 4.1
and
if you can share with us your input file and output log.
The question goes to Mr Mathias Schumacher, on CC:
Best
Paul Kapinos
(Our internal ticketing
```
+#endif
  OPAL_CR_EXIT_LIBRARY();
  return MPI_SUCCESS;
```
This will at least tell us if the innards of our ALLOC_MEM/FREE_MEM (i.e.,
likely the registration/deregistration) are causing the issue.
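For anyone who wants to check this on their own system, a minimal sketch (not taken from the thread; the 64 MiB buffer size is arbitrary) that times MPI_Alloc_mem/MPI_Free_mem:
```c
/* Minimal sketch: time MPI_Alloc_mem / MPI_Free_mem to see whether
   registration/deregistration dominates the cost. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    void  *buf;
    double t0, t1, t2;

    MPI_Init(&argc, &argv);

    t0 = MPI_Wtime();
    MPI_Alloc_mem((MPI_Aint)(64 * 1024 * 1024), MPI_INFO_NULL, &buf);
    t1 = MPI_Wtime();
    MPI_Free_mem(buf);
    t2 = MPI_Wtime();

    printf("MPI_Alloc_mem: %.6f s   MPI_Free_mem: %.6f s\n", t1 - t0, t2 - t1);

    MPI_Finalize();
    return 0;
}
```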
On Mar 15, 2017, at 1:27 PM, Dave Love wrote:
Paul Kapinos writes:
Nathan,
unfortunat
seems that something changed starting from version
2.x, and the FDR system performs much worse than with the prior 1.10.x release.
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
In the 1.10.x series there were 'memory hooks' - Open MPI did take some care about
the alignment. This was removed in the 2.x series, cf. the whole thread at my link.
n I see whether these flags were set or not?
In other words: is it possible to get _all_ of the configure flags from a
"ready" installation without having the compilation dirs (with the
configure logs) any more?
Many thanks
Paul
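For what it is worth, ompi_info prints what it knows about the build; whether it lists every configure flag may depend on the version:
$ ompi_info --all | grep -i 'configure command'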
what configure options were used for a given
installation! "./configure --help" helps, but guessing which of all the
options were used in a release is a hard job...
--td
On Aug 24, 2010, at 7:40 AM, Paul Kapinos wrote:
Hello OpenMPI developers,
I am searching for a way to discover _all
wishes
Paul
-32/10.9/lib -tp
-Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-64/10.9/libso
-Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-64/10.9/lib
-Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-32/10.9/lib
-Wl,-soname -Wl,libopen-pal.so.1 -o .libs/libopen-pal.so.1.0.0
Best wishes,
Paul
CentOS 5.5 is still a problem; also,
other versions of GCC seem not to have the same issue.
Best wishes,
Paul
for the availability of `ceil' for the C compiler (see
config.log.ceil). This check says `ceil' is *available* for the "cc"
Compiler, which is *wrong*, cf. (4).
So, is there an error in the configure stage? Or do the checks in
config.log.ceil not rely on the av
the "cc" compiler without
the need for -lm flag - and this is *wrong*, "cc" need -lm.
It seem for me to be an configure issue.
Greetings
Paul
same way.
Best wishes,
Paul
'ssh'
stat64("/opt/lsf/8.0/linux2.6-glibc2.3-x86_64/bin/ssh", 0x8324) = -1
ENOENT (No such file or directory)
===> OMPI_MCA_orte_rsh_agent does not work?!
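For reference, the intended use of that variable looks like this (hostnames are placeholders; whether the agent is actually picked up is exactly what is in question here):
$ OMPI_MCA_orte_rsh_agent=/usr/bin/ssh mpiexec -np 2 -H node1,node2 hostname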
orrect. Maybe someone can correct it? This
would save some time for people like me...
Best wishes
Paul Kapinos
in file plm_rsh_module.c at line 1058
--
tification modes. The 32-bit version
works with the NIS-authenticated part of our cluster only.
Thanks for your help!
Best wishes
Paul Kapinos
Reuti wrote:
Hi,
Am 15.07.2011 um 21:14 schrieb Terry Dontje:
On 7/15/2011 1:46 PM, Paul Kapinos wrote:
Hi OpenMPI volks (and Oracle/Sun experts),
M4_CONFIG_COMPONENT is expanded
from...
config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
configure.ac:953: warning: AC_RUN_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS
warnings:
pgCC-Warning-prelink_objects switch is deprecated
pgCC-Warning-instantiation_dir switch is deprecated
coming from the below-noted call.
I do not know whether this is a Libtool issue or a libtool-usage (= OpenMPI)
issue, but I do not want to keep this secret...
Best wishes
Paul Kapinos
UT, in the configure line (below) I get the -m32 flag!! So, where does
the -m32 flag get lost? Did I do something wrong?
Best wishes and a nice weekend,
Paul Kapinos
P.S. again, the some more logs downloadable from here:
https://gigamove.rz.rwth-aachen.de/d/id/xoQ2
/WNk69nPr4w7svT
will trigger our admins...
Best wishes,
Paul
m4 (GNU M4) 1.4.13 (OK)
autoconf (GNU Autoconf) 2.63 (Need: 2.65, NOK)
automake (GNU automake) 1.11.1 (OK)
ltmain.sh (GNU libtool) 2.2.6b (OK)
On Jul 22, 2011, at 9:12 AM, Paul Kapinos wrote:
Dear OpenMPI volks,
currently I have a probl
worked around the problem by switching our production to
1.5.3, this issue is not a "burning" one; but I still decided to post
this because any issue in such fundamental things may be interesting for
developers.
Best wishes,
Paul Kapinos
(*) http://www.netlib.org/ben
[long] list of variables.
Is there some envvar which, when set to a list of names of other
envvars, would achieve the same effect as setting -x on the command
line of mpirun?
Best wishes
Paul Kapinos
ea what is going on?
Best,
Paul Kapinos
P.S: no alias names used, all names are real ones
l
constellation. The next thing I will try will be the installation of
1.5.4 :o)
Best,
Paul
P.S. started:
$ /opt/MPI/openmpi-1.5.3/linux/intel/bin/mpiexec --hostfile
hostfile-mini -mca odls_base_verbose 5 --leave-session-attached
--display-map helloworld 2>&1 | tee helloworld.txt
above command should disable
the usage of eth0 for the MPI communication itself, but it hangs just before
MPI is started, doesn't it? (Because one process is missing, MPI_INIT
cannot be passed.)
Now a question: is there a way to forbid mpiexec from using some
interfaces at all?
Best wishes,
Pau
command line options. This should not be so, should it?
(I also tried to provide the envvars via -x
OMPI_MCA_oob_tcp_if_include -x OMPI_MCA_btl_tcp_if_include - nothing
changed. Well, they are OMPI_ variables and should be propagated in any case.)
Best wishes and many thanks for all,
Paul K
ovided and thus treated *differently* than other envvars:
$ man mpiexec
Exported Environment Variables
All environment variables that are named in the form OMPI_* will
automatically be exported to new processes on the local and remote
nodes.
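A concrete illustration of what the quoted paragraph promises (hostnames and the executable are placeholders):
$ export OMPI_MCA_btl_tcp_if_include=ib0
$ mpiexec -np 2 -H node1,node2 ./a.out    # per the man page, the OMPI_* variable should reach both nodes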
So, does the man page tell lies, or this
on as
possible)
Best wishes and a nice weekend,
Paul
http://www.openfabrics.org/downloads/OFED/release_notes/OFED_1.5.4_release_notes
s!
Best wishes and a nice evening/day/whatever time you have!
Paul Kapinos
FOBA -x BAFO -x RR -x ZZ"
Well, these are my user's dreams; but maybe this gives some inspiration to
the Open MPI programmers. As said, the situation where a [long] list of
envvars has to be provided is quite common, and typing everything on the
command line is tedious and error-prone.
Best wi
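One possible stopgap until such a feature exists - a sketch that expands a list of names into repeated -x options (the variable names are the made-up ones from above):
$ mpirun $(for v in FOBA BAFO RR ZZ; do printf -- '-x %s ' "$v"; done) -np 4 ./a.out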
---
Because of the anticipated performance gain we would be very keen on
using DAPL with Open MPI. Does somebody have any idea what could be
wrong and what to check?
On Dec 2, 2011, at 1:21 PM, Paul Kapinos wrote:
Dear Open MPI developer,
OFED 1.5.4 will
totools).
I suspect that if you do this:
-
tar xf openmpi-1.5.3.tar.bz2
cd openmpi-1.5.3
./configure etc.
-
everything will work just fine.
On Jul 22, 2011, at 11:12 AM, Paul Kapinos wrote:
Dear OpenMPI volks,
currently I have a problem by building the version 1.5.3 of OpenMPI on
.conf file."
Well. Any suggestions? Is OpenMPI able to use DAPL 2.0 on Linux at all?
Merry Christmas from westernest Germany,
Paul
Paul Kapinos wrote:
Good morning,
We've never recommended the use of dapl on Linux. I think it might
have worked at one time, but I don't thi
help me in solving this?
Regards,
Anas
s computer dimension is a bit too big for the pinning
infrastructure now. A bug?
Best wishes,
Paul Kapinos
P.S. see the attached .tgz for some logzz
--
Rankfiles
Rankfiles provide a means for specifying detailed i
r 1.5.x is a good idea; but it is always a bit
tedious... Will 1.5.5 arrive any time soon?
Best wishes,
Paul Kapinos
Ralph Castain wrote:
I don't see anything in the code that limits the number of procs in a rankfile.
> Are the attached rankfiles the ones you are trying to use?
I
Try out the attached wrapper:
$ mpiexec -np 2 masterstdout
mpirun -n 2
Is there a way to have mpirun just merge the STDOUT of one process into its
STDOUT stream?
RANK is? (This would make
sense as it looks like the OMPI_COMM_WORLD_SIZE, OMPI_COMM_WORLD_RANK pair.)
If yes, maybe it also should be documented in the Wiki page.
2) OMPI_COMM_WORLD_NODE_RANK - is that just a duplicate of
OMPI_COMM_WORLD_LOCAL_RANK?
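A quick way to inspect the variables in question (a sketch only):
$ mpiexec -np 4 sh -c 'echo rank=$OMPI_COMM_WORLD_RANK local=$OMPI_COMM_WORLD_LOCAL_RANK node=$OMPI_COMM_WORLD_NODE_RANK'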
Best wishes,
Paul Kapinos
able].
Ralph, Jeff, anybody - any interest in reproducing this issue?
Best wishes,
Paul Kapinos
P.S. Open MPI 1.5.3 used - still waiting for 1.5.5 ;-)
Some error messages:
with 6 procs over 6 Nodes:
--
mlx4:
ue? Are you interested in reproducing this?
Best,
Paul Kapinos
P.S: The same test with Intel MPI cannot run using DAPL, but runs very fine over
'ofa' (= native verbs, as Open MPI uses it). So I believe the problem is rooted in
the communication pattern of the program; it sends very LARGE messag
ouble
over our infiniband network. I'm running a fairly large problem (uses about
18GB), and part way in, I get the following errors:
You say "big footprint"? I hear a bell ringing...
http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
chunk size of 64k is fairly small
rote:
I'm checking it with OFED folks, but I doubt that there are some dedicated
tests for THP.
So do you see it only with a specific application and only on a specific
data set? Wonder if I can somehow reproduce it in-house...
2 -H linuxbdc01,linuxbdc02 /home/pk224850/bin/ulimit_high.sh MPI_FastTest.exe
rmance/stability?
Daniel
unning#mpi-preconnect) there are no such
huge latency outliers for the first sample.
Well, we know about the warm-up and lazy connections.
But 200x?!
Any comments on whether that is OK?
Best,
Paul Kapinos
(*) E.g. HPCC explicitely say in http://icl.cs.utk.edu/hpcc/faq/index.html#132
> Addit
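To rule the lazy connection setup in or out, the pre-connect switch from the linked FAQ entry can be tried; the exact MCA parameter name depends on the release and is an assumption here:
$ mpiexec --mca mpi_preconnect_mpi 1 -np 24 ./a.out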
We 'tune' our Open MPI by setting environment variables....
Best
Paul Kapinos
On 12/19/12 11:44, Number Cruncher wrote:
Having run some more benchmarks, the new default is *really* bad for our
application (2-10x slower), so I've been looking at the source to try and figure
out
nough-registered-mem" computation for Mellanox HCAs? Any
other idea/hint?
e meantime).
ations. Otherwise we will ignore it, probably...
Best
Paul Kapinos
(*) we've got a kind of internal test suite in order to check our MPIs...
P.S. $ mpicc -O0 -m32 -o ./mpiIOC32.exe ctest.c -lm
P.S.2 an example configure line:
./configure --with-openib --with-lsf --with-devel-headers
--enable-con
oducer and send it to the compiler developer team :o)
Best
Paul Kapinos
On 04/05/13 17:56, Siegmar Gross wrote:
PPFC mpi-f08.lo
"../../../../../openmpi-1.7/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Line = 1,
Column = 1: INTERNAL: Interrupt: Segmentation fault
g.
Any suggestions?
Thanks
Tim Dunn
tight integration with LSF 8.0 now =)
For the future, if you need a testbed, I can grant you user access...
best
Paul
usually a bit
dusty.
pilers) or -lmpi_mt instead of -lmpi (other compilers). However, Intel
MPI is not free.
Best,
Paul Kapinos
Also, I recommend to _always_ check what kind of threading level you ordered
and what you actually got:
print *, 'hello, world!', MPI_THREAD_MULTIPLE, provided
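A minimal C sketch of that check, assuming one asks for MPI_THREAD_MULTIPLE:
```c
/* Sketch: always compare the threading level you ordered with the
   level the library actually provided. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    printf("hello, world! requested %d, provided %d\n",
           MPI_THREAD_MULTIPLE, provided);

    MPI_Finalize();
    return 0;
}
```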
On 05/31/
isturb the production on these nodes (and different MPI
versions for different nodes are doofy).
Best
Paul
Best,
Paul Kapinos
P.S. There should be no connection problem anywhere between the nodes; a test
job with 1 process on each node was run successfully just before starting
the actual job, which also ran OK for a while - until calling MPI_Allt
Vanilla Linux ofed from RPM's for Scientific Linux release 6.4 (Carbon) (= RHEL
6.4).
No ofed_info available :-(
On 07/31/13 16:59, Mike Dubman wrote:
Hi,
What OFED vendor and version do you use?
Regards
M
On Tue, Jul 30, 2013 at 8:42 PM, Paul Kapinos <kapi...@rz.rwth-aachen.de>
y reasonable for me to be
wrong about all of this.
Jeff
lls like an error in the
'hwloc' library.
Is there a way to disable hwloc or to debug it somehow?
(besides building a debug version of hwloc and OpenMPI)
Best
Paul
terminate declarations.
"mpicxx.cc", line 293: Error: A declaration was expected instead of "0x01".
3 Error(s) detected
...
+
So, it seems to me, there is something nasty with one or more
declarations somewhere...
Does somebody have any idea what I
t I mean is that the paths for the configuration files, which
opal_wrapper needs, may be set locally like ../share/openmpi/***
without affecting the integrity of OpenMPI. Maybe there were more
places where the usage of local paths may be needed to allow a movable
(relocatable) OpenMPI.
What do
Hi,
First, consider to update to newer OpenMPI.
Second, look at your environment on the box that starts OpenMPI (runs
mpirun ...).
Type
ulimit -n
to explore how many file descriptors your environment has. (ulimit -a
for all limits.) Note, every process on older versions of OpenMPI (prior
1
upgraded?
Best regards Paul Kapinos
On Sep 8, 2008, at 5:33 AM, Paul Kapinos wrote:
Hi all!
We are using OpenMPI on a variety of machines (running Linux,
Solaris/SPARC and /Opteron) using a couple of compilers (GCC, Sun
Studio, Intel, PGI, 32 and 64 bit...) so we have at least 15
this makes the package really not relocatable without parsing
the configure files.
Did you (or anyone reading this message) have any contact with the SUN
developers to point out this circumstance? *Why* do they use hard-coded
paths? :o)
best regards,
Paul Kapinos
#
# Default word-size (used
ry/Rolf -- can you comment?
I will write a separate eMail to ct-feedb...@sun.com
Best regards,
Paul Kapinos
envvars, and parsing configuration files.
I think installing everything to hard-coded paths is somewhat inflexible.
Maybe you could provide relocatable RPMs sometime in the future?
But as mentioned above, our main goal is to have both versions of CT
working on the same system.
Best regards,
Paul K
We use Scientific Linux 5.1, which is a Red Hat Enterprise 5 Linux.
$ uname -a
Linux linuxhtc01.rz.RWTH-Aachen.DE 2.6.18-53.1.14.el5_lustre.1.6.5custom
#1 SMP Wed Jun 25 12:17:09 CEST 2008 x86_64 x86_64 x86_64 GNU/Linux
configured with:
./configure --enable-static --with-devel-headers CF
"-mca opal_set_max_sys_limits 1" to the command line), but
we do not see any change of behaviour).
What is your opinion?
Best regards,
Paul Kapinos
RZ RWTH Aachen
#
/opt/SUNWhpc/HPC8.2/intel/bin/mpiexec -mca opal_set_max_sys_limits 1
-np
meaning? Writing
environment variables on the command line is ugly and tedious...
I've searched for this info on the OpenMPI web pages for about an hour and
didn't find the answer :-/
Thanking you in anticipation,
Paul
not?
I can add it to the "to-do" list for a rainy day :-)
That would be great :-)
Thanks for your help!
Paul Kapinos
with the -x option of mpiexec there is a way to distribute environment
variables:
-x Export the specified environment variables to the remote
is. This is a bit ugly, but a working
workaround. What I wanted to achieve with my mail was a less ugly
solution :o)
Thanks for your help,
Paul Kapinos
Not at the moment - though I imagine we could create one. It is a tad
tricky in that we allow multiple -x options on the cmd line, but we
obviousl
unt of stack size for each process?
And why consume the virtual memory at all? We guess this virtual
memory is allocated for the stack (why else would it be related to the
stack size ulimit). But is such an allocation really needed? Is there a
way to avoid the waste of virtual memory?
best regards,
? Though I am
> not sure why it would expand based on stack size?.
>
> --td
>> Date: Thu, 19 Nov 2009 19:21:46 +0100
>> From: Paul Kapinos
>> Subject: [OMPI users] exceedingly virtual memory consumption of MPI
>> environment if higher-setting "ulimit -s
RLD should
be possible, which it currently is not.
Best wishes,
Paul Kapinos
PROGRAM sunerr
USE MPI
er after the receive.
--
Prentice
amples of divergent behaviour but this one is quite handy.
Is that a bug in OMPIO or did we miss something?
Best
Paul Kapinos
1) http://www.open-mpi.org/faq/?category=ompio
2) http://www.open-mpi.org/community/lists/devel/2015/12/18405.php
3) (ROMIO is default; on local hard drive
, that will make things much easier from
now on.
(and at first glance, that might not be a very tricky bug)
Cheers,
Gilles
On Wednesday, December 9, 2015, Paul Kapinos <kapi...@itc.rwth-aachen.de> wrote:
Dear Open MPI developers,
did OMPIO (1) reach 'usable-stable' s
lated one of the rules of the
Open MPI release series). Anyway, if there is a simple fix for your
test case for the 1.10 series, I am happy to provide a patch. It might
take me a day or two, however.
Edgar
On 12/9/2015 6:24 AM, Paul Kapinos wrote:
Sorry, forgot to mention: 1.10.1
Open
nfiniBand is not prohibited, MPI_Free_mem() takes ages.
(I'm not familiar with CCachegrind so forgive me if I'm not right).
Have a nice day,
Paul Kapinos
On 10/20/2017 12:24 PM, Dave Love wrote:
> Paul Kapinos writes:
>
>> Hi all,
>> sorry for the long long latency - this message was buried in my mailbox for
>> months
>>
>>
>>
>> On 03/16/2017 10:35 AM, Alfio Lazzaro wrote:
>>> Hel
ment)
So, we think that something is not OK with the ./configure script. Note
the fact that we were able to install 1.2.5 and 1.2.6 some time ago on
the same boxes without problems.
Or maybe we are doing something wrong?
best regards,
Paul Kapinos
HPC Group RZ RWTH Aachen
P.S. Folks, does some
not state that about gfortran and intel,
by the way.)
So these guys may be snarky, but they can Fortran, definitely. And if the Open MPI
bindings can be compiled by this compiler, they are likely very
standard-conforming.
Have a nice day and a nice year 2022,
Paul Kapinos
On 12/30/21