single instrumentation library. Other MPI implementations (I won't name
names here) also have this C/Fortran "separation", which requires us to
generate two instrumentation libraries, one for C and another for
Fortran apps.
Thank you!
Cheers,
Gilles
On Friday, November 6,
Dear all,
we develop an instrumentation package based on PMPI and we've
observed that PMPI_Ibarrier and PMPI_Ibcast invoke the regular MPI_Comm_size
and MPI_Comm_rank instead of the PMPI symbols (i.e. PMPI_Comm_size and
PMPI_Comm_rank) in OpenMPI 1.10.0.
I have attached a simple example that demonstrates this behaviour.
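Not the attached example, but a minimal sketch of the interposition pattern
involved (tool_log below is a hypothetical helper standing in for the real
instrumentation):

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical tracing helper (stands in for the real instrumentation). */
    static void tool_log(const char *name)
    {
        fprintf(stderr, "[tool] %s intercepted\n", name);
    }

    /* Wrapper that gets re-entered whenever the MPI library itself calls
       MPI_Comm_size instead of PMPI_Comm_size. */
    int MPI_Comm_size(MPI_Comm comm, int *size)
    {
        tool_log("MPI_Comm_size");
        return PMPI_Comm_size(comm, size);
    }

    int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)
    {
        tool_log("MPI_Ibarrier");
        return PMPI_Ibarrier(comm, request);  /* expected to stay within PMPI_* */
    }

With wrappers like these built into the instrumentation library, a
PMPI_Ibarrier that internally calls MPI_Comm_size rather than PMPI_Comm_size
re-enters the MPI_Comm_size wrapper, which is the behaviour described above.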
one month ago.
Cheers,
Gilles
On 9/25/2015 8:07 PM, Harald Servat wrote:
Dear all,
I'd like to point out that the manual page for the C syntax of
MPI_Ibarrier in OpenMPI v1.10.0 is missing the pointer in the MPI_Request argument.
See:
https://www.open-mpi.org/doc/v1.10/man3/MPI_Ibarrier.3.php
https://www.open-mpi.org/doc/v1.10/man3/MPI_Barrier.3.php
Best,
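For reference, the C binding takes the request argument by pointer, so the
prototype should read:

    #include <mpi.h>

    /* MPI C binding: the request is returned through a pointer argument. */
    int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request);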
for another MPI profiler but I couldn't find any. HPCToolkit
looks like it is not maintained anymore, and Vampir no longer maintains
the tool that instruments the application. I will probably give Paraver
a try.
Best regards,
Cristian Ruiz
On 07/22/2015 09:44 AM, Harald Servat wrote:
Cristian,
you might observe super-linear speedup here because on 8 nodes you have 8
times the cache you have on only 1 node. You can also validate that by
checking for cache miss activity using the tools that I mentioned in my
other email.
Best regards.
On 22/07/15 09:42, Cristian RUIZ wrote:
Dear Cristian,
as you probably know, class C is one of the larger classes of the NAS
benchmarks. That likely means that the application spends much
more time on the actual computation than on communication. This
could explain why you see so little difference between the two
Hello list,
we have several questions regarding calls to collectives using
inter-communicators. In the man page for MPI_Bcast, there is a notice for the
inter-communicator case; its text is quoted below our questions.
If I is an inter-communicator between communicators C1 = {p1,p2,p3} and
C2 = {p4,p5,p6}, ...
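For concreteness, here is a minimal sketch (not taken from the thread) of how
the root argument works for MPI_Bcast over an inter-communicator; the 3+3
split and the choice of rank 0 of the first group as root are assumptions
made only for illustration:

    #include <mpi.h>
    #include <stdio.h>

    /* Run with 6 processes to mirror C1 = {p1,p2,p3} and C2 = {p4,p5,p6}. */
    int main(int argc, char **argv)
    {
        int world_rank, local_rank, color, value = 0;
        MPI_Comm intra, inter;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        color = (world_rank < 3) ? 0 : 1;   /* group 1: ranks 0-2, group 2: ranks 3-5 */
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &intra);
        MPI_Comm_rank(intra, &local_rank);

        /* Local leader is rank 0 of each group; the remote leader is the
           other group's leader given as a rank in MPI_COMM_WORLD. */
        MPI_Intercomm_create(intra, 0, MPI_COMM_WORLD, color == 0 ? 3 : 0, 99, &inter);

        if (color == 0) {
            /* Root group: only the root passes MPI_ROOT, the others MPI_PROC_NULL. */
            value = 42;
            MPI_Bcast(&value, 1, MPI_INT,
                      local_rank == 0 ? MPI_ROOT : MPI_PROC_NULL, inter);
        } else {
            /* Remote group: everyone passes the root's rank within the root group. */
            MPI_Bcast(&value, 1, MPI_INT, 0, inter);
            printf("world rank %d received %d\n", world_rank, value);
        }

        MPI_Comm_free(&inter);
        MPI_Comm_free(&intra);
        MPI_Finalize();
        return 0;
    }

The point is that the sending group marks its root with MPI_ROOT (the rest of
that group passes MPI_PROC_NULL), while every process of the receiving group
passes the rank that the root has in the remote group.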
Hello,
may I suggest applying this small patch to fix a typo in the
MPI_Intercomm_merge manual entry?
Best regards
On Monday, 18 June 2012 at 11:39 -0400, Jeff Squyres wrote:
> On Jun 18, 2012, at 11:12 AM, Harald Servat wrote:
>
> > Thank you Jeff. Now, with the following commands, it starts, but it gets
> > blocked before starting. Maybe this is a firewall problem? Do I need
>
On Monday, 18 June 2012 at 10:56 -0400, Jeff Squyres wrote:
> On Jun 18, 2012, at 10:45 AM, Harald Servat wrote:
>
> > # $HOME/aplic/openmpi/1.6/bin/mpirun -np 1 -host
> > localhost ./init_barrier_fini : -x
> > LD_LIBRARY_PATH=/home/Computational/hara
> > Try adding "-x LD_LIBRARY_PATH=" to your mpirun cmd line
> >
> >
> > On Jun 18, 2012, at 7:11 AM, Harald Servat wrote:
> >
> >> Hello list,
> >>
> >> I'd like to use OpenMPI to execute an MPI application in two differe
Hello list,
I'd like to use OpenMPI to execute an MPI application in two different
machines.
Up to now, I've configured and installed OpenMPI 1.6 on my two systems
(each in a different directory). When I execute binaries within a single
system (either of them), the application works well. However, when I try
to run across the two machines, it does not work.
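A rough sketch of the kind of invocation being attempted, assuming Open MPI
is installed under the same path on both machines; machineA and machineB are
placeholder host names, and -x exports LD_LIBRARY_PATH to the remote
processes:

    $HOME/aplic/openmpi/1.6/bin/mpirun --prefix $HOME/aplic/openmpi/1.6 \
        -np 2 -host machineA,machineB -x LD_LIBRARY_PATH ./init_barrier_fini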
a charm!
Regards,
>
> Of course, you may play with -rpath in conjunction with
> --with-wrapper-ldflags.
>
> More info on this in:
> http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0
>
>
> Hope this helps,
> Rainer
>
>
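A sketch of the configure invocation this advice refers to; /opt/openmpi
below is only a placeholder prefix:

    ./configure --prefix=/opt/openmpi \
        --with-wrapper-ldflags="-Wl,-rpath,/opt/openmpi/lib"

Binaries built afterwards with the mpicc/mpif90 wrappers then embed the
Open MPI library directory via rpath, so LD_LIBRARY_PATH is not needed at
run time.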
Hello to all,
I'm working on an SGI Altix machine (IA64) which provides its own MPI
library. It is installed under /usr.
I've installed OpenMPI 1.2 in my own user area in order to do some tests.
However, when I link the test application with the
Hello,
I forgot to attach the sample code.
Regards,
Harald Servat wrote:
> Hello,
>
> I'm using the PERUSE API from OpenMPI in order to know when messages
> arrive. I've executed some simple tests in three different m
Hello,
I'm using the PERUSE API from OpenMPI in order to know when messages
arrive. I've executed some simple tests on three different machines
(FreeBSD/x86 1 CPU, SGI Altix/IA64 128 CPUs, Linux/PPC 4 CPUs), always using
OpenMPI 1.2, and I see that somet
D comm = 134517424 buf = 0x0 count = 0 peer = 0 tag = 1001
RANK 2: total_msg_arrived = 1
RANK 3: total_msg_arrived = 0
I'm running OpenMPI 1.2 on a FreeBSD 6.2 machine with a single processor.
Thank you!
> Thanks,
> george.
>
>
> On Apr 19, 2007, at 11:16 AM, Harald S
Hello,
I'm interested in gathering MSG_ARRIVED events through the PERUSE API
offered by OpenMPI 1.2.
I've written a small MPI C program that performs some communication,
and although I receive some MSG_ARRIVED events, I'm losing some
events.
the configure with peruse enabled (at least, that is what config.log states).
Regards,
--
========
o//o Harald Servat Gelabert (harald at cepba dot upc dot edu)
o//o Centre Europeu de Paral.lelisme de Barcelona (