Gilles
On 2015/01/08 10:12, Ralph Castain wrote:
Hmmm…I confess this API gets little, if any, testing as it is so seldom used,
so it is quite possible that a buglet has crept into it. I’ll take a look and
try to have something in 1.8.5.
Thanks!
Ralph
On Jan 7, 2015, at 3:14 AM, Bernard Secher wrote:
Hello,
With the version openmpi-1.4.5 I got an error when I tried to publish the same name twice with the MPI_Publish_name routine.
With the version openmpi-1.8.4 I got no error when I published the same name twice with the MPI_Publish_name routine.
I used the attached script and source code to reproduce the problem.
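For reference, a minimal sketch of the behaviour being described (my own test case, not the attached one; the service name "my_service" is made up): publish the same name twice and look at the return code of the second call.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    int rc;

    MPI_Init(&argc, &argv);
    /* have errors returned instead of aborting, so the second call's code is visible */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    MPI_Open_port(MPI_INFO_NULL, port);
    rc = MPI_Publish_name("my_service", MPI_INFO_NULL, port);
    printf("first publish:  rc=%d\n", rc);
    rc = MPI_Publish_name("my_service", MPI_INFO_NULL, port);
    printf("second publish: rc=%d\n", rc);   /* reported as an error in 1.4.5, silently accepted in 1.8.4 */

    MPI_Unpublish_name("my_service", MPI_INFO_NULL, port);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}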
Hello,
This feature is very important for my project, which manages the coupling of parallel codes.
Please fix this bug as soon as possible.
Best
Bernard
Open MPI wrote:
#2681: ompi-server publish name broken in 1.5.x
Bcast complete: srv=1
Server calling MPI_Comm_accept
Bcast complete: srv=1
Server calling MPI_Comm_accept
[hang -- because everyone's in accept, not connect]
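A hedged illustration of that diagnosis (the function name, rendezvous, and the service string are mine, not from the original test): the two sides have to be asymmetric, with one program publishing and accepting while the other looks the name up and connects. If both programs take the accept branch, as in the trace above, they wait for each other forever.

#include <mpi.h>

/* am_server should be 1 in exactly one of the two programs, 0 in the other */
void rendezvous(int am_server, int myrank, MPI_Comm *intercomm)
{
    char port[MPI_MAX_PORT_NAME] = "";

    if (myrank == 0) {
        if (am_server) {
            MPI_Open_port(MPI_INFO_NULL, port);
            MPI_Publish_name("demo_service", MPI_INFO_NULL, port);
        } else {
            MPI_Lookup_name("demo_service", MPI_INFO_NULL, port);
        }
    }
    /* collective over MPI_COMM_WORLD; only the root's port argument matters */
    if (am_server)
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, intercomm);
    else
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, intercomm);
}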
On Jan 7, 2011, at 4:17 AM, Bernard Secher - SFME/LGLS wrote:
Jeff,
Only the processes of the program where process 0 succeeded
The accept and connect tests are OK with version openmpi 1.4.1.
I think there is a bug in version 1.5.1.
Best
Bernard
Bernard Secher - SFME/LGLS wrote:
Jeff,
I get the same deadlock with the openmpi tests pubsub, accept and connect, with version 1.5.1.
Bernard Secher - SFME/LGLS wrote:
Jeff,
The deadlock is not in MPI_Comm_accept and MPI_Comm_connect, but earlier, in MPI_Publish_name and MPI_Lookup_name.
So the broadcast of srv is not involved in the deadlock.
Best
Bernard
Bernard Secher - SFME/LGLS wrote:
Jeff,
Only the processes of the program where process 0 succeeded
Is it different with openmpi 1.5.1?
Best
Bernard
Jeff Squyres wrote:
On Jan 5, 2011, at 10:36 AM, Bernard Secher - SFME/LGLS wrote:
MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char *service)
{
    int clt = 0;
    MPI_Request request;   /* request for non-blocking communication */
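The function is cut off above; here is a hedged reconstruction of what such a remoteConnect() helper typically looks like (my sketch of the pattern, not Bernard's actual code): rank 0 tries MPI_Lookup_name() first and becomes the client if the service is already published, otherwise it publishes a port and the program becomes the server; the srv flag is then broadcast so all ranks take the same accept/connect branch. Note that if both programs fail the lookup at the same time, both publish and both block in MPI_Comm_accept, which is exactly the hang shown in the trace above.

#include <mpi.h>

MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char *service)
{
    MPI_Comm intercomm;

    if (myrank == 0) {
        /* return the lookup error instead of aborting */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
        if (MPI_Lookup_name(service, MPI_INFO_NULL, port_name) == MPI_SUCCESS) {
            *srv = 0;                              /* already published: be the client */
        } else {
            MPI_Open_port(MPI_INFO_NULL, port_name);
            MPI_Publish_name(service, MPI_INFO_NULL, port_name);
            *srv = 1;                              /* nothing there yet: be the server */
        }
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);
    }

    /* every rank needs the decision; port_name is assumed to hold MPI_MAX_PORT_NAME chars */
    MPI_Bcast(srv, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(port_name, MPI_MAX_PORT_NAME, MPI_CHAR, 0, MPI_COMM_WORLD);

    if (*srv)
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &intercomm);
    else
        MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &intercomm);

    return intercomm;
}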
Is it a bug in openmpi V1.5.1?
Bernard
Bernard Secher - SFME/LGLS wrote:
Hello,
What are the changes between openMPI 1.4.1 and 1.5.1 regarding the MPI-2 name publishing service?
I have 2 programs which connect to each other via the MPI_Publish_name and MPI_Lookup_name subroutines and ompi-server.
That's OK with the 1.4.1 version, but I have a deadlock with the 1.5.1 version inside these subroutines.
…then you can compile it with the compiler option -fopenmp (in gcc).
Jody
On Thu, Dec 16, 2010 at 11:56 AM, Bernard Secher - SFME/LGLS wrote:
I get the following error message when I compile openmpi V1.5.1:
  CXX otfprofile-otfprofile.o
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:11:18: error: omp.h: No such file or directory
On Fri, Jan 23, 2009 at 11:08 AM, Bernard Secher - SFME/LGLS wrote:
Thanks Jody for your answer.
I launch 2 instances of my program, with 2 processes per instance, on the same machine.
I use MPI_Publish_name, MPI_Lookup_name to create a global communicator on
the 4 processes.
Then the 4 processes exchange data.
The main pr
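A hedged sketch (my illustration, not the code from the program described above; the function name make_global_comm is made up) of how the two 2-process jobs can turn the intercommunicator returned by MPI_Comm_accept/MPI_Comm_connect into one global communicator over all 4 processes:

#include <mpi.h>

/* 'intercomm' comes from MPI_Comm_accept or MPI_Comm_connect; 'high' should
 * be 0 in one program and 1 in the other so the merged ranks are ordered
 * consistently. */
MPI_Comm make_global_comm(MPI_Comm intercomm, int high)
{
    MPI_Comm global;
    MPI_Intercomm_merge(intercomm, high, &global);
    return global;   /* a single intracommunicator spanning the 4 processes */
}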
…MPI_Finalize should solve the problem.
george.
On Jan 23, 2009, at 06:00, Bernard Secher - SFME/LGLS wrote:
No, I didn't run this program with Open MPI 1.2.X, because someone told me there were many changes between the 1.2.X version and the 1.3 version regarding MPI_Publish_name and MPI_Lookup_name.
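The first sentence of the reply above is cut off; assuming the advice is about tearing the connection down explicitly before finalizing, here is a hedged sketch of that cleanup order (the function and variable names are mine):

#include <mpi.h>

void shutdown_connection(MPI_Comm *global, MPI_Comm *intercomm,
                         int srv, const char *service, char *port_name)
{
    if (*global != MPI_COMM_NULL)
        MPI_Comm_free(global);            /* the merged intracommunicator */
    if (*intercomm != MPI_COMM_NULL)
        MPI_Comm_disconnect(intercomm);   /* break the link between the two jobs */
    if (srv) {
        MPI_Unpublish_name(service, MPI_INFO_NULL, port_name);
        MPI_Close_port(port_name);
    }
    MPI_Finalize();
}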
…that all MPI_Sends are matched by corresponding MPI_Recvs.
Jody
…doesn't work", nobody can give you any help whatsoever.
Jody
On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS wrote:
Hello Jeff,
I don't understand what you mean by "A _detailed_ description of what is failing".
The problem is a deadlock in MPI_Finalize.
…doesn't work; so please include as much information detailed in your initial e-mail as possible."
Additionally:
"The best way to get help is to provide a "recipie" for reproducing
the problem."
Thanks!
On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote:
Hello Tim,
I am sending you the information in the attached files.
Bernard
Tim Mattox wrote:
Can you send all the information listed here:
http://www.open-mpi.org/community/help/
On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS wrote:
Hello,
I have a case where I have a deadlock in the MPI_Finalize() function with openMPI v1.3.
Can somebody help me please?
Bernard
Hello,
I have the following error at the beginning of my MPI code:
[is124684:07869] [[38040,0],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file orted/orted_comm.c at line 448
Can anybody help me to solve this problem?
Bernard
…the openmpi-mca-params.conf file, if you want.
Ralph
On Jan 6, 2009, at 4:36 AM, Bernard Secher - SFME/LGLS wrote:
Hello,
I took the 1.3 version from the SVN repository.
The default hostfile in etc/openmpi-default-hostfile is not picked up; I must pass the -hostfile option to mpirun for this file to be used. Is there any change in the 1.3 version?
Regards
Bernard
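If it helps, the two workarounds mentioned in this thread can be written down roughly as follows (assuming the MCA parameter is named orte_default_hostfile, please double-check with ompi_info; paths and ./mycode are placeholders): either pass the hostfile explicitly,

mpirun -hostfile <prefix>/etc/openmpi-default-hostfile -np 4 ./mycode

or set it once in <prefix>/etc/openmpi-mca-params.conf:

orte_default_hostfile = <prefix>/etc/openmpi-default-hostfile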
I have the same problem with the 1.2.9rc1 version.
I don't see any orte-clean utility in this version.
But the best option is for me to use the 1.3 version. Please give me more details about ompi-server in the 1.3 version.
Regards
Bernard
Bernard Secher - SFME/LGLS wrote:
I first used the 1.2.5 version, then…
Regards,
Aurelien
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321
On Dec 10, 2008, at 10:28, Bernard Secher - SFME/LGLS wrote:
Hi everybody,
I want to use the MPI_Publish_name function to do communication between two independent codes.
I saw on the web that I must use the orted daemon with the following command:
orted --persistent --seed --scope public --universe foo
The communication succeeds, but when I close the communicator…