Re: [OMPI users] Are the Messages delivered in order in the MPI?

2012-01-25 Thread George Bosilca
Mateus,

MPI guarantees message ordering per communicator, per peer. In other words, any 
message going from peer A to peer B in the same communicator will be 
__matched__ on the receiver in the exact same order as it was sent (this 
remains true even for multi-threaded libraries). MPI does not mandate any other 
type of ordering, such as between communicators or between different pairs of 
processes.

Now, what I previously said is only true for the matching logic. Completion of 
message reception is a totally different thing.
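
For illustration, a minimal sketch of what this matching guarantee looks like in code (assuming two ranks on MPI_COMM_WORLD; the buffer sizes and the tag are arbitrary):

  #include <mpi.h>
  #include <vector>
  #include <cstdio>

  // Rank 0 sends a long message followed by a short one on the same
  // communicator and tag; rank 1 posts its receives in the same order.
  // The first receive matches the first (long) send, even if the short
  // message's bytes physically arrive earlier.
  int main(int argc, char* argv[]) {
      MPI::Init(argc, argv);
      const int rank = MPI::COMM_WORLD.Get_rank();
      const int TAG = 7;

      if (rank == 0) {
          std::vector<int> big(1 << 20, 1);   // "long" message
          int small = 2;                      // "short" message
          MPI::COMM_WORLD.Send(&big[0], static_cast<int>(big.size()), MPI::INT, 1, TAG);
          MPI::COMM_WORLD.Send(&small, 1, MPI::INT, 1, TAG);
      } else if (rank == 1) {
          std::vector<int> first(1 << 20);
          int second = 0;
          MPI::COMM_WORLD.Recv(&first[0], static_cast<int>(first.size()), MPI::INT, 0, TAG);  // matches the long send
          MPI::COMM_WORLD.Recv(&second, 1, MPI::INT, 0, TAG);                                 // matches the short send
          std::printf("second message payload = %d\n", second);
      }
      MPI::Finalize();
      return 0;
  }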

  George.



On Jan 24, 2012, at 23:53, Mateus Augusto  wrote:

> After reading http://blogs.cisco.com/performance/more_traffic/, I understood 
> that if a large message is sent and then a short message is sent, the short 
> message can arrive first. But if the messages have the same size, and are small 
> enough that no fragmentation occurs, is the delivery order then guaranteed?
> 
> 


Re: [OMPI users] pure static "mpirun" launcher

2012-01-25 Thread Ilias Miroslav
Hello again,

I need my own static "mpirun" for porting (together with the static executable) 
onto various (unknown) grid servers. In grid computing one cannot expect an 
OpenMPI-ILP64 installation on each computing element.

Jeff: I tried LDFLAGS in configure

ilias@194.160.135.47:~/bin/ompi-ilp64_full_static/openmpi-1.4.4/../configure \
  --prefix=/home/ilias/bin/ompi-ilp64_full_static -without-memory-manager \
  --without-libnuma --enable-static --disable-shared \
  CXX=g++ CC=gcc F77=gfortran FC=gfortran \
  FFLAGS="-m64 -fdefault-integer-8 -static" FCFLAGS="-m64 -fdefault-integer-8 -static" \
  CFLAGS="-m64 -static" CXXFLAGS="-m64 -static" LDFLAGS="-static -Wl,-E"

but I still got a dynamic, not a static, "mpirun":
ilias@194.160.135.47:~/bin/ompi-ilp64_full_static/bin/.ldd ./mpirun
linux-vdso.so.1 =>  (0x7fff6090c000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fd7277cf000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7fd7275b7000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7fd7273b3000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fd727131000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fd726f15000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fd726b9)
/lib64/ld-linux-x86-64.so.2 (0x7fd7279ef000)

Any help, please? The config.log is here:

https://docs.google.com/open?id=0B8qBHKNhZAipNTNkMzUxZDEtNjJmZi00YzY3LWI4MmYtY2RkZDVkMjhiOTM1

Best, Miro
--
Date: Tue, 24 Jan 2012 11:55:21 -0500
From: Jeff Squyres
Subject: Re: [OMPI users] pure static "mpirun" launcher
To: Open MPI Users

Ilias: Have you simply tried building Open MPI with flags that force static 
linking?  E.g., something like this:

  ./configure --enable-static --disable-shared LDFLAGS=-Wl,-static

I.e., put in LDFLAGS whatever flags your compiler/linker needs to force static 
linking.  These LDFLAGS will be applied to all of Open MPI's executables, 
including mpirun.


On Jan 24, 2012, at 10:28 AM, Ralph Castain wrote:

> Good point! I'm traveling this week with limited resources, but will try to 
> address when able.
>
> Sent from my iPad
>
> On Jan 24, 2012, at 7:07 AM, Reuti  wrote:
>
>> Am 24.01.2012 um 15:49 schrieb Ralph Castain:
>>
>>> I'm a little confused. Building procs static makes sense as libraries may 
>>> not be available on compute nodes. However, mpirun is only executed in one 
>>> place, usually the head node where it was built. So there is less reason to 
>>> build it purely static.
>>>
>>> Are you trying to move mpirun somewhere? Or is it the daemons that mpirun 
>>> launches that are the real problem?
>>
>> This depends: with a queuing system, the master node of a parallel 
>> job may already be one of the slave nodes, where the jobscript runs. 
>> My nodes are uniform, but I have seen sites where that wasn't the 
>> case.
>>
>> An option would be to have a special queue which always executes the jobscript 
>> on the headnode (i.e. without generating any load) and to use only 
>> non-locally granted slots for mpirun. For this it might be necessary to have a 
>> high number of slots on the headnode for this queue, and to always request one 
>> slot on this machine in addition to the necessary ones on the compute nodes.
>>
>> -- Reuti
>>
>>
>>> Sent from my iPad
>>>
>>> On Jan 24, 2012, at 5:54 AM, Ilias Miroslav  wrote:
>>>
 Dear experts,

 following http://www.open-mpi.org/faq/?category=building#static-build I 
 successfully built a static OpenMPI library.
 Using this library I succeeded in building a parallel static 
 executable - dirac.x (ldd dirac.x: not a dynamic executable).

 The problem remains, however, with the mpirun (orterun) launcher.
 While on the local machine, where I compiled both the static OpenMPI and the static 
 dirac.x, I am able to launch a parallel job with
 mpirun -np 2 dirac.x ,
 I cannot launch it elsewhere, because "mpirun" is dynamically linked and thus 
 machine dependent:

 ldd mpirun:
 linux-vdso.so.1 =>  (0x7fff13792000)
 libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f40f8cab000)
 libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f40f8a93000)
 libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7f40f888f000)
 libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f40f860d000)
 libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f40f83f1000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f40f806c000)
 /lib64/ld-linux-x86-64.so.2 (0x7f40f8ecb000)

 Please, how do I build a "pure" static mpirun launcher, usable (in my case 
 together with the static dirac.x) also on other computers?

 Thanks, Miro

 --
 RNDr. Miroslav Iliaš, PhD.


[OMPI users] cannot call member function 'virtual void MPI::MPI::Datatype::Commit()' without an object

2012-01-25 Thread Victor Pomponiu
Hi,

I cannot call the MPI::Datatype::Commit() and MPI::Datatype::Get_size()
functions from my program. The error that I receive is the same for both of
them:

"cannot call member function 'virtual void MPI::Datatype::Commit()' without
an object
or
"cannot call member function 'virtual void MPI::Datatype::Get_size()'
without an object

If I provide an input parameter to them, I receive this error:

e.g.,
>MPI::Datatype::Commit(MPIVecDataBlock)

'no matching function for call to ‘MPI::Datatype::Commit(MPI::Datatype&)’


However, MPI::Datatype::Create_struct() can be found.

Can anyone tell me how to solve this issue?


Thanks
V


Re: [OMPI users] Are the Messages delivered in order in the MPI?

2012-01-25 Thread Jeff Squyres
The tag also factors in here.  What I said in the blog entry was:

"The MPI specification doesn’t define which message arrives first.  It defines 
which message is matched first at the receiver: the first one (which happens to 
be the long one).  Specifically, between a pair of peers, MPI defines that 
messages sent on the same communicator and tag will be matched at the receiver 
in the same relative order."
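
As a small illustration (the tags and payloads here are arbitrary, not from the blog post), the flip side of that rule: matching is per communicator, tag, and sender, so a receive posted for a different tag may legally match the later message first:

  #include <mpi.h>
  #include <cstdio>

  int main(int argc, char* argv[]) {
      MPI::Init(argc, argv);
      const int rank = MPI::COMM_WORLD.Get_rank();
      int a = 1, b = 2;

      if (rank == 0) {
          // Two non-blocking sends to rank 1 on different tags.
          MPI::Request ra = MPI::COMM_WORLD.Isend(&a, 1, MPI::INT, 1, 1);  // sent first, tag 1
          MPI::Request rb = MPI::COMM_WORLD.Isend(&b, 1, MPI::INT, 1, 2);  // sent second, tag 2
          ra.Wait();
          rb.Wait();
      } else if (rank == 1) {
          int x = 0, y = 0;
          // Matching is by tag: this receive skips the tag-1 message and
          // matches the tag-2 message, even though it was sent later.
          MPI::COMM_WORLD.Recv(&y, 1, MPI::INT, 0, 2);
          MPI::COMM_WORLD.Recv(&x, 1, MPI::INT, 0, 1);
          std::printf("received tag 2 first: %d, then tag 1: %d\n", y, x);
      }
      MPI::Finalize();
      return 0;
  }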


On Jan 25, 2012, at 1:20 AM, George Bosilca wrote:

> Mateus,
> 
> MPI guarantees message ordering per communicator, per peer. In other words, any 
> message going from peer A to peer B in the same communicator will be 
> __matched__ on the receiver in the exact same order as it was sent (this 
> remains true even for multi-threaded libraries). MPI does not mandate any 
> other type of ordering, such as between communicators or between different 
> pairs of processes.
> 
> Now, what I previously said is only true for the matching logic. Completion 
> of message reception is a totally different thing.
> 
>   George.
> 
> 
> 
> On Jan 24, 2012, at 23:53, Mateus Augusto  wrote:
> 
>> After reading http://blogs.cisco.com/performance/more_traffic/, I understood 
>> that if a large message is sent and then a short message is sent, the short 
>> message can arrive first. But if the messages have the same size, and are small 
>> enough that no fragmentation occurs, is the delivery order then guaranteed?
>> 
>> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] openib btl and MPI_THREAD_MULTIPLE

2012-01-25 Thread Yevgeny Kliteynik
On 24-Jan-12 5:59 PM, Ronald Heerema wrote:
> I was wondering if anyone can comment on the current state of support for the 
> openib btl when MPI_THREAD_MULTIPLE is enabled.

Short version - it's not supported.
Longer version - no one has really spent time testing it and fixing all
the places where this parameter breaks things, primarily due to lack of
demand.
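
For completeness, a minimal sketch (not from the original exchange) of how an application can check at runtime which threading level was actually granted and fall back when full multi-threading is unavailable:

  #include <mpi.h>
  #include <cstdio>

  int main(int argc, char* argv[]) {
      // Request full multi-threading; the library reports what it can provide.
      int provided = MPI::Init_thread(argc, argv, MPI_THREAD_MULTIPLE);
      if (provided < MPI_THREAD_MULTIPLE) {
          // The runtime (or the selected BTL) does not support concurrent MPI
          // calls from several threads; restrict MPI calls to a single thread.
          std::printf("MPI_THREAD_MULTIPLE not granted (provided = %d)\n", provided);
      }
      MPI::Finalize();
      return 0;
  }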

-- YK


> 
> regards,
> Ron Heerema
> 
> 
> 



Re: [OMPI users] Openmpi in Mingw

2012-01-25 Thread Shiqing Fan

Hi,

Are you using 32-bit or 64-bit Windows? Because as far as I know, the 
build for 64-bit Windows with MinGW is not working. Which CMake 
Generator did you use? Did you run CMake from the MSYS command window?


Thanks,
Shiqing

On 2012-01-24 9:24 PM, Temesghen Kahsai wrote:

Hello,

I am having trouble compiling Open MPI (version 1.5.5rc2r25765 - nightly 
build) with MinGW. I am running Windows 7 and the latest version of MinGW.

I keep on getting the following error:

In file included from ../../opal/include/opal_config_bottom.h:258:0,
                 from ../../opal/include/opal_config.h:2327,
                 from asm.c:19:
../../opal/win32/win_compat.h:93:14: error: conflicting types for 'ssize_t'
c:\mingw\bin\../lib/gcc/mingw32/4.6.2/../../../../include/sys/types.h:118:18: note: previous declaration of 'ssize_t' was here

In file included from ../../opal/include/opal_config_bottom.h:258:0,
                 from ../../opal/include/opal_config.h:2327,
                 from asm.c:19:
../../opal/win32/win_compat.h:321:0: warning: "OPAL_HAVE_HWLOC" redefined [enabled by default]
../../opal/include/opal_config.h:1876:0: note: this is the location of the previous definition

In file included from ../../opal/include/opal_config.h:2327:0,
                 from asm.c:19:
../../opal/include/opal_config_bottom.h:559:0: warning: "PF_UNSPEC" redefined [enabled by default]
c:\mingw\bin\../lib/gcc/mingw32/4.6.2/../../../../include/winsock2.h:368:0: note: this is the location of the previous definition
../../opal/include/opal_config_bottom.h:562:0: warning: "AF_INET6" redefined [enabled by default]
c:\mingw\bin\../lib/gcc/mingw32/4.6.2/../../../../include/winsock2.h:329:0: note: this is the location of the previous definition
../../opal/include/opal_config_bottom.h:565:0: warning: "PF_INET6" redefined [enabled by default]
c:\mingw\bin\../lib/gcc/mingw32/4.6.2/../../../../include/winsock2.h:392:0: note: this is the location of the previous definition



Thank you.

T




--
---
Shiqing Fan
High Performance Computing Center Stuttgart (HLRS)
Tel: ++49(0)711-685-87234  Nobelstrasse 19
Fax: ++49(0)711-685-65832  70569 Stuttgart
http://www.hlrs.de/organization/people/shiqing-fan/
email: f...@hlrs.de



Re: [OMPI users] cannot call member function 'virtual void MPI::MPI::Datatype::Commit()' without an object

2012-01-25 Thread Jeff Squyres
On Jan 25, 2012, at 5:03 AM, Victor Pomponiu wrote:

> I cannot call the MPI::Datatype::Commit() and MPI::Datatype::Get_size() functions 
> from my program. The error that I receive is the same for both of them:
> 
> "cannot call member function 'virtual void MPI::Datatype::Commit()' without 
> an object
> or 
> "cannot call member function 'virtual void MPI::Datatype::Get_size()' without 
> an object
> 
> If I provide an input parameter to them, I receive this error:
> 
> e.g.,
> >MPI::Datatype::Commit(MPIVecDataBlock)
> 
> 'no matching function for call to ‘MPI::Datatype::Commit(MPI::Datatype&)’

IIRC, these are member functions.  So you'd call my_datatype.Commit() and 
my_datatype.Get_size().

> However, MPI::Datatype::Create_struct() can be found. 

This is a static function that is just in the MPI namespace -- it is not a 
member function on an object instance.
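
A minimal sketch tying the two together (the struct and its layout are made up for illustration): create the datatype with the static factory function, then commit and query it through the resulting object:

  #include <mpi.h>
  #include <cstddef>
  #include <iostream>

  struct Block { int id; double values[4]; };

  int main(int argc, char* argv[]) {
      MPI::Init(argc, argv);

      int blocklengths[2]        = { 1, 4 };
      MPI::Aint displacements[2] = { offsetof(Block, id), offsetof(Block, values) };
      MPI::Datatype types[2]     = { MPI::INT, MPI::DOUBLE };

      // Static factory call in the MPI::Datatype class...
      MPI::Datatype block_type =
          MPI::Datatype::Create_struct(2, blocklengths, displacements, types);

      // ...but Commit() and Get_size() are member functions on the object.
      block_type.Commit();
      std::cout << "committed type size = " << block_type.Get_size() << " bytes" << std::endl;

      block_type.Free();
      MPI::Finalize();
      return 0;
  }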

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] MPI_Comm_split and intercommunicator - Problem

2012-01-25 Thread Rodrigo Oliveira
Hi Thatyene,

I took a look at your code and it seems to be logically correct. Maybe
there is some problem when you call the split function with one client
process having color = MPI_UNDEFINED. I understood you are trying to isolate
one of the client processes to do something applicable only to it, am I
wrong? According to the Open MPI documentation, this function can be used to do
that, but it is not working. Does anyone have any idea what the cause could be?

Best regards

Rodrigo Oliveira

On Mon, Jan 23, 2012 at 4:53 PM, Thatyene Louise Alves de Souza Ramos <
thaty...@dcc.ufmg.br> wrote:

> Hi there!
>
> I've been trying to use the MPI_Comm_split function on an
> intercommunicator, but without success. My application is very simple
> and consists of a server that spawns 2 clients. After that, I want to split
> the intercommunicator between the server and the clients so that one client
> ends up not connected to the server.
>
> The processes block in the split call and do not return. Can anyone help
> me?
>
> == Simplified server code ==
>
> int main( int argc, char *argv[] ) {
>
>     MPI::Intracomm spawn_communicator = MPI::COMM_SELF;
>     MPI::Intercomm group1;
>
>     MPI::Init(argc, argv);
>     group1 = spawn_client( /* spawns 2 processes and returns the intercommunicator with them */ );
>
>     /* Tries to split the intercommunicator */
>     int color = 0;
>     MPI::Intercomm new_G1 = group1.Split(color, 0);
>     group1.Free();
>     group1 = new_G1;
>
>     cout << "server after splitting - size G1 = " << group1.Get_remote_size() << endl << endl;
>
>     MPI::Finalize();
>     return 0;
> }
>
> == Simplified client code ==
>
> int main( int argc, char *argv[] ) {
>
>     MPI::Intracomm group_communicator;
>     MPI::Intercomm parent;
>     int group_rank;
>     int color;
>
>     MPI::Init(argc, argv);
>     parent = MPI::Comm::Get_parent();
>     group_communicator = MPI::COMM_WORLD;
>     group_rank = group_communicator.Get_rank();
>
>     if (group_rank == 0) {
>         color = 0;
>     }
>     else {
>         color = MPI_UNDEFINED;
>     }
>
>     MPI::Intercomm new_parent = parent.Split(color, group_rank);
>
>     if (new_parent != MPI::COMM_NULL) {
>         parent.Free();
>         parent = new_parent;
>     }
>
>     group_communicator.Free();
>     parent.Free();
>     MPI::Finalize();
>     return 0;
> }
>
> Thanks in advance.
>
> Thatyene Ramos
>
>


Re: [OMPI users] MPI_Comm_split and intercommunicator - Problem

2012-01-25 Thread Thatyene Louise Alves de Souza Ramos
It seems the split blocks when it must return MPI_COMM_NULL, that is, when
I have one process with a color that does not exist in the other group or
with color = MPI_UNDEFINED.
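
A quick sanity check (a sketch, not code from the application above) is to do the same kind of split on an intracommunicator, where a process passing MPI_UNDEFINED should simply get MPI::COMM_NULL back; if that works, the hang is isolated to the intercommunicator code path:

  #include <mpi.h>
  #include <iostream>

  int main(int argc, char* argv[]) {
      MPI::Init(argc, argv);
      const int rank = MPI::COMM_WORLD.Get_rank();

      // Only rank 0 joins the new communicator; everyone else opts out.
      const int color = (rank == 0) ? 0 : MPI_UNDEFINED;
      MPI::Intracomm sub = MPI::COMM_WORLD.Split(color, rank);

      if (sub == MPI::COMM_NULL)
          std::cout << "rank " << rank << ": got MPI::COMM_NULL, as expected for MPI_UNDEFINED" << std::endl;
      else
          sub.Free();

      MPI::Finalize();
      return 0;
  }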

On Wed, Jan 25, 2012 at 4:28 PM, Rodrigo Oliveira  wrote:

> Hi Thatyene,
>
> I took a look at your code and it seems to be logically correct. Maybe
> there is some problem when you call the split function with one client
> process having color = MPI_UNDEFINED. I understood you are trying to isolate
> one of the client processes to do something applicable only to it, am I
> wrong? According to the Open MPI documentation, this function can be used to do
> that, but it is not working. Does anyone have any idea what the cause could be?
>
> Best regards
>
> Rodrigo Oliveira
>
>