Re: [OMPI users] Building OpenMPI on Windows 7

2011-03-22 Thread hi
Hi Shiqing,

While building my application (on Windows 7, Visual Studio 2008, 32-bit
application) with openmpi-1.5.2, I am getting the following error...

util.o : error LNK2001: unresolved external symbol _ompi_mpi_byte
util.o : error LNK2001: unresolved external symbol _ompi_mpi_op_max
util.o : error LNK2001: unresolved external symbol _ompi_mpi_int
util.o : error LNK2001: unresolved external symbol _ompi_mpi_char
util.o : error LNK2001: unresolved external symbol _ompi_mpi_comm_world
util.o : error LNK2001: unresolved external symbol _ompi_mpi_op_sum
Linking options...
/LIBPATH:""c:\openmpi-1.5.2\installed"/lib/" libmpi_cxxd.lib libmpid.lib
libmpi_f77d.lib libopen-pald.lib libopen-rted.lib

It seems that 'dllexport' is missing for the above symbols.

Thank you.
-Hiral


On Fri, Mar 18, 2011 at 1:53 AM, Shiqing Fan  wrote:

> Hi Hiral,
>
>
>
> > There are no f90 bindings at the moment for Windows.
> Any idea when this will be available?
>
> At the moment, no. Only if there are strong requirements.
>
>
>
> Regards,
> Shiqing
>
>
> Thank you.
> -Hiral
>
> On Thu, Mar 17, 2011 at 5:21 PM, Shiqing Fan  wrote:
>
>>
>>  I tried building openmpi-1.5.2 on Windows 7 (in the environment
>> described below) with OMPI_WANT_F77_BINDINGS_ON and
>> OMPI_WANT_F90_BINDINGS_ON using "ifort".
>>
>> I observed that it generated mpif77.exe but didn't generate
>> mpif90.exe; any idea?
>>
>>
>> There are no f90 bindings at the moment for Windows.
>>
>>
>>  BTW: while using the above-generated mpif77.exe to compile hello_f77.f, I
>> got the following errors...
>>
>> c:\openmpi-1.5.2\examples> mpif77 hello_f77.f
>> Intel(R) Visual Fortran Compiler Professional for applications running on
>> IA-32,
>>  Version 11.1 Build 20100414 Package ID: w_cprof_p_11.1.065
>> Copyright (C) 1985-2010 Intel Corporation.  All rights reserved.
>> C:/openmpi-1.5.2/installed/include\mpif-config.h(91): error #5082: Syntax
>> error,
>>  found ')' when expecting one of: (  
>> > _KIND_PARAM>   ...
>>   parameter (MPI_STATUS_SIZE=)
>> -^
>> compilation aborted for hello_f77.f (code 1)
>>
>> It seems MPI_STATUS_SIZE is not set. Could you please send your
>> CMakeCache.txt to me off the mailing list, so that I can check what is going
>> wrong? A quick workaround would be to just set it to 0.
>>
>>
>> Regards,
>> Shiqing
>>
>>  Thank you.
>> -Hiral
>>
>>
>> On Wed, Mar 16, 2011 at 8:11 PM, Damien  wrote:
>>
>>
>>> Hiral,
>>>
>>> To add to Shiqing's comments, 1.5 has been running great for me on
>>> Windows for over 6 months since it was in beta.  You should give it a try.
>>>
>>> Damien
>>>
>>> On 16/03/2011 8:34 AM, Shiqing Fan wrote:
>>>
>>> Hi Hiral,
>>>
>>>
>>>
>>> > it's only experimental in the 1.4 series. And there are only F77 bindings
>>> on Windows, no F90 bindings.
>>> Can you please provide steps to build 1.4.3 with experimental f77
>>> bindings on Windows?
>>>
>>> Well, I highly recommend using the 1.5 series, but I can also take a look
>>> and probably provide you with a patch for 1.4.
>>>
>>>
>>>
>>> BTW: Do you have any idea when the next stable release with full Fortran
>>> support on Windows will be available?
>>>
>>> There is no plan yet.
>>>
>>>
>>> Regards,
>>> Shiqing
>>>
>>>
>>>
>>>
>>> Thank you.
>>> -Hiral
>>>
>>> On Wed, Mar 16, 2011 at 6:59 PM, Shiqing Fan  wrote:
>>>
>>>
 Hi Hiral,

 1.3.4 is quite old, please use the latest version. As Damien noted, the
 full fortran support is in the 1.5 series; it's only experimental in the 1.4
 series. And there are only F77 bindings on Windows, no F90 bindings. Another
 choice is to use the released binary installers to avoid compiling everything
 by yourself.


 Best Regards,
 Shiqing

 On 3/16/2011 11:47 AM, hi wrote:

  Greetings!!!



 I am trying to build openmpi-1.3.4 and openmpi-1.4.3 on Windows 7
 (64-bit OS), but am running into some difficulty...



 My build environment:

 OS : Windows 7 (64-bit)

 C/C++ compiler : Visual Studio 2008 and Visual Studio 2010

 Fortran compiler: Intel "ifort"



 Approach: followed the "First Approach" described in README.WINDOWS
 file.



 1) Using openmpi-1.3.4:

 Observed a build-time error in version.cc(136). This error is related
 to getting SVN version information, as described in
 http://www.open-mpi.org/community/lists/users/2010/01/11860.php. As we
 are using this openmpi-1.3.4 stable version on the Linux platform, is there
 any fix for this compile-time error?



 2) Using openmpi-1.4.3:

 Builds properly without F77/F90 support (i.e., skipping the MPI F77
 interface).

 Now, to get the "mpif*.exe" for Fortran programs, I provided the proper
 "ifort" path and enabled the "OMPI_WANT_F77_BINDINGS=ON" and/or
 OMPI_WANT_F90_BINDINGS=ON flags, but got the following errors...

 2.a) "ifort" with OMPI_WANT_F77_BINDIN

Re: [OMPI users] Building OpenMPI on Windows 7

2011-03-22 Thread Shiqing Fan

Hi Hiral,

You have to add "OMPI_IMPORTS" as a preprocessor definition in your
project configuration. An easier way is to use the mpicc command line.


Please also take a look at the output of "mpicc --showme"; it will
give you the complete compile options.
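Both suggestions can be sketched roughly as follows; the paths and file
names are illustrative, not taken from the original posts:

```shell
# Option 1: define OMPI_IMPORTS so the MPI global objects
# (ompi_mpi_comm_world, ompi_mpi_int, ...) are declared as DLL imports,
# which resolves the LNK2001 errors above:
cl /DOMPI_IMPORTS /I"C:\openmpi-1.5.2\installed\include" util.c ^
   /link /LIBPATH:"C:\openmpi-1.5.2\installed\lib" libmpid.lib libmpi_cxxd.lib

# Option 2: let the wrapper compiler supply the right flags, and inspect them:
mpicc --showme
```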



Regards,
Shiqing

[OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
Dear All,

I am a newbie in parallel computing and would like to ask.

I have a switch and 2 laptops:

 1. Dell inspiron 640, dual core 2 gb ram
 2. Dell inspiron 1010 intel atom 1 gb ram


Both laptops run Ubuntu 10.04 on a wireless network using a TP-LINK
access point.

I am wondering if you have a tutorial and source-code demo of simple
parallel computing for 2 laptops performing simultaneous computation.

Riza


Re: [OMPI users] bizarre failure with IMB/openib

2011-03-22 Thread Dave Love
Dave Love  writes:

> I'm trying to test some new nodes with ConnectX adaptors, and failing to
> get (so far just) IMB to run on them.

I suspect this is https://svn.open-mpi.org/trac/ompi/ticket/1919.  I'm
rather surprised it isn't an FAQ (actually frequently asked, not meaning
someone should have written it up).



Re: [OMPI users] 1.5.3 and SGE integration?

2011-03-22 Thread Dave Love
Ralph Castain  writes:

>> Should rshd be mentioned in the release notes?
>
> Just starting the discussion on the best solution going forward. I'd
> rather not have to tell SGE users to add this to their cmd line. :-(

Sure.  I just thought a new component would normally be mentioned in the
notes.



Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3 (Tim Prince)

2011-03-22 Thread yanyg


Thank you very much for the comments and hints. I will try to
upgrade our Intel compiler collection. As for my second issue:
with Open MPI, is there any way to propagate environment variables
of the current process on the master node to the other slave nodes,
so that the orted daemon can run on the slave nodes too?

Thanks,
Yiguang

> On 3/21/2011 5:21 AM, ya...@adina.com wrote:
> 
> > I am trying to compile our codes with open mpi 1.4.3, by intel
> > compilers 8.1.
> >
> > (1) For open mpi 1.4.3 installation on linux beowulf cluster, I use:
> >
> > ./configure --prefix=/home/yiguang/dmp-setup/openmpi-1.4.3
> > CC=icc
> > CXX=icpc F77=ifort FC=ifort --enable-static LDFLAGS="-i-static -
> > static-libcxa" --with-wrapper-ldflags="-i-static -static-libcxa"
> > 2>&1 | tee config.log
> >
> > and
> >
> > make all install 2>&1 | tee install.log
> >
> > The issue is that I am trying to build open mpi 1.4.3 with intel
> > compiler libraries statically linked to it, so that when we run
> > mpirun/orterun, it does not need to dynamically load any intel
> > libraries. But what I got is mpirun always asks for some intel
> > library(e.g. libsvml.so) if I do not put intel library path on
> > library search path($LD_LIBRARY_PATH). I checked the open mpi user
> > archive, it seems only some kind user mentioned to use
> > "-i-static"(in my case) or "-static-intel" in ldflags, this is what
> > I did, but it seems not working, and I did not get any confirmation
> > whether or not this works for anyone else from the user archive.
> > could anyone help me on this? thanks!
> >
> 
> If you are to use such an ancient compiler (apparently a 32-bit one),
> you must read the docs which come with it, rather than relying on
> comments about a more recent version.  libsvml isn't included
> automatically at link time by that 32-bit compiler, unless you specify
> an SSE option, such as -xW. It's likely that no one has verified
> OpenMPI with a compiler of that vintage.  We never used the 32-bit
> compiler for MPI, and we encountered run-time library bugs for the
> ifort x86_64 which weren't fixed until later versions.
> 
> 
> -- 
> Tim Prince
> 
> 
> --



Re: [OMPI users] Is there an mca parameter equivalent to -bind-to-core?

2011-03-22 Thread Ralph Castain

On Mar 21, 2011, at 9:27 PM, Eugene Loh wrote:

> Gustavo Correa wrote:
> 
>> Dear OpenMPI Pros
>> 
>> Is there an MCA parameter that would do the same as the mpiexec switch 
>> '-bind-to-core'?
>> I.e., something that I could set up not in the mpiexec command line,
>> but for the whole cluster, or for an user, etc.
>> 
>> In the past I used '-mca mpi mpi_paffinity_alone=1'.

Must be a typo here - the correct command is '-mca mpi_paffinity_alone 1'

>> But that was before '-bind-to-core' came along.
>> However, my recollection of some recent discussions here in the list
>> is that the latter would not do the same as '-bind-to-core',
>> and that the recommendation was to use '-bind-to-core' in the mpiexec 
>> command line.

Just to be clear: mpi_paffinity_alone=1 still works and will cause the same 
behavior as bind-to-core.
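The cluster-wide or per-user setting asked about above can be achieved
outside the mpiexec command line; a sketch using the standard Open MPI
MCA-parameter mechanisms:

```shell
# Per-user MCA parameter file (read by every mpiexec run by this user):
echo "mpi_paffinity_alone = 1" >> ~/.openmpi/mca-params.conf

# System-wide equivalent lives in $prefix/etc/openmpi-mca-params.conf.

# Or as an environment variable for one shell session:
export OMPI_MCA_mpi_paffinity_alone=1
```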


>> 
> A little awkward, but how about
> 
> --bycore          rmaps_base_schedule_policy  core
> --bysocket        rmaps_base_schedule_policy  socket
> --bind-to-core    orte_process_binding        core
> --bind-to-socket  orte_process_binding        socket
> --bind-to-none    orte_process_binding        none
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] 1.5.3 and SGE integration?

2011-03-22 Thread Ralph Castain

On Mar 22, 2011, at 6:02 AM, Dave Love wrote:

> Ralph Castain  writes:
> 
>>> Should rshd be mentioned in the release notes?
>> 
>> Just starting the discussion on the best solution going forward. I'd
>> rather not have to tell SGE users to add this to their cmd line. :-(
> 
> Sure.  I just thought a new component would normally be mentioned in the
> notes.

You mean the rshd component? It probably should have been, but it slipped through
the cracks. All that component does is allow the direct rsh/ssh launch of MPI apps
instead of using the OMPI daemon. Only a few special systems use it, because
there is no way to know that a process died (since no daemon is monitoring it),
and thus no way to clean up if something goes wrong.


> 




Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3 (Tim Prince)

2011-03-22 Thread Ralph Castain
On a beowulf cluster? So you are using bproc?

If so, you have to use the OMPI 1.2 series - we discontinued bproc support at 
the start of 1.3. Bproc will take care of the envars.

If not bproc, then I assume you will use ssh for launching? Usually, the 
environment is taken care of by setting up your .bashrc (or equiv for your 
shell) on the remote nodes (which usually have a shared file system so all 
binaries are available on all nodes).
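Besides .bashrc, mpirun itself can export selected variables from the
launching environment to the remote processes with -x; the variable names
below are only examples:

```shell
# Export LD_LIBRARY_PATH and an application-specific variable to all ranks
# (-x may be repeated, once per variable):
mpirun -np 8 --hostfile hosts -x LD_LIBRARY_PATH -x MY_APP_CONFIG ./a.out
```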






[OMPI users] "Re: RoCE (IBoE) & OpenMPI"

2011-03-22 Thread Eli Cohen
Hi,
this discussion has been brought to my attention so I joined this
mailing list to try to help.
As you already stated that the SL maps correctly to PCP when using
ibv_rc_pingpong, I assume OpenMPI works over rdma_cm. In that case,
please note the following:
1. If you're using OFED-1.5.2, then if the rdma_cm socket is bound
to a VLAN net device, all egress traffic will bear a default priority of
3.
2. The default priority is controlled by a module parameter to
rdma_cm.ko named def_prec2sl.
3. You may change the priority on a per socket basis (overriding the
module parameter) by using setsockopt() to set the option
RDMA_OPTION_ID_TOS to the required value of the TOS.
4. The TOS is mapped to SL according to the following formula: SL = TOS >> 5

I hope that clears things up.
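The mapping in item 4 is easy to sanity-check; a small illustrative sketch
(the TOS values are chosen arbitrarily):

```shell
# SL = TOS >> 5: the top three bits of the 8-bit TOS byte select the SL
# (and hence, per the discussion above, the PCP bits on the wire).
tos_to_sl() { echo $(( ($1 & 255) >> 5 )); }

tos_to_sl 0     # -> 0
tos_to_sl 96    # -> 3
tos_to_sl 255   # -> 7
```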

> Late yesterday I did have a chance to test the patch Jeff provided
> (against 1.4.3 - testing 1.5.x is on the docket for today). While it
> works, in that I can specify a gid_index, it doesn't do everything
> required - my traffic won't match a lossless CoS on the ethernet
> switch. Specifying a GID is only half of it; I really need to also
> specify a service level.
> The bottom 3 bits of the IB SL are mapped to ethernet's PCP bits in
> the VLAN tag. With a non-default gid, I can select an available VLAN
> (so RoCE's packets will include the PCP bits), but the only way to
> specify a priority is to use an SL. So far, the only RoCE-enabled app
> I've been able to make work correctly (such that traffic matches a
> lossless CoS on the switch) is ibv_rc_pingpong - and then, I need to
> use both a specific GID and a specific SL.
> The slides Pavel found seem a little misleading to me. The VLAN isn't
> determined by bound netdev; all VLAN netdevs map to the same IB
> adapter for RoCE. VLAN is determined by gid index. Also, the SL
> isn't determined by a set kernel policy; it's provided via the IB
> interfaces. As near as I can tell from Mellanox's documentation, OFED
> test apps, and the driver source, a RoCE adapter is an Infiniband card
> in almost all respects (even more so than an iWARP adapter).


Re: [OMPI users] Displaying MAIN in Totalview

2011-03-22 Thread Jeff Squyres
Huh.  We hadn't had any reports of DDT issues.

Is it failing because MPIR_Breakpoint is physically not present in the library?


On Mar 21, 2011, at 2:50 PM, Dominik Goeddeke wrote:

> Hi,
> 
> for what it's worth: Same thing happens with DDT. OpenMPI 1.2.x runs fine, 
> later versions (at least 1.4.x and newer) let DDT bail out with "Could not 
> break at function MPIR_Breakpoint".
> 
> DDT has something like "OpenMPI (compatibility mode)" in its session launch 
> dialog, with this setting (instead of the default "OpenMPI") it works 
> flawlessly.
> 
> Dominik
> 
> 
> 
> On 03/21/2011 06:22 PM, Ralph Castain wrote:
>> Ick - appears that got dropped a long time ago. I'll add it back in and post 
>> a CMR for 1.4 and 1.5 series.
>> 
>> Thanks!
>> Ralph
>> 
>> 
>> On Mar 21, 2011, at 11:08 AM, David Turner wrote:
>> 
>>> Hi,
>>> 
>>> About a month ago, this topic was discussed with no real resolution:
>>> 
>>> http://www.open-mpi.org/community/lists/users/2011/02/15538.php
>>> 
>>> We noticed the same problem (TV does not display the user's MAIN
>>> routine upon initial startup), and contacted the TV developers.
>>> They suggested a simple OMPI code modification, which we implemented
>>> and tested; it seems to work fine.  Hopefully, this capability
>>> can be restored in future releases.
>>> 
>>> Here is the body of our communication with the TV developers:
>>> 
>>> --
>>> 
>>> Interestingly enough, someone else asked this very same question recently 
>>> and I finally dug into it last week and figured out what was going on. 
>>> TotalView publishes a public interface which allows any MPI implementor to 
>>> set things up so that it should work fairly seamless with TotalView. I 
>>> found that one of the defines in the interface is
>>> 
>>> MPIR_force_to_main
>>> 
>>> and when we find this symbol defined in mpirun (or orterun in Open MPI's 
>>> case) then we spend a bit more effort to focus the source pane on the main 
>>> routine. As you may guess, this is NOT being defined in OpenMPI 1.4.2. It 
>>> was being defined in the 1.2.x builds though, in a routine called 
>>> totalview.c. OpenMPI has been re-worked significantly since then, and 
>>> totalview.c has been replaced by debuggers.c in orte/tools/orterun. About 
>>> line 130 to 140 (depending on any changes since my look at the 1.4.1 
>>> sources) you should find a number of MPIR_ symbols being defined.
>>> 
>>> struct MPIR_PROCDESC *MPIR_proctable = NULL;
>>> int MPIR_proctable_size = 0;
>>> int MPIR_being_debugged = 0;
>>> volatile int MPIR_debug_state = 0;
>>> volatile int MPIR_i_am_starter = 0;
>>> volatile int MPIR_partial_attach_ok = 1;
>>> 
>>> 
>>> I believe you should be able to insert the line:
>>> 
>>> int MPIR_force_to_main = 0;
>>> 
>>> into this section, and then the behavior you are looking for should work 
>>> after you rebuild OpenMPI. I haven't yet had the time to do that myself, 
>>> but that was all that existed in the 1.2.x sources, and I know those 
>>> achieved the desired effect. It's quite possible that someone realized the 
>>> symbol was initialized but wasn't being used anyplace, so they just removed
>>> it, without realizing we were looking for it in the debugger. When I
>>> pointed this out to the other user, he said he would try it out and pass it 
>>> on to the Open MPI group. I just checked on that thread, and didn't see any 
>>> update, so I passed on the info myself.
>>> 
>>> --
>>> 
>>> -- 
>>> Best regards,
>>> 
>>> David Turner
>>> User Services Group    email: dptur...@lbl.gov
>>> NERSC Division         phone: (510) 486-4027
>>> Lawrence Berkeley Lab  fax: (510) 486-4316
> 
> 
> -- 
> Dr. Dominik Göddeke
> Institut für Angewandte Mathematik
> Technische Universität Dortmund
> http://www.mathematik.tu-dortmund.de/~goeddeke
> Tel. +49-(0)231-755-7218  Fax +49-(0)231-755-5933
> 
> 
> 
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Jeff Squyres
There's lots of good MPI tutorials on the web.

My favorites are at the NCSA web site; if you get a free account, you can log in
and see their course listings.




-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Displaying MAIN in Totalview

2011-03-22 Thread Dominik Goeddeke

did a quick check for a 1.5.2 build I have lying around (on an Ubuntu box):
$ nm libmpi.so | grep MPIR_Breakpoint
000537bf T MPIR_Breakpoint

Dominik

On 03/22/2011 03:35 PM, Jeff Squyres wrote:

Huh.  We hadn't had any reports of DDT issues.

Is it failing because MPIR_Breakpoint is physically not present in the library?







--
Dr. Dominik Göddeke
Institut für Angewandte Mathematik
Technische Universität Dortmund
http://www.mathematik.tu-dortmund.de/~goeddeke
Tel. +49-(0)231-755-7218  Fax +49-(0)231-755-5933







Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Prentice Bisbal
I'd like to point out that nothing special needs to be done just because
you're using a wireless network. As long as you're using TCP for your
message passing and have TCP/IP configured correctly, the underlying
link type won't make a difference.
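One practical caveat when a node has several interfaces (e.g. wired plus
wireless): the TCP BTL can be restricted to a specific interface. A sketch
using standard Open MPI MCA parameters; the interface name is an example:

```shell
# Use only the TCP, shared-memory, and self BTLs, over the wlan0 interface:
mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include wlan0 \
       -np 2 --hostfile hosts ./hello
```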


-- 
Prentice Bisbal
Linux Software Support Specialist/System Administrator
School of Natural Sciences
Institute for Advanced Study
Princeton, NJ


Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3

2011-03-22 Thread Jeff Squyres
On Mar 21, 2011, at 8:21 AM, ya...@adina.com wrote:

> The issue is that I am trying to build open mpi 1.4.3 with intel 
> compiler libraries statically linked to it, so that when we run 
> mpirun/orterun, it does not need to dynamically load any intel 
> libraries. But what I got is mpirun always asks for some intel 
> library(e.g. libsvml.so) if I do not put intel library path on library 
> search path($LD_LIBRARY_PATH). I checked the open mpi user 
> archive, it seems only some kind user mentioned to use
> "-i-static"(in my case) or "-static-intel" in ldflags, this is what I did,
> but it seems not working, and I did not get any confirmation whether 
> or not this works for anyone else from the user archive. could 
> anyone help me on this? thanks!

Is it Open MPI's executables that require the intel shared libraries at run 
time, or your application?  Keep in mind the difference:

1. Compile/link flags that you specify to OMPI's configure script are used to 
compile/link Open MPI itself (including executables such as mpirun).

2. mpicc (and friends) use a similar-but-different set of flags to compile and 
link MPI applications.  Specifically, we try to use the minimal set of flags 
necessary to compile/link, and let the user choose to add more flags if they 
want to.  See this FAQ entry for more details:

http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0

> (2) After compiling and linking our in-house codes  with open mpi 
> 1.4.3, we want to make a minimal list of executables for our codes 
> with some from open mpi 1.4.3 installation, without any dependent 
> on external setting such as environment variables, etc.
> 
> I orgnize my directory as follows:
> 
> parent/
>   |-- package/
>   |-- bin/
>   |-- lib/
>   |-- tools/
> 
> In package/ directory are executables from our codes. bin/ has 
> mpirun and orted, copied from openmpi installation. lib/ includes 
> open mpi libraries, and intel libraries. tools/ includes some c-shell 
> scripts to launch mpi jobs, which uses mpirun in bin/.

FWIW, you can pass the following options to Open MPI's configure script to 
eliminate all the OMPI plugins (i.e., all of that code is located in libmpi 
and friends instead of in standalone DSOs):

--disable-shared --enable-static

This will make libmpi.a (vs. libmpi.so and a bunch of plugins) which your 
application can statically link against.  But it does make a larger executable. 
 Alternatively, you can:

--disable-dlopen

(instead of disable-shared/enable-static) which will make a giant libmpi.so 
(vs. libmpi.so and all the plugin DSOs).  So your MPI app will still 
dynamically link against libmpi, but all the plugins will be physically located 
in libmpi.so vs. being dlopen'ed at run time.
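A sketch of the two build variants described above; the source directory and
install prefixes are illustrative placeholders, not taken from the original
post:

```shell
cd openmpi-1.4.3

# Variant 1: fully static libmpi.a -- no plugin DSOs at all, but larger
# application executables.
./configure --prefix=/opt/openmpi-static \
    --disable-shared --enable-static
make all install

# Variant 2: one large libmpi.so with every plugin compiled in -- the MPI
# app still links dynamically against libmpi, but nothing is dlopen'ed at
# run time.
make distclean
./configure --prefix=/opt/openmpi-nodlopen \
    --disable-dlopen
make all install
```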

> The parent/ directory is on a NFS shared by all nodes of the 
> cluster. In ~/.bashrc(shared by all nodes too), I clear PATH and 
> LD_LIBRARY_PATH without direct to any directory of open mpi 
> 1.4.3 installation. 
> 
> First, if I set above bin/ directory  to PATH and lib/ 
> LD_LIBRARY_PATH in ~/.bashrc, our parallel codes(starting by the 
> C shell script in tools/) run AS EXPECTED without any problem, so 
> that I set other things right.
> 
> Then again, to avoid modifying ~/.bashrc or ~/.profile, I set bin/ to 
> PATH and lib/ to LD_LIBRARY_PATH in the C shell script under 
> tools/ directory, as:
> 
> setenv PATH /path/to/bin:$PATH
> setenv LD_LIBRARY_PATH /path/to/lib:$LD_LIBRARY_PATH

Instead, you might want to try:

   /path/to/mpirun ...

which will do the same thing as mpirun's --prefix option (see mpirun(1) for 
details here), and/or use the --enable-mpi-prefix-by-default configure option.  
This option, as is probably pretty obvious :-), makes mpirun behave as if the 
--prefix option was specified on the command line, with an argument equal to 
the $prefix from configure.
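Concretely, the two equivalent invocations might look like this (the install
path and application name are placeholders):

```shell
# 1. Invoke mpirun via its absolute path -- this behaves as if --prefix
#    had been given, so remote orted daemons find the right install tree.
/path/to/openmpi/bin/mpirun -np 4 ./my_app

# 2. Or pass --prefix explicitly, pointing at the installation root
#    (the directory containing bin/ and lib/).
mpirun --prefix /path/to/openmpi -np 4 ./my_app
```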

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
Thank you guys for information.

I don't know where I should start. This is my first experience
using Open MPI. Is there a simple calculation I can run on my 2 laptops? 
A very, very simple tutorial for dummies would be much appreciated...

On Tue, 2011-03-22 at 13:34 -0400, Prentice Bisbal wrote:

> I'd like to point out that nothing special needs to be done because
> you're using a wireless network. As long as you're using TCP for your
> message passing, it won't make a difference what you're using as long as
> you have TCP/IP configured correctly.
> 
> On 03/22/2011 10:42 AM, Jeff Squyres wrote:
> > There's lots of good MPI tutorials on the web.
> > 
> > My favorites are at the NCSA web site; if you get a free account, you can 
> > login and see their course listings.
> > 
> > 
> > On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
> > 
> >> Dear All,
> >>
> >> I am newbie in parallel computing and would like to ask.
> >>
> >> I have switch and 2 laptops:
> >>• Dell inspiron 640, dual core 2 gb ram
> >>• Dell inspiron 1010 intel atom 1 gb ram
> >>
> >> Both laptop running Ubuntu 10.04 under wireles network using TP-LINK 
> >> access point.
> >>
> >> I am wondering if you have tutorial and source code as demo of simple 
> >> parallel computing for  2 laptops to perform simultaneous computation.
> >>
> >> Riza
> >> ___
> >> users mailing list
> >> us...@open-mpi.org
> >> http://www.open-mpi.org/mailman/listinfo.cgi/users
> > 
> > 
> 




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
Thanks, Jeff.

How can I get a free account? It requires a username and password:

http://hpcsoftware.ncsa.illinois.edu/Software/user/show_all.php?deploy_id=989&view=NCSA%20&PHPSESSID=247ec50d90ddc9b3e8d7e1631bc1efa1
A username and password are being requested by
https://internal.ncsa.uiuc.edu. The site says: "Secure (SSL) Kerberos
Login"



On Tue, 2011-03-22 at 10:42 -0400, Jeff Squyres wrote:

> There's lots of good MPI tutorials on the web.
> 
> My favorites are at the NCSA web site; if you get a free account, you can 
> login and see their course listings.
> 
> 
> On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
> 
> > Dear All,
> > 
> > I am newbie in parallel computing and would like to ask.
> > 
> > I have switch and 2 laptops:
> > • Dell inspiron 640, dual core 2 gb ram
> > • Dell inspiron 1010 intel atom 1 gb ram
> > 
> > Both laptop running Ubuntu 10.04 under wireles network using TP-LINK access 
> > point.
> > 
> > I am wondering if you have tutorial and source code as demo of simple 
> > parallel computing for  2 laptops to perform simultaneous computation.
> > 
> > Riza
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Jeff Squyres
Try this URL:

http://www.citutor.org/login.php


On Mar 22, 2011, at 2:19 PM, Abdul Rahman Riza wrote:

> Thank Jeff,
> 
> How can I get free account? It requires username and password
> 
> http://hpcsoftware.ncsa.illinois.edu/Software/user/show_all.php?deploy_id=989&view=NCSA%20&PHPSESSID=247ec50d90ddc9b3e8d7e1631bc1efa1
> A username and password are being requested by 
> https://internal.ncsa.uiuc.edu. The site says: "Secure (SSL) Kerberos Login"
> 
> 
> 
> On Tue, 2011-03-22 at 10:42 -0400, Jeff Squyres wrote:
>> There's lots of good MPI tutorials on the web.
>> 
>> My favorites are at the NCSA web site; if you get a free account, you can 
>> login and see their course listings.
>> 
>> 
>> On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
>> 
>> > Dear All,
>> > 
>> > I am newbie in parallel computing and would like to ask.
>> > 
>> > I have switch and 2 laptops:
>> >• Dell inspiron 640, dual core 2 gb ram
>> >• Dell inspiron 1010 intel atom 1 gb ram
>> > 
>> > Both laptop running Ubuntu 10.04 under wireles network using TP-LINK 
>> > access point.
>> > 
>> > I am wondering if you have tutorial and source code as demo of simple 
>> > parallel computing for  2 laptops to perform simultaneous computation.
>> > 
>> > Riza
>> > ___
>> > users mailing list
>> > us...@open-mpi.org
>> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Jeff Squyres
Look in Open MPI's examples/ directory.
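For a two-laptop run along the lines of this thread, the workflow with the
bundled examples might look like the sketch below. The hostnames are
placeholders, and it assumes Open MPI is installed at the same path on both
machines with passwordless ssh between them:

```shell
# Compile one of the bundled examples with the Open MPI wrapper compiler.
cd openmpi-1.4.3/examples
mpicc hello_c.c -o hello_c

# List both laptops in a hostfile (hostnames here are placeholders).
cat > myhosts <<EOF
laptop1 slots=2
laptop2 slots=1
EOF

# Launch 3 processes across the two machines; messages go over TCP/IP,
# so the wireless network needs no special handling.
mpirun -np 3 --hostfile myhosts ./hello_c
```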


On Mar 22, 2011, at 2:15 PM, Abdul Rahman Riza wrote:

> Thank you guys for information.
> 
> I don;t know from where I should start. This is my first experience using 
> OpenMPI. Is there any simple calculation using my 2 laptops? 
> Please if there is very very simple tutorial for dummies...
> 
> On Tue, 2011-03-22 at 13:34 -0400, Prentice Bisbal wrote:
>> I'd like to point out that nothing special needs to be done because
>> you're using a wireless network. As long as you're using TCP for your
>> message passing, it won't make a difference what you're using as long as
>> you have TCP/IP configured correctly.
>> 
>> On 03/22/2011 10:42 AM, Jeff Squyres wrote:
>> > There's lots of good MPI tutorials on the web.
>> > 
>> > My favorites are at the NCSA web site; if you get a free account, you can 
>> > login and see their course listings.
>> > 
>> > 
>> > On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
>> > 
>> >> Dear All,
>> >>
>> >> I am newbie in parallel computing and would like to ask.
>> >>
>> >> I have switch and 2 laptops:
>> >>   • Dell inspiron 640, dual core 2 gb ram
>> >>   • Dell inspiron 1010 intel atom 1 gb ram
>> >>
>> >> Both laptop running Ubuntu 10.04 under wireles network using TP-LINK 
>> >> access point.
>> >>
>> >> I am wondering if you have tutorial and source code as demo of simple 
>> >> parallel computing for  2 laptops to perform simultaneous computation.
>> >>
>> >> Riza
>> >> ___
>> >> users mailing list
>> >> us...@open-mpi.org
>> >> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
THANKS JEFF..!!

On Tue, 2011-03-22 at 14:20 -0400, Jeff Squyres wrote:

> Try this URL:
> 
> http://www.citutor.org/login.php
> 
> 
> On Mar 22, 2011, at 2:19 PM, Abdul Rahman Riza wrote:
> 
> > Thank Jeff,
> > 
> > How can I get free account? It requires username and password
> > 
> > http://hpcsoftware.ncsa.illinois.edu/Software/user/show_all.php?deploy_id=989&view=NCSA%20&PHPSESSID=247ec50d90ddc9b3e8d7e1631bc1efa1
> > A username and password are being requested by 
> > https://internal.ncsa.uiuc.edu. The site says: "Secure (SSL) Kerberos Login"
> > 
> > 
> > 
> > On Tue, 2011-03-22 at 10:42 -0400, Jeff Squyres wrote:
> >> There's lots of good MPI tutorials on the web.
> >> 
> >> My favorites are at the NCSA web site; if you get a free account, you can 
> >> login and see their course listings.
> >> 
> >> 
> >> On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
> >> 
> >> > Dear All,
> >> > 
> >> > I am newbie in parallel computing and would like to ask.
> >> > 
> >> > I have switch and 2 laptops:
> >> >  • Dell inspiron 640, dual core 2 gb ram
> >> >  • Dell inspiron 1010 intel atom 1 gb ram
> >> > 
> >> > Both laptop running Ubuntu 10.04 under wireles network using TP-LINK 
> >> > access point.
> >> > 
> >> > I am wondering if you have tutorial and source code as demo of simple 
> >> > parallel computing for  2 laptops to perform simultaneous computation.
> >> > 
> >> > Riza
> >> > ___
> >> > users mailing list
> >> > us...@open-mpi.org
> >> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >> 
> > 
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
>