[OMPI users] MPI_Type_create_subarray fails!

2007-01-30 Thread Ivan de Jesus Deras Tabora
Hi, Recently I installed OpenMPI 1.1.4 using the source RPM on Fedora Core 6, then I tried to run some benchmarks from NASA. The first ones I tried were some I/O benchmarks. They compile, but when I run them, they generate the following error: [abc:25584] *** An error occurred in MPI_Type_create_subarray [abc
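For reference, a minimal C sketch of a well-formed MPI_Type_create_subarray call is shown below; it is illustrative only (the array sizes, offsets, and datatype are made up, not taken from the NAS benchmark), but it can help rule out argument mistakes on the caller's side before suspecting the library itself.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* A 100x100 global 2D array; this process describes a 50x50
           block starting at (0,0). Values are purely illustrative. */
        int sizes[2]    = {100, 100};
        int subsizes[2] = {50, 50};
        int starts[2]   = {0, 0};
        MPI_Datatype subarray;

        int err = MPI_Type_create_subarray(2, sizes, subsizes, starts,
                                           MPI_ORDER_C, MPI_DOUBLE,
                                           &subarray);
        if (err != MPI_SUCCESS) {
            fprintf(stderr, "MPI_Type_create_subarray failed: %d\n", err);
        } else {
            MPI_Type_commit(&subarray);
            MPI_Type_free(&subarray);
        }

        MPI_Finalize();
        return 0;
    }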

Re: [OMPI users] ompi_info segmentation fault

2007-01-30 Thread Jeff Squyres
Please note that due to a mixup in the 1.1.3 release, we just released v1.1.4. :-( See http://www.open-mpi.org/community/lists/announce/2007/01/0010.php for the official announcement. The short version is that the wrong tarball was posted to the OMPI web site for the 1.1.3 release (doh!)

Re: [OMPI users] ompi_info segmentation fault

2007-01-30 Thread Avishay Traeger
Jeff, Upgrading to 1.1.3 solved both issues - thank you very much! Avishay On Mon, 2007-01-29 at 20:59 -0500, Jeff Squyres wrote: > I'm quite sure that we have since fixed the command line parsing > problem, and I *think* we fixed the mmap problem. > > Is there any way that you can upgrade to

[OMPI users] no MPI_2COMPLEX and MPI_2DOUBLE_COMPLEX

2007-01-30 Thread Bert Wesarg
Hello, I see the extern definitions in mpi.h for ompi_mpi_2cplex and ompi_mpi_2dblcplex, but no #define for MPI_2COMPLEX and MPI_2DOUBLE_COMPLEX. Greetings Bert Wesarg
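Assuming the missing macros would follow the same pattern used for the other predefined datatype handles in Open MPI's mpi.h (an assumption, not verified against the actual header), the additions would look roughly like this:

    /* Hypothetical additions to mpi.h, mirroring how other predefined
       datatypes are exposed; names of the backing objects are taken
       from the message above. */
    #define MPI_2COMPLEX        (&ompi_mpi_2cplex)
    #define MPI_2DOUBLE_COMPLEX (&ompi_mpi_2dblcplex)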

Re: [OMPI users] mutex deadlock in btl tcp

2007-01-30 Thread George Bosilca
Jeremy, You're right. Thanks for pointing it out. I made the change in the trunk. george. On Jan 30, 2007, at 3:40 AM, Jeremy Buisson wrote: Dear Open MPI users list, From time to time, I experience a mutex deadlock in Open-MPI 1.1.2. The stack trace is available at the end of the mail. The d

Re: [OMPI users] Scrambled communications using ssh starter on multiple nodes.

2007-01-30 Thread Fisher, Mark S
The code can be freely downloaded by US citizens (it is export controlled) at http://zephyr.lerc.nasa.gov/wind/. I can also provide you with the test case, which is very small. I am a developer of the code and can help you dig through it if you decide to download it. On the above page you will need to r

Re: [OMPI users] Scrambled communications using ssh starter on multiple nodes.

2007-01-30 Thread Jeff Squyres
Is there any way that you can share the code? On Jan 30, 2007, at 9:57 AM, Fisher, Mark S wrote: The slaves send specific requests to the master and then wait for a reply to that request. For instance, a slave might send a request to read a variable from the file. The master will read the variable a

Re: [OMPI users] Scrambled communications using ssh starter on multiple nodes.

2007-01-30 Thread Fisher, Mark S
The slaves send specific requests to the master and then wait for a reply to that request. For instance, a slave might send a request to read a variable from the file. The master will read the variable and send it back with the same tag in response. Thus there is never more than one response at a time t

Re: [OMPI users] Scrambled communications using ssh starter on multiple nodes.

2007-01-30 Thread Jeff Squyres
On Jan 30, 2007, at 9:35 AM, Fisher, Mark S wrote: The master process uses both MPI_ANY_SOURCE and MPI_ANY_TAG while waiting for requests from slave processes. The slaves sometimes use MPI_ANY_TAG but the source is always specified. I think you said that you only had corruption issues on the s

Re: [OMPI users] Scrambled communications using ssh starter on multiple nodes.

2007-01-30 Thread Fisher, Mark S
The master process uses both MPI_ANY_SOURCE and MPI_ANY_TAG while waiting for requests from slave processes. The slaves sometimes use MPI_ANY_TAG but the source is always specified. We have run the code through Valgrind for a number of cases, including the one being used here. The code is Fortran
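A minimal master-side sketch of the request/reply pattern described in this thread (not the WIND code itself; the buffer size and the "tag 0 means finished" convention are invented for illustration) would look like this in C:

    #include <mpi.h>

    #define MAX_REQ 1024

    void master_loop(int nslaves)
    {
        double buf[MAX_REQ];
        MPI_Status status;
        int done = 0;

        while (done < nslaves) {
            /* Wildcard receive: accept a request from any slave, any tag. */
            MPI_Recv(buf, MAX_REQ, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);

            if (status.MPI_TAG == 0) {      /* tag 0 = "I am finished" (assumed) */
                done++;
            } else {
                /* ... service the request, e.g. read a variable from the file ... */
                /* Reply to the requesting rank with the same tag, so that the
                   slave's matching receive picks up exactly this answer. */
                MPI_Send(buf, MAX_REQ, MPI_DOUBLE, status.MPI_SOURCE,
                         status.MPI_TAG, MPI_COMM_WORLD);
            }
        }
    }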

Re: [OMPI users] mpirun related

2007-01-30 Thread Adrian Knoth
On Mon, Jan 29, 2007 at 10:49:10PM -0800, Chevchenkovic Chevchenkovic wrote: > Hi, Hi > mpirun internally uses ssh to launch a program on multiple nodes. > I would like to see the various parameters that are sent to each of > the nodes. How can I do this? You mean adding "pls_rsh_debug=1" to you
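As a usage note (a sketch, assuming the command name "./a.out" and process count below as placeholders), the MCA parameter mentioned above can be set either on the mpirun command line or in a per-user parameter file:

    # On the command line:
    mpirun --mca pls_rsh_debug 1 -np 4 ./a.out

    # Or per user, in ~/.openmpi/mca-params.conf:
    pls_rsh_debug = 1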

[OMPI users] mutex deadlock in btl tcp

2007-01-30 Thread Jeremy Buisson
Dear Open MPI users list, From time to time, I experience a mutex deadlock in Open-MPI 1.1.2. The stack trace is available at the end of the mail. The deadlock seems to be caused by lines 118 & 119 of the ompi/mca/btl/tcp/btl_tcp.c file, in function mca_btl_tcp_add_procs: OBJ_RELEASE(t
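One common way this kind of deadlock arises is a non-recursive mutex being re-acquired by the same thread from inside a critical section, for example when an object released under the lock runs a cleanup path that takes the lock again. The sketch below is a generic illustration of that pattern only; the function names are hypothetical and it is not a claim about what the actual btl_tcp code does.

    /* Generic self-deadlock on a non-recursive mutex (illustrative only). */
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void release_object(void)
    {
        pthread_mutex_lock(&lock);    /* second acquisition: typically blocks forever */
        /* ... tear down shared state ... */
        pthread_mutex_unlock(&lock);
    }

    void add_procs(void)
    {
        pthread_mutex_lock(&lock);
        /* ... an error path discovers a stale object ... */
        release_object();             /* deadlocks: the lock is already held above */
        pthread_mutex_unlock(&lock);
    }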

[OMPI users] mpirun related

2007-01-30 Thread Chevchenkovic Chevchenkovic
Hi, mpirun internally uses ssh to launch a program on multiple nodes. I would like to see the various parameters that are sent to each of the nodes. How can I do this? -chev