Could someone please try this test program in a Windows environment?
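
For reference, I am launching it as follows (the executable name comes from
my local build; yours will differ):

    mpirun -np 2 mar_f_dp.exe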

Thank you.
-Hiral

On Thu, May 12, 2011 at 10:59 AM, hi <hiralsmaill...@gmail.com> wrote:
> Any comments or suggestions on my update below?
>
>
>
> On Wed, May 11, 2011 at 12:59 PM, hi <hiralsmaill...@gmail.com> wrote:
>> Hi Jeff,
>>
>>> Can you send the info listed on the help page?
>>
>> From the HELP page...
>>
>> ***For run-time problems:
>> 1) Check the FAQ first. Really. This can save you a lot of time; many
>> common problems and solutions are listed there.
>> I couldn't find a relevant entry in the FAQ.
>>
>> 2) The version of Open MPI that you're using.
>> I am using the pre-built openmpi-1.5.3 64-bit and 32-bit binaries on Windows 7.
>> I also tried a locally built openmpi-1.5.2 using the Visual Studio 2008
>> 32-bit compilers.
>> I tried various compilers: VS-9 32-bit, VS-10 64-bit, and the
>> corresponding Intel ifort compilers.
>>
>> 3) The config.log file from the top-level Open MPI directory, if
>> available (please compress!).
>> I don't have one.
>>
>> 4) The output of the "ompi_info --all" command from the node where
>> you're invoking mpirun.
>> See the output of the pre-built "openmpi-1.5.3_x64/bin/ompi_info --all"
>> in the attachments.
>>
>> 5) If running on more than one node --
>> I am running the test program on a single node.
>>
>> 6) A detailed description of what is failing.
>> Already described in this post.
>>
>> 7) Please include information about your network:
>> As I am running the test program on a single local machine, this should
>> not be required.
>>
>>> You forgot ierr in the call to MPI_Finalize.  You also paired 
>>> DOUBLE_PRECISION data with MPI_INTEGER in the call to allreduce.  And you 
>>> mixed sndbuf and rcvbuf in the call to allreduce, meaning that when you 
>>> print rcvbuf afterwards, it'll always still be 0.
>>
>> As I am not a Fortran programmer, that was my mistake!
>>
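>> For clarity, the problematic lines in my original attachment probably
>> looked something like the following (a hedged reconstruction based on
>> Jeff's comments; the attachment itself is not reproduced in this thread):
>>
>>            ! wrong: MPI_INTEGER paired with DOUBLE PRECISION buffers, and
>>            ! sndbuf/rcvbuf swapped, so rcvbuf stays 0 after the call
>>            CALL MPI_ALLREDUCE(rcvbuf, sndbuf, n,
>>     &              MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, ierr)
>>            ! wrong: the ierr argument is missing
>>            CALL MPI_Finalize()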
>>
>>>        program Test_MPI
>>>            use mpi
>>>            implicit none
>>>
>>>            DOUBLE PRECISION rcvbuf(5), sndbuf(5)
>>>            INTEGER nproc, rank, ierr, n, i, ret
>>>
>>>            n = 5
>>>            do i = 1, n
>>>                sndbuf(i) = 2.0
>>>                rcvbuf(i) = 0.0
>>>            end do
>>>
>>>            call MPI_INIT(ierr)
>>>            call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
>>>            call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
>>>            write(*,*) "size=", nproc, ", rank=", rank
>>>            write(*,*) "start --, rcvbuf=", rcvbuf
>>>            CALL MPI_ALLREDUCE(sndbuf, rcvbuf, n,
>>>     &              MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_WORLD, ierr)
>>>            write(*,*) "end --, rcvbuf=", rcvbuf
>>>
>>>            CALL MPI_Finalize(ierr)
>>>        end
>>>
>>> (you could use "include 'mpif.h'", too -- I tried both)
>>>
>>> This program works fine for me.
>>
>> I am observing the same crash described in this thread (when executing
>> as "mpirun -np 2 mar_f_dp.exe"), even with the above correct and simple
>> test program. I commented out 'use mpi' because it gave me an "Error in
>> compiled module file" error, so I used the 'include "mpif.h"' statement
>> instead (see attachment).
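>>
>> The only change was in the program header (a minimal sketch; the
>> attached file may differ slightly in layout):
>>
>>            program Test_MPI
>> !           use mpi                ! commented out: "Error in compiled module file"
>>            implicit none
>>            include 'mpif.h'        ! header file works where the module does not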
>>
>> It seems to be a Windows-specific issue (I could run this test program
>> on Linux with openmpi-1.5.1).
>>
>> Can anybody try this test program on Windows?
>>
>> Thank you in advance.
>> -Hiral
>>
>
