Classification: UNCLASSIFIED
Caveats: NONE

Sounds like there is an issue in transferring the value to the core that is 
printing out your timestep data.

I doubt that the problem is MPI_COMM_WORLD; the communicator only describes which
processes take part in the communication and shouldn't affect the values transferred.

You will want to look at your MPI_Reduce and MPI_Send calls to make sure that 
everything is ending up on the correct cores. Keep in
mind that with MPI each process has its own local copy of a value, so changes 
on one core don't affect any others.
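
As an illustration (a minimal C sketch, not taken from your code): after MPI_Reduce,
only the root rank holds a defined result, so printing the total from any other rank
reads that rank's uninitialized local copy, which can easily show up as garbage or NaN.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double local = 1.0;
    double total;                  /* only defined on the root after the reduce */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)                 /* print only where the result actually lives */
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}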

-Andrew

-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Muhammad Ashfaqur 
Rahman
Sent: Saturday, February 14, 2015 6:10 AM
To: Open MPI Users
Subject: Re: [OMPI users] prob in running two mpi merged program (UNCLASSIFIED)

Dear Andrew Burns,

Many thanks for providing steps to check my programs. The combined program is now
running in parallel, but the values from one of the programs are appearing as NaN.
The possible reason may be MPI_COMM_WORLD; I am still trying to sort it out. I have
attached the related files and outputs for your kind suggestions:


Regards

Ashfaq


On Fri, Feb 6, 2015 at 6:35 PM, Burns, Andrew J CTR (US) 
<andrew.j.burns35....@mail.mil> wrote:


        Classification: UNCLASSIFIED
        Caveats: NONE

        The placement of clminitialize and clmstop feels a little awkward, but it
        doesn't look like it would break the program. If you were calling MPI_Init
        more than once it would throw an error, and if Finalize were called early in
        clmstop the only serial section would be the deallocation.



        One other thought is to make sure that you are actually launching the
        program with multiple processes.

        The command should look like the following (where NPROCS is the number of
        processes to use):

        mpirun -n NPROCS ./program

        If you were to launch the program with simply "./program" it would run
        serially. It would also run serially if you were to call "mpirun ./program",
        since the number of processes is not specified.




        If the program is properly launched in parallel and then drops back to
        serial, you should be able to track down the location where this happens by
        inserting some core polling similar to the following:

        int coreid, numprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &coreid);    /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);  /* total number of processes */
        for (int i = 0; i < numprocs; ++i)
        {
          if (i == coreid)                         /* comparison, not assignment */
          {
            printf("core %d out of %d\n", coreid, numprocs);
          }
          MPI_Barrier(MPI_COMM_WORLD);             /* keep the output ordered by rank */
        }
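
        With a proper parallel launch on N cores you should see N lines, one per
        rank; if only "core 0 out of 1" appears, the run is serial from the start.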



        You will want to check all of the calls inside the main loop to ensure that
        none of them call MPI_Finalize.
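
        If that is hard to spot by inspection, you can also test at various points
        whether MPI has already been shut down (a sketch assuming a C call site;
        the Fortran binding is MPI_FINALIZED):

        int finalized;
        MPI_Finalized(&finalized);   /* nonzero once MPI_Finalize has been called */
        if (finalized)
        {
          printf("MPI has already been finalized at this point\n");
        }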

        -Andrew Burns

        -----Original Message-----
        From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Muhammad 
Ashfaqur Rahman
        Sent: Friday, February 06, 2015 9:50 AM
        To: Open MPI Users

        Subject: Re: [OMPI users] prob in running two mpi merged program 
(UNCLASSIFIED)

        Dear Andrew Burns,
        Many thanks for your correct understanding and descriptive suggestion.
        I have now changed the FLAGS of one program so that it does not use any MPI
        flags, i.e., switched off MPI for it, and kept the MPI options for the other
        set.
        After that, MPI initialization is called at the beginning of the main
        program (aadmn.F here) and MPI_Finalize is called at the end (in clmstop.F90
        here).
        After compilation it still runs serially.
        I have attached the files containing the MPI calls and the Makefile for your
        kind consideration.


        Regards
        Ashfaq

        On Thu, Feb 5, 2015 at 8:25 PM, Burns, Andrew J CTR (US) 
<andrew.j.burns35....@mail.mil> wrote:


                Classification: UNCLASSIFIED
                Caveats: NONE

                Okay, I think I get what's going on. I think you're calling one
                MPI-capable program from within another MPI program. What you have
                to do is assume that the program being called already had MPI_Init
                called, and that MPI_Finalize will be called after the program
                returns.

                Example (sketched in C for brevity):

                #include <mpi.h>

                int Program2(int x, MPI_Comm comm);   /* the "called" program */

                int main(int argc, char **argv)
                {
                  MPI_Init(&argc, &argv);

                  int x = 1;

                  int p2result = Program2(x, MPI_COMM_WORLD);

                  /* share rank 0's copy of the result with every rank */
                  MPI_Bcast(&p2result, 1, MPI_INT, 0, MPI_COMM_WORLD);

                  MPI_Finalize();
                  return 0;
                }

                int Program2(int x, MPI_Comm comm)
                {
                  int returnval;
                  /* only uses the communicator it is handed; it never calls
                     MPI_Init or MPI_Finalize itself */
                  MPI_Allreduce(&x, &returnval, 1, MPI_INT, MPI_SUM, comm);
                  return returnval;
                }



                If the second program were to be:

                int Program2(int x, MPI_Comm comm)
                {
                  MPI_Init(NULL, NULL);    /* error: MPI_Init was already called in main */
                  int returnval;
                  MPI_Allreduce(&x, &returnval, 1, MPI_INT, MPI_SUM, comm);
                  MPI_Finalize();          /* shuts MPI down for the whole run, far too early */
                  return returnval;
                }

                The program would return to serial when MPI_Finalize is first 
called, potentially throwing several errors.
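
                If the called program also needs to work standalone, one common
                guard (a sketch, not taken from your code) is to test whether MPI
                is already up before initializing, and to leave the one MPI_Finalize
                to the outermost program:

                int initialized;
                MPI_Initialized(&initialized);  /* nonzero if MPI_Init was already called */
                if (!initialized)
                {
                  MPI_Init(NULL, NULL);         /* only init when running standalone */
                }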

                -Andrew Burns

                -----Original Message-----
                From: users [mailto:users-boun...@open-mpi.org] On Behalf Of 
Muhammad Ashfaqur Rahman
                Sent: Wednesday, February 04, 2015 3:42 PM
                To: Open MPI Users

                Subject: Re: [OMPI users] prob in running two mpi merged 
program (UNCLASSIFIED)

                Dear Andrew Burns,
                Thank you for your ideas. Your guess is partly correct: I am trying
                to merge two sets of programs into one executable and then run it
                with MPI.
                As per your suggestion, I have omitted the MPI_Finalize from one set
                and also commented out the MPI_Barrier calls in some parts.
                But it is still serial.
                For your reference, the Makefile is attached.


                Regards
                Ashfaq


                On Tue, Feb 3, 2015 at 6:26 PM, Burns, Andrew J CTR (US) 
<andrew.j.burns35....@mail.mil> wrote:


                        Classification: UNCLASSIFIED
                        Caveats: NONE

                        If I could venture a guess, it sounds like you are trying to
                        merge two separate programs into one executable and run them
                        in parallel via MPI.

                        The problem sounds like an issue where your program starts
                        in parallel but then changes back to serial while the
                        program is still executing.

                        I can't be entirely sure without looking at the code itself.

                        One guess is that MPI_Finalize is in the wrong location.
                        Finalize should be called to end the parallel section and
                        move the program back to serial. Typically this means that
                        Finalize will be very close to the last line of the program.

                        It may also be possible that, with the way your program is
                        structured, execution is effectively serial because only one
                        core is processing at any given moment. This may be due to
                        extensive use of barriers or similar functions.
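
                        For example, a pattern like the following (a hypothetical
                        sketch, with do_work standing in for the real computation)
                        runs only one rank at a time even though many processes
                        were launched:

                        for (int i = 0; i < numprocs; ++i)
                        {
                          if (i == rank)
                          {
                            do_work();                 /* hypothetical per-rank work */
                          }
                          MPI_Barrier(MPI_COMM_WORLD); /* ranks take turns, one at a time */
                        }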

                        Andrew Burns
                        Lockheed Martin
                        Software Engineer
                        410-306-0409
                        ARL DSRC
                        andrew.j.bur...@us.army.mil
                        andrew.j.burns35....@mail.mil

                        -----Original Message-----
                        From: users [mailto:users-boun...@open-mpi.org] On 
Behalf Of Ralph Castain
                        Sent: Tuesday, February 03, 2015 9:05 AM
                        To: Open MPI Users
                        Subject: Re: [OMPI users] prob in running two mpi 
merged program

                        I'm afraid I don't quite understand what you are saying, so
                        let's see if I can clarify. You have two Fortran MPI
                        programs. You start one using "mpiexec". You then start the
                        other one as a singleton - i.e., you just run "myapp"
                        without using mpiexec. The two apps are attempting to
                        execute an MPI_Connect/accept so they can "join".

                        Is that correct? You mention MPICH in your statement about
                        one of the procs - are you using MPICH or Open MPI? If the
                        latter, which version are you using?

                        Ralph


                        On Mon, Feb 2, 2015 at 11:35 PM, Muhammad Ashfaqur 
Rahman <ashfaq...@gmail.com> wrote:


                                Dear All,
                                Please accept my greetings. I am new to MPI. I have
                                problems with a parallel run when two Fortran MPI
                                programs are merged into one executable. If these
                                two are kept separate, they run in parallel.

                                One program has used SPMD and the other one has used
                                the MPICH header directly.

                                The other issue is that, while trying to run the
                                above-mentioned merged program with MPI, it first
                                starts with separate parallel instances of the same
                                step, and then after some steps it becomes serial.

                                Please help me in this regard.

                                Ashfaq
                                Ph.D Student
                                Dept. of Meteorology





                        Classification: UNCLASSIFIED
                        Caveats: NONE








                Classification: UNCLASSIFIED
                Caveats: NONE








        Classification: UNCLASSIFIED
        Caveats: NONE







Classification: UNCLASSIFIED
Caveats: NONE

