Hi,
I have also tried changing the number of MPI processes, but I am still
unable to reduce my execution time. Here is my code:
http://seshendramln.blogspot.se/ -- please help me find the problem.
With this code I get the same execution time whether I increase or
decrease the number of nodes.
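
To follow Jeff's suggestion below about timing at a finer grain, here is a
minimal sketch (not my actual code; do_local_work, the buffer size, and the
gather are just placeholders) that times the computation phase and the
communication phase separately with MPI_Wtime and reports the slowest
process for each, so it is visible which part grows as processes are added:

#include <mpi.h>
#include <iostream>
#include <vector>
#include <cstddef>

// Placeholder for the real computation; stands in for the actual work.
static void do_local_work(std::vector<double>& buf) {
    for (double& x : buf)
        x = x * 2.0 + 1.0;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;                  // placeholder problem size
    std::vector<double> buf(n, 1.0);

    // Time the computation phase on its own.
    double t0 = MPI_Wtime();
    do_local_work(buf);
    double t_compute = MPI_Wtime() - t0;

    // Time the communication phase on its own (here a gather to the master).
    std::vector<double> all;
    if (rank == 0)
        all.resize(static_cast<std::size_t>(n) * size);
    t0 = MPI_Wtime();
    MPI_Gather(buf.data(), n, MPI_DOUBLE,
               all.data(), n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    double t_comm = MPI_Wtime() - t0;

    // Report the slowest process in each phase so runs with different
    // numbers of processes can be compared fairly.
    double max_compute = 0.0, max_comm = 0.0;
    MPI_Reduce(&t_compute, &max_compute, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&t_comm,    &max_comm,    1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::cout << "compute: " << max_compute << " s,  communication: "
                  << max_comm << " s" << std::endl;

    MPI_Finalize();
    return 0;
}

With numbers like these it should become clear whether it is the computation
or the traffic to and from the master node that grows with the node count.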

Thank you.


With regards
seshendra


On Fri, May 4, 2012 at 12:55 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> You probably need to be more fine-grained in your timing.  Find out
> exactly what is increasing in time.  This is a common symptom for codes
> that do not scale well -- i.e., adding more MPI processes actually causes
> it to slow down.
>
>
> On May 3, 2012, at 7:48 AM, seshendra seshu wrote:
>
> > Hi,
> > I have written a parallel program and run it on 4, 8, and 16 nodes,
> > measuring the execution time on the master node with MPI_Wtime. The
> > problem is that the execution time increases rapidly: the non-parallel
> > program takes 55 sec, while the parallel program takes 60 sec on 2 nodes,
> > 74 sec on 4 nodes, 120 sec on 8 nodes, and 180 sec on 16 nodes. Can you
> > tell me what is wrong? In the parallel version the time should decrease,
> > but instead it increases, and I do not know the reason. I measured the
> > time as shown below:
> >
> >
> > #include <mpi.h>
> > #include <iostream>
> > using namespace std;
> >
> > int main(int argc, char** argv)
> > {
> >     MPI_Init(&argc, &argv);
> >     int rank;
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >     double start, end;
> >     start = MPI_Wtime();            // start timing
> >     // ... some work is done here ...
> >     if (rank == 0) {
> >         // master node sends work out and receives the results back
> >         end = MPI_Wtime();
> >         cout << "execution time " << end - start << endl;
> >     }
> >     // ... slave nodes do some work ...
> >     MPI_Finalize();
> >     return 0;
> > }
> >
> > Please help me solve this problem.
> >
> > --
> >  WITH REGARDS
> > M.L.N.Seshendra
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



-- 
 WITH REGARDS
M.L.N.Seshendra
