Hi,

I use InfiniBand (--mca btl openib,self).
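The full launch command looks roughly like this (combining the BTL setting
above with the host/process count from the test in the quoted thread below):

command line : mpirun --mca btl openib,self -host node03 -np 16 ./testrun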

I know that waiting in an allreduce might cause high CPU consumption.
I intentionally created such a situation to check the system CPU usage
when some processes are kept waiting. I'm afraid that it might
affect execution speed. Indeed, my application (MUMPS-based)
built with 1.7rc tends to be slightly slower than with 1.6.2.
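
If the spinning itself turns out to matter, I suppose I could also try
Open MPI's yield-when-idle (degraded) mode, roughly like this (assuming
the mpi_yield_when_idle MCA parameter still applies to 1.7rc):

command line : mpirun --mca mpi_yield_when_idle 1 --mca btl openib,self -host node03 -np 16 ./testrun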

That's why I'd like to ask here about the reason for this behavior
and its influence.

Regards,
tmishima

> Not sure why they would be different, though there are changes to the
> code, of course. Would have to dig deep to find out why - perhaps one of
> the BTL developers will chime in here. Which transport
> are you using (Infiniband, TCP, ?)?
>
> As for why the cpu gets consumed, it's that allreduce that is causing it.
> You put one rank to sleep, and then have all the others jump into allreduce
> - so they all start spinning like mad trying to
> complete the collective.
>
>
> On Oct 28, 2012, at 8:38 PM, tmish...@jcity.maeda.co.jp wrote:
>
> >
> >
> > Hi,
> >
> > I made my testing program simpler as shown below.
> >
> > I compared openmpi-1.6.2 and openmpi-1.7rc1/rc4 again
> > in system CPU usage while some processes wait for
> > others.
> >
> > Then, the result is the same as reported before.
> >
> >               system CPU usage
> > openmpi-1.6.2     0%
> > openmpi-1.7rc1   70%
> > openmpi-1.7rc4   70%
> >
> > My question is why openmpi-1.7rc is so different from
> > openmpi-1.6.2 in system CPU usage. Is this the intended
> > behavior?
> >
> >      INCLUDE 'mpif.h'
> >      CALL MPI_INIT(IERR)
> > c
> >      CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
> >      IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! WAIT 180 SEC.
> > c
> >      ISND = 1
> >      CALL MPI_ALLREDUCE(ISND,IRCV,1,MPI_INTEGER,MPI_SUM,
> >     +MPI_COMM_WORLD,IERR)
> >      CALL MPI_FINALIZE(IERR)
> > c
> >      END
> >
> > Regards,
> > tmishima
> >
> >> I'm not sure - just fishing for possible answers. When we see high cpu
> >> usage, it usually occurs during MPI communications - when a process is
> >> waiting for a message to arrive, it polls at a high rate
> >> to keep the latency as low as possible. Since you have one process
> >> "sleep" before calling the finalize sequence, it could be that the other
> >> process is getting held up on a receive and thus eating the
> >> cpu.
> >>
> >> There really isn't anything special going on during Init/Finalize, and
> >> OMPI itself doesn't have any MPI communications in there. I'm not familiar
> >> with MUMPS, but if MUMPS finalize is doing something
> >> like an MPI_Barrier to ensure the procs finalize together, then that
> >> would explain what you see. The docs I could find imply there is some MPI
> >> embedded in MUMPS, but I couldn't find anything specific
> >> about finalize.
> >>
> >>
> >> On Oct 25, 2012, at 6:43 PM, tmish...@jcity.maeda.co.jp wrote:
> >>
> >>>
> >>>
> >>> Hi Ralph,
> >>>
> >>> Do you really mean "MUMPS finalize"? I don't think it has much relation
> >>> to this behavior.
> >>>
> >>> Anyway, I'm just a MUMPS user. I have to ask the MUMPS developers
> >>> about what MUMPS initialize and finalize do.
> >>>
> >>> Regards,
> >>> tmishima
> >>>
> >>>> Out of curiosity, what does MUMPS finalize do? Does it send a message
> >>>> or do a barrier operation?
> >>>>
> >>>>
> >>>> On Oct 25, 2012, at 5:53 PM, tmish...@jcity.maeda.co.jp wrote:
> >>>>
> >>>>>
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> I find that the system CPU time of openmpi-1.7rc1 is quite different
> >>>>> from that of openmpi-1.6.2, as shown in the attached ganglia display.
> >>>>>
> >>>>> About 2 years ago, I reported a similar behavior of openmpi-1.4.3.
> >>>>> The testing method is the same one I used at that time.
> >>>>> (please see my post entitled "SYSTEM CPU with OpenMPI 1.4.3")
> >>>>>
> >>>>> Is this due to a pre-release version's check routine, or is
> >>>>> something going wrong?
> >>>>>
> >>>>> Best regards,
> >>>>> Tetsuya Mishima
> >>>>>
> >>>>> ------------------
> >>>>> Testing program:
> >>>>>    INCLUDE 'mpif.h'
> >>>>>    INCLUDE 'dmumps_struc.h'
> >>>>>    TYPE (DMUMPS_STRUC) MUMPS_PAR
> >>>>> c
> >>>>>    MUMPS_PAR%COMM = MPI_COMM_WORLD
> >>>>>    MUMPS_PAR%SYM = 1
> >>>>>    MUMPS_PAR%PAR = 1
> >>>>>    MUMPS_PAR%JOB = -1 ! INITIALIZE MUMPS
> >>>>>    CALL MPI_INIT(IERR)
> >>>>>    CALL DMUMPS(MUMPS_PAR)
> >>>>> c
> >>>>>    CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
> >>>>>    IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! WAIT 180 SEC.
> >>>>> c
> >>>>>    MUMPS_PAR%JOB = -2 ! FINALIZE MUMPS
> >>>>>    CALL DMUMPS(MUMPS_PAR)
> >>>>>    CALL MPI_FINALIZE(IERR)
> >>>>> c
> >>>>>    END
> >>>>> (This does nothing but call the initialize & finalize
> >>>>> routines of MUMPS & MPI.)
> >>>>>
> >>>>> command line : mpirun -host node03 -np 16 ./testrun
> >>>>>
> >>>>> (See attached file: openmpi17rc1-cmp.bmp)
