Re: [OMPI users] mpirun only works when -np <4 (Gus Correa)

2009-12-10 Thread Mattijs Janssens
... when users start listening to streaming video, doing Matlab calculations, etc., while the MPI programs are running. This tends to oversubscribe the cores, and may lead to crashes.

2) RAM: Can you monitor the RAM usage through "top"? (I presume you are on Linux.) It may show unexpected memory leaks, if they exist. On "top", type "1" (one) to see all cores, and type "f" then "j" to see the core number associated with each process.

3) Do the programs work right with other MPI flavors (e.g. MPICH2)? If not, then it is not OpenMPI's fault.

4) Any possibility that the MPI versions/flavors of mpicc and mpirun that you are using to compile and launch the program are not the same?

5) Are you setting processor affinity on mpiexec?

mpiexec -mca mpi_paffinity_alone 1 -np ... bla, bla ...

Context switching across the cores may also cause trouble, I suppose.

6) Which Linux are you using (uname -a)? On other mailing lists I read reports that only quite recent kernels support all the Intel Nehalem processor features well. I don't have Nehalem, so I can't help here, but the information may be useful for other list subscribers to help you.

***

As for the programs, some programs require specific setup (and even specific compilation) when the number of MPI processes varies. It may help if you tell us a link to the program sites.

Bayesian statistics is not totally out of our business, but phylogenetic trees are not really my league, so please forgive me any bad guesses, but would it need specific compilation or a different set of input parameters to run correctly on a different number of processors? Do the programs mix MPI (message passing) with OpenMP (threads)?

I found this MrBayes, which seems to do the above:

http://mrbayes.csit.fsu.edu/
http://mrbayes.csit.fsu.edu/wiki/index.php/Main_Page

As for ABySS, what is it and where can it be found? It doesn't look like a deep ocean circulation model, as the name suggests.

My $0.02
Gus Correa

-- Mattijs Janssens OpenCFD Ltd. 9 Albert Road, Caversham, Reading RG4 7AN. Tel: +44 (0)118 9471030 Email: m.janss...@opencfd.co.uk URL: http://www.OpenCFD.co.uk

Re: [OMPI users] Parallel Quicksort

2009-08-06 Thread Mattijs Janssens
... Should be easy to make a test application but you'll need to have OpenFOAM installed. Mattijs -- Mattijs Janssens OpenCFD Ltd. 9 Albert Road, Caversham, Reading RG4 7AN. Tel: +44 (0)118 9471030 Email: m.janss...@opencfd.co.uk URL: http://www.OpenCFD.co.uk

Re: [OMPI users] Low performance of Open MPI-1.3 over Gigabit

2009-03-04 Thread Mattijs Janssens
... for Open MPI jobs. Note: On the same cluster Open MPI gives better performance for InfiniBand nodes. What could be the problem for Open MPI over Gigabit? Any flags need to be used? Or is it not that good to use Open MPI on Gigabit? Thanks, Sangamesh -- Mattijs Janssens OpenCFD Ltd. 9 Albert Road, Caversham, Reading RG4 7AN. Tel: +44 (0)118 9471030 Email: m.janss...@opencfd.co.uk URL: http://www.OpenCFD.co.uk

Re: [OMPI users] MPI flavor-agnostic libraries

2009-01-14 Thread Mattijs Janssens
MVAPICH. I can see some ways that might work, but they are pretty complex - for example, I could create an intercept library that loads a real MPI library explicitly and do whatever needs to be done (for example, translating MPI_Comm parameters). Does anyone know of anything th...
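
A minimal sketch of the dlopen-based forwarding idea quoted above (not code from the thread): it assumes the real library is named libmpi.so and forwards only MPI_Init and MPI_Finalize; translating handle types such as MPI_Comm between implementations, which the poster identifies as the hard part, is not shown.

/* Hypothetical intercept library: loads a real MPI at runtime and forwards
 * calls through function pointers obtained with dlsym. Compile with -ldl. */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

static void *real_mpi;                      /* handle to the real MPI library */
static int (*real_init)(int *, char ***);   /* pointer to the real MPI_Init */
static int (*real_finalize)(void);          /* pointer to the real MPI_Finalize */

/* Load the real library once and look up the symbols we forward. */
static void load_real_mpi(void)
{
    if (real_mpi)
        return;
    real_mpi = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);  /* assumed name */
    if (!real_mpi) {
        fprintf(stderr, "intercept: %s\n", dlerror());
        exit(1);
    }
    real_init     = (int (*)(int *, char ***)) dlsym(real_mpi, "MPI_Init");
    real_finalize = (int (*)(void))            dlsym(real_mpi, "MPI_Finalize");
}

/* The application links against these wrappers instead of a real MPI. */
int MPI_Init(int *argc, char ***argv)
{
    load_real_mpi();
    return real_init(argc, argv);
}

int MPI_Finalize(void)
{
    load_real_mpi();
    return real_finalize();
}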

Re: [OMPI users] OpenMPI runtime-specific environment variable?

2008-10-21 Thread Mattijs Janssens
Brian -- Brian M. Adams, PhD (bria...@sandia.gov) Optimization and Uncertainty Estimation Sandia National Laboratories, Albuquerque, NM http://www.sandia.gov/~briadam -- Mattijs Janssens OpenCFD Ltd. 9 Albert Road, Caversham, Reading RG4 7AN. Tel: +44 (0)118 9471030 Email: m.janss...@opencfd.co.uk URL: http://www.OpenCFD.co.uk

Re: [OMPI users] weird problem with passing data between nodes

2008-06-13 Thread Mattijs Janssens
Sounds like a typical deadlock situation: all processors are waiting for one another. I'm not a specialist, but from what I know, if the messages are small enough they'll be offloaded to kernel/hardware and there is no deadlock. That's why it might work for small messages and/or certain MPI implementations.
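
A hypothetical two-rank exchange (not from the thread) illustrating the point: if every rank posts a blocking MPI_Send before its MPI_Recv, the program only completes while the message still fits in the eager/buffered path; a non-blocking send removes the deadlock. The buffer size N below is an arbitrary assumption chosen to exceed typical eager limits.

/* Two ranks exchange a large buffer. The commented-out version deadlocks
 * once the message is too big to be buffered; the MPI_Isend version is safe. */
#include <mpi.h>
#include <stdlib.h>

#define N 2000000   /* large enough to exceed the eager limit on most setups */

int main(int argc, char **argv)
{
    int rank, other;
    double *sendbuf, *recvbuf;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                       /* assumes exactly two ranks */
    sendbuf = malloc(N * sizeof(double));
    recvbuf = malloc(N * sizeof(double));

    /* Deadlock-prone: both ranks block in MPI_Send, each waiting for a
       receive that the other rank never reaches.
    MPI_Send(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    */

    /* Safe: post the send non-blocking, then receive, then wait. */
    MPI_Isend(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &req);
    MPI_Recv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}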

[OMPI users] ETH BTL

2007-10-31 Thread Mattijs Janssens
Sorry if this has already been discussed, am new to this list. I came across the ETH BTL from http://archiv.tu-chemnitz.de/pub/2006/0111/data/hoefler-CSR-06-06.pdf and was wondering whether this protocol is available / integrated into OpenMPI. Kind regards, Mattijs -- Mattijs Janssens