[O-MPI users] forrtl: severe (39): error during read, unit 5, file /dev/ptmx - OpenMPI 1.0.2

2006-01-31 Thread Konstantin Kudin
Hi all, The package Quantum Espresso (QE) 3.0 from www.pwscf.org works fine with MPICH 1.2.7 and the Intel compilers. QE also compiles fine with Open-MPI. However, when trying to run the CP program from the package with either Open-MPI v1.0.1 or the latest nightly build 1.0.2a…

Re: [O-MPI users] forrtl: severe (39): error during read, unit 5, file /dev/ptmx - OpenMPI 1.0.2

2006-02-01 Thread Konstantin Kudin
unit 5, file /dev/ptmx". If I add a keyword to the end of the file "END", and then never let the program see the end of file, it works fine. Is there a patch for Open-MPI that would fix that? Thanks! Konstantin --- Konstantin Kudin wrote: > Hi all, > > The package

[O-MPI users] Open-MPI all-to-all performance

2006-02-02 Thread Konstantin Kudin
Hi all, Please see the attached file for a detailed report of Open-MPI performance. Any fixes in the pipeline for that? Konstantin

Re: [O-MPI users] Open-MPI all-to-all performance

2006-02-02 Thread Konstantin Kudin
Hi all, There seem to have been problems with the attachment. Here is the report: I did some tests of Open-MPI version 1.0.2a4r8848. My motivation was an extreme degradation of all-to-all MPI performance on 8 cpus (ran like 1 cpu). At the same time, MPICH 1.2.7 on 8 cpus runs more like on 4 …
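For context, a measurement of this kind boils down to timing MPI_Alltoall over a range of per-rank message sizes. A rough, self-contained sketch of such a timing loop (the actual numbers in the report came from SKaMPI, not from this code):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Time MPI_Alltoall for a few per-rank message sizes and print the
 * average time per call on rank 0.  Rough sketch only; SKaMPI does
 * the real measurement with warm-up rounds and statistics. */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int iters = 50;
    for (int bytes = 256; bytes <= 256 * 1024; bytes *= 8) {
        char *sbuf = malloc((size_t)bytes * size);
        char *rbuf = malloc((size_t)bytes * size);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++)
            MPI_Alltoall(sbuf, bytes, MPI_BYTE,
                         rbuf, bytes, MPI_BYTE, MPI_COMM_WORLD);
        double t = (MPI_Wtime() - t0) / iters;

        if (rank == 0)
            printf("%8d bytes/rank : %10.1f us per alltoall\n",
                   bytes, t * 1e6);

        free(sbuf);
        free(rbuf);
    }
    MPI_Finalize();
    return 0;
}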

Re: [O-MPI users] Open-MPI all-to-all performance

2006-02-04 Thread Konstantin Kudin
Dear Jeff and Galen, I have tried openmpi-1.1a1r8890. The good news is that the freaky long latencies for certain packet sizes seem to have gone away with the same options that used to trigger them. Also, one version of all-to-all appears to behave nicer with a specified set of parameters. However, …

Re: [O-MPI users] Open-MPI all-to-all performance

2006-02-04 Thread Konstantin Kudin
Sorry, I forgot to specify everything properly in my previous e-mail:
mpirun -np 8 -mca btl tcp -mca coll self,basic,tuned \
       -mca mpi_paffinity_alone 1 skampi41
[first lines of the SKaMPI output for insyncol_MPI_Alltoall-nodes-long-SM follow] …

Re: [O-MPI users] Open-MPI all-to-all performance

2006-02-06 Thread Konstantin Kudin
chmark. > > Thanks, > > Galen > > > On Feb 4, 2006, at 9:37 AM, Konstantin Kudin wrote: > > > Dear Jeff and Galen, > > > > I have tried openmpi-1.1a1r8890. The good news is that it seems > like > > the freaky long latencies for certain pac

[O-MPI users] "alltoall" vs "alltoallv"

2006-02-07 Thread Konstantin Kudin
Hi all, I was wondering if it would be possible to use the same scheduling for "alltoallv" as for "alltoall". If one assumes messages of roughly the same size, then "alltoall" would not be an unreasonable approximation for "alltoallv". As is, it appears that in v1.1 "alltoallv" is done via a…
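The suggestion boils down to reusing the tuned alltoall schedule whenever the counts are (roughly) uniform. A hedged application-level sketch of the exactly-uniform case, where the two calls coincide (this is not Open MPI's internal collective code, and the helper name is made up):

#include <mpi.h>

/* If every send/recv count is equal and the displacements describe a
 * contiguous layout, MPI_Alltoallv degenerates into MPI_Alltoall, so
 * the better-tuned alltoall schedule can be reused.  Illustrative only. */
static int alltoallv_or_alltoall(void *sbuf, int *scounts, int *sdispls,
                                 void *rbuf, int *rcounts, int *rdispls,
                                 MPI_Datatype dtype, MPI_Comm comm)
{
    int size, uniform = 1;
    MPI_Comm_size(comm, &size);

    for (int i = 0; i < size && uniform; i++)
        uniform = (scounts[i] == scounts[0]) &&
                  (rcounts[i] == rcounts[0]) &&
                  (sdispls[i] == i * scounts[0]) &&
                  (rdispls[i] == i * rcounts[0]);

    if (uniform)
        return MPI_Alltoall(sbuf, scounts[0], dtype,
                            rbuf, rcounts[0], dtype, comm);

    return MPI_Alltoallv(sbuf, scounts, sdispls, dtype,
                         rbuf, rcounts, rdispls, dtype, comm);
}

The broader point of the message is that even when counts are only roughly equal, the alltoall schedule would still be a reasonable approximation, which would have to live inside the collective component rather than in user code like this.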

Re: [O-MPI users] forrtl: severe (39): error during read, unit 5, file /dev/ptmx - OpenMPI 1.0.2

2006-02-08 Thread Konstantin Kudin
…Jeff Squyres wrote: > Konstantin -- > I am able to replicate your error. Let me look into it and get back to you. > On Feb 1, 2006, at 12:16 PM, Konstantin Kudin wrote: >> Hi, …

[OMPI users] OpenMPI start up problems

2007-07-19 Thread Konstantin Kudin
All, I've run across a somewhat difficult code for OpenMPI to handle (CPMD). Here is the report on the versions I tried:
1.1.4 - mostly does not start
1.1.5 - works
1.2.3 - does not start
The machine has dual Opterons, with Gigabit. The running command with 4x2 cpus is: mpirun -np $np -mac…

[OMPI users] fresh benchmarks for "alltoall"

2006-02-23 Thread Konstantin Kudin
Hi all, I retested the very recent trunk with skampi 4.1. The "alltoall" works quite nicely up to 7 dual Opterons, whereas a bunch of isend+irecv's chokes. There appear to be some "special" effects related to the 1Gbit setup we are using (problems with broadcom adapters?), and unless there is a…
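For reference, the pattern being compared against the tuned collective is the naive exchange in which every rank posts a nonblocking send and receive to every other rank at once and then waits on all of them. A rough sketch of that approach (illustrative only, not the code used in the benchmark):

#include <mpi.h>
#include <stdlib.h>

/* Naive all-to-all: post isend/irecv pairs to every peer at once and
 * wait for all of them.  With many ranks and large messages this can
 * perform far worse than a scheduled MPI_Alltoall. */
static int naive_alltoall(const char *sbuf, char *rbuf, int bytes,
                          MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    MPI_Request *reqs = malloc(2 * (size_t)size * sizeof *reqs);
    int n = 0;

    for (int peer = 0; peer < size; peer++) {
        MPI_Irecv(rbuf + (size_t)peer * bytes, bytes, MPI_BYTE,
                  peer, 0, comm, &reqs[n++]);
        MPI_Isend((void *)(sbuf + (size_t)peer * bytes), bytes, MPI_BYTE,
                  peer, 0, comm, &reqs[n++]);
    }

    int rc = MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);
    free(reqs);
    return rc;
}

With many ranks and large messages over 1Gbit Ethernet, posting everything at once floods the NICs and switch buffers, which is one plausible reason this pattern chokes where the scheduled alltoall does not.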