Re: [OMPI users] Error: Unable to create the sub-directory (/tmp/openmpi etc...)

2013-12-17 Thread Reuti
Hi, on 17.12.2013 at 22:32, Brandon Turner wrote: > I've been struggling with this problem for a few days now and am out of > ideas. I am submitting a job using TORQUE on a beowulf cluster. One step > involves running mpiexec, and that is where this error occurs. I've found > some similar oth

[OMPI users] Error: Unable to create the sub-directory (/tmp/openmpi etc...)

2013-12-17 Thread Brandon Turner
Dear List, I've been struggling with this problem for a few days now and am out of ideas. I am submitting a job using TORQUE on a beowulf cluster. One step involves running mpiexec, and that is where this error occurs. I've found some similar other queries in the past: http://www.open-mpi.org/com
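The usual workaround for this class of error is to point Open MPI's session directory at a location that is writable on every node. A job-script fragment as a sketch, assuming TORQUE exports a per-job `$TMPDIR` on each node; `./a.out` is a hypothetical application binary:

```shell
# Redirect Open MPI's session directory away from /tmp using the
# orte_tmpdir_base MCA parameter; "$TMPDIR" is assumed to be the
# per-job scratch directory provided by TORQUE.
mpiexec --mca orte_tmpdir_base "$TMPDIR" ./a.out
```

This only helps if the target directory exists and is writable on every allocated node, which TORQUE's per-job scratch normally guarantees.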

Re: [OMPI users] Basic question on compiling fortran with windows computer

2013-12-17 Thread Jeff Squyres (jsquyres)
In the OMPI 1.6 series, the Fortran wrapper compilers are named "mpif77" and "mpif90". They were consolidated down to "mpifort" starting with OMPI 1.7. On Dec 17, 2013, at 2:18 PM, Johanna Schauer wrote: > Dear List, > > I have been looking for an answer everywhere, but I cannot find much on
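The version-to-wrapper mapping described above can be expressed as a small shell helper; a sketch, where `ompi_fort_wrapper` is a hypothetical name, not an Open MPI command:

```shell
# Return the Fortran wrapper compiler name for a given Open MPI version.
# "ompi_fort_wrapper" is a hypothetical helper, not part of Open MPI.
ompi_fort_wrapper() {
  case "$1" in
    1.[0-6]*) echo "mpif90" ;;   # 1.6 series and earlier: mpif77/mpif90
    *)        echo "mpifort" ;;  # 1.7 and later: one consolidated wrapper
  esac
}

ompi_fort_wrapper 1.6.5   # mpif90
ompi_fort_wrapper 1.7.3   # mpifort
```

In practice you would simply invoke whichever wrapper your installation provides, e.g. `mpif90 prog.f90` on 1.6 or `mpifort prog.f90` on 1.7.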

[OMPI users] Basic question on compiling fortran with windows computer

2013-12-17 Thread Johanna Schauer
Dear List, I have been looking for an answer everywhere, but I cannot find much on this topic. I have a Fortran code that uses Open MPI. Also, I have a Windows 8 computer. I have gfortran installed on my computer and it compiles just fine by itself. Now, I have downloaded and installed Open M

[OMPI users] CFP: 1st International Workshop on Cloud for Bio (C4Bio 2014) - in conjunction with CCGRID 2014

2013-12-17 Thread Javier Garcia Blas
Dear Sir or Madam, (We apologize if you receive multiple copies of this message) FIRST INTERNATIONAL WORKSHOP ON CLOUD FOR BIO (C4Bio) to be held as part of IEEE/ACM CCGrid 2014 Chicago, USA, May 26-29, 2014 http://www.arcos.inf.uc3

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Maxime Boissonneault
Hi, Do you have thread multiple support enabled in your Open MPI installation? Maxime Boissonneault On 2013-12-16 17:40, Noam Bernstein wrote: Has anyone tried to use openmpi 1.7.3 with the latest CentOS kernel (well, nearly latest: 2.6.32-431.el6.x86_64), and especially with infiniband? I'm seein
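The quickest way to answer that question is to ask the installation itself; a command fragment, assuming `ompi_info` from the installation in question is on `$PATH`:

```shell
# ompi_info reports how the installation was configured; the
# "Thread support" line shows whether MPI_THREAD_MULTIPLE was enabled.
ompi_info | grep -i "thread"
```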

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Ralph Castain
OMPI_MCA_hwloc_base_binding_policy=core On Dec 17, 2013, at 8:40 AM, Noam Bernstein wrote: > On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote: > >> Are you binding the procs? We don't bind by default (this will change in >> 1.7.4), and binding can play a significant role when comparing acro
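Since the binding policy is an MCA parameter, it can be set once in the job environment instead of on every command line; a job-script fragment:

```shell
# Equivalent of passing "--bind-to-core" to mpiexec: every mpiexec
# launched in this job inherits the binding policy from the environment.
export OMPI_MCA_hwloc_base_binding_policy=core
```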

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Noam Bernstein
On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote: > Are you binding the procs? We don't bind by default (this will change in > 1.7.4), and binding can play a significant role when comparing across kernels. > > add "--bind-to-core" to your cmd line Now that it works, is there a way to set it v

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Noam Bernstein
On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote: > Are you binding the procs? We don't bind by default (this will change in > 1.7.4), and binding can play a significant role when comparing across kernels. > > add "--bind-to-core" to your cmd line Yay - it works. Thank you very much for the

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Nathan Hjelm
On Tue, Dec 17, 2013 at 11:16:48AM -0500, Noam Bernstein wrote: > On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote: > > > Are you binding the procs? We don't bind by default (this will change in > > 1.7.4), and binding can play a significant role when comparing across > > kernels. > > > > add

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Noam Bernstein
On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote: > Are you binding the procs? We don't bind by default (this will change in > 1.7.4), and binding can play a significant role when comparing across kernels. > > add "--bind-to-core" to your cmd line I've previously always used mpi_paffinity_alo
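For comparison, the older and newer spellings of the same binding request look like this; a sketch, where `./a.out` is a placeholder binary and exact option names vary across Open MPI series:

```shell
# 1.6-series style: request binding via an MCA parameter.
mpiexec --mca mpi_paffinity_alone 1 ./a.out

# 1.7-series style: request binding with an explicit option.
mpiexec --bind-to-core ./a.out
```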

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread John Hearns
'htop' is a very good tool for looking at where processes are running.

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Ralph Castain
Are you binding the procs? We don't bind by default (this will change in 1.7.4), and binding can play a significant role when comparing across kernels. add "--bind-to-core" to your cmd line On Dec 17, 2013, at 7:09 AM, Noam Bernstein wrote: > On Dec 16, 2013, at 5:40 PM, Noam Bernstein > wr

Re: [OMPI users] slowdown with infiniband and latest CentOS kernel

2013-12-17 Thread Noam Bernstein
On Dec 16, 2013, at 5:40 PM, Noam Bernstein wrote: > > Once I have some more detailed information I'll follow up. OK - I've tried to characterize the behavior with vasp, which accounts for most of our cluster usage, and it's quite odd. I ran my favorite benchmarking job repeated 4 times. As yo