Hi,
On 17.12.2013 at 22:32, Brandon Turner wrote:
> I've been struggling with this problem for a few days now and am out of
> ideas. I am submitting a job using TORQUE on a beowulf cluster. One step
> involves running mpiexec, and that is where this error occurs. I've found
> some similar other queries in the past:
Dear List,
I've been struggling with this problem for a few days now and am out of
ideas. I am submitting a job using TORQUE on a beowulf cluster. One step
involves running mpiexec, and that is where this error occurs. I've found
some similar other queries in the past:
http://www.open-mpi.org/com
In the OMPI 1.6 series, the Fortran wrapper compilers are named "mpif77" and
"mpif90". They were consolidated down to "mpifort" starting with OMPI 1.7.
On Dec 17, 2013, at 2:18 PM, Johanna Schauer wrote:
> Dear List,
>
> I have been looking for an answer everywhere, but I cannot find much on
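For anyone tripped up by the naming change above, a minimal sketch of the two compile invocations; hello_mpi.f90 and hello_mpi.f are placeholder source files, and --showme just prints the underlying compiler command the wrapper would run:

    # Open MPI 1.6.x ships separate Fortran wrappers
    mpif90 -o hello_mpi hello_mpi.f90    # free-form Fortran 90 source
    mpif77 -o hello_mpi hello_mpi.f      # fixed-form Fortran 77 source

    # Open MPI 1.7 and later consolidate them into a single wrapper
    mpifort -o hello_mpi hello_mpi.f90

    # Either wrapper can report the compiler command it actually invokes
    mpifort --showme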
Dear List,
I have been looking for an answer everywhere, but I cannot find much on
this topic.
I have a Fortran code that uses Open MPI. Also, I have a Windows 8 computer.
I have gfortran installed on my computer and it compiles just fine by
itself.
Now, I have downloaded and installed Open MPI
Dear Sir or Madam,
(We apologize if you receive multiple copies of this message)
FIRST INTERNATIONAL WORKSHOP ON CLOUD FOR BIO (C4Bio)
to be held as part of IEEE/ACM CCGrid 2014
Chicago, USA, May 26-29, 2014
http://www.arcos.inf.uc3
Hi,
Do you have thread multiples enabled in your OpenMPI installation ?
Maxime Boissonneault
On 2013-12-16 17:40, Noam Bernstein wrote:
Has anyone tried to use openmpi 1.7.3 with the latest CentOS kernel
(well, nearly latest: 2.6.32-431.el6.x86_64), and especially with infiniband?
I'm seeing
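A quick way to answer that question, assuming the ompi_info that matches the installation in question is on your PATH, is to query the build configuration directly:

    # Report the thread support this Open MPI build was configured with;
    # look for MPI_THREAD_MULTIPLE in the "Thread support" line of the output
    ompi_info | grep -i thread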
OMPI_MCA_hwloc_base_binding_policy=core
On Dec 17, 2013, at 8:40 AM, Noam Bernstein wrote:
> On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote:
>
>> Are you binding the procs? We don't bind by default (this will change in
>> 1.7.4), and binding can play a significant role when comparing across kernels.
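To avoid typing that for every run, the same MCA parameter can be exported in a job script or stored in an MCA parameter file; ./my_app and the process count below are placeholders:

    # Per-job: export the parameter before launching
    export OMPI_MCA_hwloc_base_binding_policy=core
    mpirun -np 16 ./my_app

    # Persistent: add it (without the OMPI_MCA_ prefix) to the user-level file
    echo "hwloc_base_binding_policy = core" >> $HOME/.openmpi/mca-params.conf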
On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote:
> Are you binding the procs? We don't bind by default (this will change in
> 1.7.4), and binding can play a significant role when comparing across kernels.
>
> add "--bind-to-core" to your cmd line
Now that it works, is there a way to set it v
On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote:
> Are you binding the procs? We don't bind by default (this will change in
> 1.7.4), and binding can play a significant role when comparing across kernels.
>
> add "--bind-to-core" to your cmd line
Yeay - it works. Thank you very much for the
On Tue, Dec 17, 2013 at 11:16:48AM -0500, Noam Bernstein wrote:
> On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote:
>
> > Are you binding the procs? We don't bind by default (this will change in
> > 1.7.4), and binding can play a significant role when comparing across
> > kernels.
> >
> > add "--bind-to-core" to your cmd line
On Dec 17, 2013, at 11:04 AM, Ralph Castain wrote:
> Are you binding the procs? We don't bind by default (this will change in
> 1.7.4), and binding can play a significant role when comparing across kernels.
>
> add "--bind-to-core" to your cmd line
I've previously always used mpi_paffinity_alone
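For comparison, a sketch of how that older parameter is usually passed, next to the flag suggested above; ./my_app and the process count are placeholders:

    # Pre-1.7 style: per-process affinity via the MCA parameter
    mpirun -np 16 -mca mpi_paffinity_alone 1 ./my_app

    # 1.7.x style suggested earlier in the thread
    mpirun -np 16 --bind-to-core ./my_app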
'Htop' is a very good tool for looking at where processes are running.
Are you binding the procs? We don't bind by default (this will change in
1.7.4), and binding can play a significant role when comparing across kernels.
add "--bind-to-core" to your cmd line
On Dec 17, 2013, at 7:09 AM, Noam Bernstein wrote:
> On Dec 16, 2013, at 5:40 PM, Noam Bernstein wrote:
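A hedged example of that suggestion; the executable name (vasp, mentioned later in the thread) and the process count are placeholders, and --report-bindings is added only to print where each rank ends up:

    # Bind each MPI process to a core and report the resulting bindings
    mpirun -np 16 --bind-to-core --report-bindings ./vasp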
On Dec 16, 2013, at 5:40 PM, Noam Bernstein wrote:
>
> Once I have some more detailed information I'll follow up.
OK - I've tried to characterize the behavior with vasp, which accounts for
most of our cluster usage, and it's quite odd. I ran my favorite benchmarking
job repeated 4 times. As yo