suggested that I could override it by setting
TMP, TEMP or TEMPDIR, which I did to no avail.
From my experience on edison: the one environment variable that does
work is TMPDIR - the one that is not listed in the error message :-)
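FWIW, a minimal sketch of the workaround in shell form (/tmp/ompi-scratch is a
hypothetical path; substitute your site's scratch space):

```shell
# Point the temporary/session directory at a writable scratch area.
# /tmp/ompi-scratch is a placeholder path; use your site's scratch space.
export TMPDIR=/tmp/ompi-scratch
mkdir -p "$TMPDIR"
echo "$TMPDIR"
# then launch as usual, e.g.:  mpirun -n 4 ./a.out
```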
Can't help you with your mpirun problem though ...
/etc/sysconfig/torque_mom:
ulimit -d unlimited
ulimit -s unlimited
ulimit -n 32768
ulimit -l 2097152
or whatever you consider to be reasonable.
Cheers,
Martin
--
Martin Siegert
WestGrid/ComputeCanada
Simon Fraser University
Burnaby, British Columbia
On Wed, Jun 11, 2014 at 10:20:08PM +,
+1 even if cmake would make life easier for the developers, you may
want to consider those sysadmins/users who actually need to compile
and install the software. And for those, cmake is a nightmare. Every time
I run into a software package that uses cmake it makes me cringe.
gromacs is t
MCA v2.0, API v2.0, Component v1.6.4)
MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
does not give any indication that this is actually used.
Cheers,
Martin
--
Martin Siegert
WestGrid/ComputeCanada
Simon Fraser University
le Fortran programs with the appropriate I8FLAG),
but it is an unnecessary complication: I have not encountered a piece
of software other than dirac that requires this.
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid/ComputeCanada Site Lead
Simon Fraser University
Burnaby, Br
trancy threaded", but that does not work when the underlying compiler
is changed, e.g., OMPI_FC=gfortran.
Cheers,
Martin
--
Martin Siegert
Simon Fraser University
Burnaby, British Columbia
Canada
> Is -pthread really needed? Is there a configure option to change this
> or should h
On Wed, Jun 27, 2012 at 02:30:11PM -0400, Jeff Squyres wrote:
> On Jun 27, 2012, at 2:25 PM, Martin Siegert wrote:
>
> >> http://www.open-mpi.org/~jsquyres/unofficial/openmpi-1.6.1ticket3131r26612M.tar.bz2
> >
> > Thanks! I tried this and, indeed, the program (I tested
Hi Jeff,
On Wed, Jun 20, 2012 at 04:16:12PM -0400, Jeff Squyres wrote:
> On Jun 20, 2012, at 3:36 PM, Martin Siegert wrote:
>
> > by now we know of three programs - dirac, wrf, quantum espresso - that
> > all hang with openmpi-1.4.x (have not yet checked with openmpi-1.6)
file openmpi-mca-params.conf.
What is the reason that this is not the default in the first place?
Are there any negative effects?
Cheers,
Martin
On Thu, May 03, 2012 at 11:01:30PM -0700, Martin Siegert wrote:
> On Tue, Apr 24, 2012 at 04:19:31PM -0400, Brock Palen wrote:
> > To throw in
s/2011/07/16996.php
- Martin
> On Apr 24, 2012, at 3:09 PM, Jeffrey Squyres wrote:
>
> > Could you repeat your tests with 1.4.5 and/or 1.5.5?
> >
> >
> > On Apr 23, 2012, at 1:32 PM, Martin Siegert wrote:
> >
> >> Hi,
> >>
> >&
appreciated ... I already spent a
huge amount of time on this and I am running out of ideas.
Cheers,
Martin
--
Martin Siegert
Simon Fraser University
Burnaby, British Columbia
Canada
subroutine myMPI_Allreduce(sendbuf, recvbuf, cnt, datatype, op, comm, mpierr)
implicit none
include 'mpif.h'
ny of these (in decreasing order) over passwordless ssh keys.
Cheers,
Martin
--
Martin Siegert
Simon Fraser University
Burnaby, British Columbia
Thanks, Jeff, for the details!
On Sat, Sep 24, 2011 at 07:26:49AM -0400, Jeff Squyres wrote:
> On Sep 22, 2011, at 11:06 PM, Martin Siegert wrote:
>
> > I am trying to figure out how openmpi (1.4.3) sets its PATH
> > for executables. From the man page:
> >
> &g
PATH on the node where mpiexec is running
is used as the PATH on all nodes (by default). Or is there a reason why
that is a really bad idea?
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid/ComputeCanada Site Lead
IT Services                      phone: 778 782-4691
mpi.spawn.Rslaves() call.
BTW: the whole script works in the same way when submitting under torque
using the TM interface and without specifying -hostfile ... on the
mpiexec command line.
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid/ComputeCanada Site Lead
IT Services
re calling
MPI_Send with a count argument of -2147483648,
which could result in a segmentation fault.
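-2147483648 is exactly what 2^31 becomes when a count computed in 64-bit
arithmetic is narrowed to the 32-bit int that MPI_Send expects. A quick sanity
check of that wrap-around (shell arithmetic is 64-bit, so the two's-complement
wrap is spelled out explicitly):

```shell
# 2^31 elements overflows a signed 32-bit count argument;
# two's-complement wrap maps 2147483648 to 2147483648 - 2^32.
echo $(( 2147483648 - (1 << 32) ))
```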
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid/ComputeCanada Site Lead
IT Services                      phone: 778 782-4691
Simon Fraser University
example?
Thanks in advance!
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid/ComputeCanada Site Lead
IT Services                      phone: 778 782-4691
Simon Fraser University          fax: 778 782-4242
Burnaby, British Columbia        email: sieg
Hi,
I am creating a new thread (was: MPI_Allreduce on local machine).
On Wed, Jul 28, 2010 at 05:07:29PM -0400, Gus Correa wrote:
> Still, the alignment under Intel may or may not be right.
> And this may or may not explain the errors that Hugo has got.
>
> FYI, the ompi_info from my OpenMPI 1.3.
On Wed, Jul 28, 2010 at 01:05:52PM -0700, Martin Siegert wrote:
> On Wed, Jul 28, 2010 at 11:19:43AM -0400, Gus Correa wrote:
> > Hugo Gagnon wrote:
> >> Hi Gus,
> >> Ompi_info --all lists its info regarding fortran right after C. In my
> >> ca
give the correct results:
program types
   use mpi
   implicit none
   integer :: mpierr, size
   call MPI_Init(mpierr)
   call MPI_Type_size(MPI_DOUBLE_PRECISION, size, mpierr)
   print *, 'double precision size: ', size
   call MPI_Finalize(mpierr)
end program types

mpif90 -g types.f90
mpiexec -n 1 ./a.out
Yes, I am quite sure that you need at least 16GB to run SPEC MPIM2007.
See the FAQ at http://www.spec.org/mpi2007/docs/faq.html#MemoryMedium
Furthermore, the benchmark is designed to run on at least 16p.
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid Site Lead
IT Services
,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid Site Lead
IT Services                      phone: 778 782-4691
Simon Fraser University          fax: 778 782-4242
Burnaby, British Columbia        email: sieg...@sfu.ca
Canada V5A 1S6
On Wed, Apr 28, 2010 at 05
does that work?
- will this lead to an oversubscription of nodes
or
- will the creation of additional threads fail, if the number of
processes on a node has reached the process count on that node
assigned through torque?
How is this done properly?
Thanks!
Cheers,
Martin
--
Martin Siegert
2368E-05
4789 -0.3909323E+00 -0.4560614E-04
4790 -0.3907985E+00 -0.8639889E-04
4791 -0.3906647E+00 -0.1271607E-03
In other words: I do not see a problem.
This is with openmpi-1.3.3, scalapack-1.8.0, mpiblacs-1.1p3,
ifort-11.1.038, mkl-10.2.0.013.
Cheers,
Martin
--
On Wed, Nov 11, 2009 at 07:49:25AM -0700, Blosch, Edwin L wrote:
> Thanks for the reply, Jeff,
>
> I think -i-static is an Intel 9 option, but unfortunately it didn't make a
> difference switching to -static-intel:
>
>
> libtool: link: /appserv/intel/cce/10.1.021/bin/icc -DNDEBUG
> -finline-fu
coll_tuned_util.c) is executed.
Any ideas how to fix this?
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid Site Lead
IT Services                      phone: 778 782-4691
Simon Fraser University          fax: 778 782-4242
Burnaby, British Columbia
ere I can specify MPI_IN_PLACE. Unfortunately, the standard
says nothing about the other processes in this case. Do I need a valid
receive buffer there?
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid Site Lead
IT Services                      phone: 778 78
Hi Vincent,
On Mon, Nov 09, 2009 at 11:45:29AM +0100, Vincent Loechner wrote:
>
> Martin,
>
> > I expect problems with sizes larger than 2^31-1, but these array sizes
> > are still much smaller.
> No, they are bigger, you allocate two arrays of 320 Mdouble :
> 2 * 320M * 8 = 5GB.
>
> Are your p
orted here).
--
All programs/libraries are 64-bit, the interconnect is IB.
I expect problems with sizes larger than 2^31-1, but these array sizes
are still much smaller.
What is the problem here?
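For what it's worth, a back-of-the-envelope check (assuming two arrays of
320M double-precision elements, as discussed above): the element counts stay
below 2^31-1, but the per-array byte counts do not, so any 32-bit size
arithmetic inside the library could still overflow:

```shell
n=$(( 320 * 1000 * 1000 ))   # elements per array (320M doubles, assumed)
per_array=$(( n * 8 ))       # bytes per array of double precision
echo "$per_array bytes per array, $(( 2 * per_array )) bytes total"
[ "$n" -lt $(( (1 << 31) - 1 )) ] && echo "element count fits in a 32-bit int"
[ "$per_array" -gt $(( (1 << 31) - 1 )) ] && echo "byte count does not"
```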
Cheers,
Martin
--
Martin Si
:jm, and the third starts at 1) then you should be able
to do
call MPI_Allreduce(array(1,1,1,nl,0,ng), phim(1,1,1,nl,0,ng), &
im*jm*kmloc(coords(2)+1), MPI_REAL, MPI_SUM, &
ang_com, ierr)
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
W
switch between versions as
> required.
>
> Jim
>
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Martin Siegert
> Sent: Friday, July 17, 2009 3:29 PM
> To: Open MPI Users
> Subject: [OMPI users] ifort a
another way of accomplishing this?
Cheers,
Martin
--
Martin Siegert
Head, Research Computing
WestGrid Site Lead
IT Services                      phone: 778 782-4691
Simon Fraser University          fax: 778 782-4242
Burnaby, British Columbia email: sieg