n should I supply to resolve this problem?
Sincerely,
Tom Rosmond
rtran / worked kinda
by accident. Recall that F08 is very, very strongly typed (even more so than
C++). Meaning: being picky about 1D-and-a-specific-length is a *feature*!
(yeah, it's kind of a PITA, but it really does help prevent bugs)
On Mar 23, 2017, at 1:06 PM, Tom Rosmond wrote:
Hello,
Attached is a simple MPI program demonstrating a problem I have
encountered with 'MPI_Type_create_hindexed' when compiling with the
'mpi_f08' module. There are 2 blocks of code that are only different
in how the length and displacement arrays are declared. I get
indx.f90(50): error
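For readers who hit the same compile error: with mpi_f08 the displacement argument must be a rank-1 INTEGER(KIND=MPI_ADDRESS_KIND) array and the block-length argument a rank-1 default INTEGER array of the same length. A minimal sketch follows (not the attached indx.f90, whose details are truncated above; the sizes and displacements are illustrative):

program hindexed_f08_sketch
  use mpi_f08
  implicit none
  integer, parameter :: n = 4
  integer :: blocklens(n), i, ierr
  integer(kind=MPI_ADDRESS_KIND) :: displs(n)
  type(MPI_Datatype) :: newtype

  call MPI_Init(ierr)
  do i = 1, n
     blocklens(i) = 1
     ! byte displacements, 8 bytes apart (hypothetical layout)
     displs(i) = 8_MPI_ADDRESS_KIND * (i - 1)
  end do
  ! mpi_f08 rejects scalar or wrongly-kinded arguments here at compile time
  call MPI_Type_create_hindexed(n, blocklens, displs, MPI_DOUBLE_PRECISION, newtype, ierr)
  call MPI_Type_commit(newtype, ierr)
  call MPI_Type_free(newtype, ierr)
  call MPI_Finalize(ierr)
end program hindexed_f08_sketch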
Gilles,
Yes, I found that definition about 5 minutes after I posted the
question. Thanks for the response.
Tom
On 03/22/2017 03:47 PM, Gilles Gouaillardet wrote:
Tom,
what if you use
type(mpi_datatype) :: mpiint
Cheers,
Gilles
On Thursday, March 23, 2017, Tom Rosmond wrote:
Hello;
I am converting some fortran 90/95 programs from the 'mpif.h' include
file to the 'mpi_f08' model and have encountered a problem. Here is a
simple test program that demonstrates it:
program testf08
!
Open MPI master
/* i replaced shar_mem with fptr_mem */
Cheers,
Gilles
On 10/26/2016 3:29 AM, Tom Rosmond wrote:
All:
I am trying to understand the use of the shared memory features of
MPI-3 that allow direct sharing of the memory space of on-node
processes. Attached are 2 small test
se, but my suspicion is that this is
a program correctness issue. I can't point to any error, but I've
ruled out the obvious alternatives.
Jeff
On Tue, Oct 25, 2016 at 11:29 AM, Tom Rosmond wrote:
All:
I am trying to understand the use
All:
I am trying to understand the use of the shared memory features of MPI-3
that allow direct sharing of the memory space of on-node processes.
Attached are 2 small test programs, one written in C (testmpi3.c), the
other F95 (testmpi3.f90). They are solving the identical 'halo'
exchange
Thanks for replying, but the difference between what can be done in C vs
fortran is still my problem. I apologize for my rudimentary
understanding of C, but here is a brief summary:
In my originally attached C-program 'testmpi3.c' we have:
int **shar_pntr : declare pointer variable (a point
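For anyone following the thread later, this is the pattern under discussion, sketched against the mpi_f08 module, where the baseptr arguments are TYPE(C_PTR); with the older mpi module the same approach works where the C_PTR overloads are provided. Names and sizes are illustrative, not taken from testmpi3.f90:

program shm_sketch
  use mpi_f08
  use, intrinsic :: iso_c_binding
  implicit none
  integer, parameter :: n = 1000
  integer :: ierr, node_rank, disp_unit
  integer(kind=MPI_ADDRESS_KIND) :: wsize
  type(MPI_Comm) :: node_comm
  type(MPI_Win)  :: win
  type(c_ptr) :: baseptr
  real, pointer :: fptr_mem(:)

  call MPI_Init(ierr)
  ! communicator of the ranks that can share memory on this node
  call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                           MPI_INFO_NULL, node_comm, ierr)
  call MPI_Comm_rank(node_comm, node_rank, ierr)

  ! every rank contributes a segment of n default reals (4 bytes each assumed)
  wsize = int(n, MPI_ADDRESS_KIND) * 4_MPI_ADDRESS_KIND
  call MPI_Win_allocate_shared(wsize, 4, MPI_INFO_NULL, node_comm, baseptr, win, ierr)

  ! address rank 0's segment through a Fortran pointer
  call MPI_Win_shared_query(win, 0, wsize, disp_unit, baseptr, ierr)
  call c_f_pointer(baseptr, fptr_mem, [n])

  call MPI_Win_fence(0, win, ierr)
  if (node_rank == 0) fptr_mem = 42.0   ! rank 0 writes, everyone on the node can read
  call MPI_Win_fence(0, win, ierr)

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program shm_sketch

The key step on the Fortran side is MPI_Win_shared_query plus c_f_pointer, which plays the role of the bare C pointer assignment in the C version.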
Hello,
I am trying to port a simple halo exchange program from C to fortran.
It is designed to demonstrate the shared memory features of MPI-3. The
original C program was downloaded from an Intel site, and I have modified
it to simplify the port. A tarfile of a directory with each program and
Gilles,
Yes, that solved the problem. Thanks for the help. I assume this fix
will be in the next official release, i.e. 1.10.3?
Tom Rosmond
On 04/13/2016 05:07 PM, Gilles Gouaillardet wrote:
Tom,
I was able to reproduce the issue with an older v1.10 version, but not
with current v1.10
Hello,
In this thread from the Open-MPI archives:
https://www.open-mpi.org/community/lists/devel/2014/03/14416.php
a strange problem with a system call is discussed, and claimed to be
solved. However, in running a simple test program with some new MPI-3
functions, the problem seems to be bac
Hello,
I have been looking into the MPI-3 extensions that added ways to do
direct memory copying on multi-core 'nodes' that share memory.
Architectures constructed from these nodes are universal now, so
improved ways to exploit them are certainly needed. However, it is my
understanding that
Hello,
The benefits of 'using' the MPI module over 'including' MPIF.H are clear
because of the sanity checks it performs, and I recently did some
testing with the module that seems to uncover a possible bug or design
flaw in OpenMPI's handling of arrays in user-defined data types.
Attached a
Actually, you are not the first to encounter the problem with
'MPI_Type_indexed' for very large datatypes. I also run with a 1.6
release, and solved the problem by switching to
'MPI_Type_Create_Hindexed' for the datatype. The critical difference is
that the displacements for 'MPI_type_indexed' ar
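To make the difference concrete, here is a hedged sketch (illustrative, not the poster's code): MPI_Type_indexed takes default-INTEGER displacements in units of the old type, which overflow beyond 2**31-1, while MPI_Type_create_hindexed takes byte displacements of KIND=MPI_ADDRESS_KIND, which stay representable for very large layouts.

! contrast of the two declarations; nblk, blocklens and elem_offsets are illustrative
subroutine build_types(nblk, blocklens, elem_offsets, type_idx, type_hidx)
  use mpi
  implicit none
  integer, intent(in)  :: nblk, blocklens(nblk), elem_offsets(nblk)
  integer, intent(out) :: type_idx, type_hidx
  integer :: ierr
  integer(kind=MPI_ADDRESS_KIND) :: byte_displs(nblk), lb, extent

  ! default-integer element displacements: limited to 2**31-1
  call MPI_Type_indexed(nblk, blocklens, elem_offsets, MPI_REAL, type_idx, ierr)

  ! byte displacements with MPI_ADDRESS_KIND: safe for very large datatypes
  call MPI_Type_get_extent(MPI_REAL, lb, extent, ierr)
  byte_displs = int(elem_offsets, MPI_ADDRESS_KIND) * extent
  call MPI_Type_create_hindexed(nblk, blocklens, byte_displs, MPI_REAL, type_hidx, ierr)

  call MPI_Type_commit(type_idx, ierr)
  call MPI_Type_commit(type_hidx, ierr)
end subroutine build_types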
With array bounds checking your program returns an out-of-bounds error
in the mpi_isend call at line 104. Looks like 'send_request' should be
indexed with 'sendcount', not 'icount'.
T. Rosmond
On Thu, 2015-01-08 at 20:28 +0100, Diego Avesani wrote:
> the attachment
>
> Diego
>
>
>
> On 8 J
4-07-21 at 13:37 -0500, Rob Latham wrote:
>
> On 07/20/2014 04:23 PM, Tom Rosmond wrote:
> > Hello,
> >
> > For several years I have successfully used MPIIO in a Fortran global
> > atmospheric ensemble data assimilation system. However, I always
> > wonder
Hello,
For several years I have successfully used MPIIO in a Fortran global
atmospheric ensemble data assimilation system. However, I always
wondered if I was fully exploiting the power of MPIIO, specifically by
using derived data types to better describe memory and file data
layouts. All of my
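As an illustration of the kind of derived-type description in question, here is a sketch under assumed conditions (a 2-D block decomposition with illustrative argument names, not code from the assimilation system): MPI_Type_create_subarray describes each rank's patch of the global array, and that type becomes the file view.

! each rank owns a local nxl x nyl patch of a nxg x nyg global real array
subroutine write_patch(fname, nxg, nyg, nxl, nyl, ix0, iy0, patch)
  use mpi
  implicit none
  character(len=*), intent(in) :: fname
  integer, intent(in) :: nxg, nyg, nxl, nyl, ix0, iy0   ! global/local sizes, 0-based starts
  real, intent(in) :: patch(nxl, nyl)
  integer :: ierr, fh, filetype
  integer(kind=MPI_OFFSET_KIND) :: disp

  ! filetype = this rank's rectangle inside the global array, Fortran order
  call MPI_Type_create_subarray(2, [nxg, nyg], [nxl, nyl], [ix0, iy0], &
                                MPI_ORDER_FORTRAN, MPI_REAL, filetype, ierr)
  call MPI_Type_commit(filetype, ierr)

  call MPI_File_open(MPI_COMM_WORLD, fname, MPI_MODE_WRONLY + MPI_MODE_CREATE, &
                     MPI_INFO_NULL, fh, ierr)
  disp = 0_MPI_OFFSET_KIND
  call MPI_File_set_view(fh, disp, MPI_REAL, filetype, 'native', MPI_INFO_NULL, ierr)
  call MPI_File_write_all(fh, patch, nxl*nyl, MPI_REAL, MPI_STATUS_IGNORE, ierr)
  call MPI_File_close(fh, ierr)
  call MPI_Type_free(filetype, ierr)
end subroutine write_patch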
What Fortran compiler is your OpenMPI built with? Some Fortran compilers don't
understand MPI_IN_PLACE. Do a 'fortran MPI_IN_PLACE' search to see
several instances.
T. Rosmond
On Sat, 2013-09-07 at 10:16 -0400, Hugo Gagnon wrote:
> Nope, no luck. My environment is:
>
> OpenMPI 1.6.5
> gcc 4.8.1
> M
Just as an experiment, try replacing
use mpi
with
include 'mpif.h'
If that fixes the problem, you can confront the OpenMPI experts.
T. Rosmond
On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote:
> Thanks for the input but it still doesn't work for me... Here's the
> version without MPI
I'm afraid I can't answer that. Here's my environment:
OpenMPI 1.6.1
IFORT 12.0.3.174
Scientific Linux 6.4
What fortran compiler are you using?
T. Rosmond
On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote:
> Thanks for the input but it still doesn't work for me... Here's the
> ver
Hello,
Your syntax defining 'a' is not correct. This code works correctly.
program test
use mpi
integer :: ierr, myrank, a(2) = 0
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
if (myrank == 0) then
   a(1) = 1
   a(2) = 2
else
   a(1) = 3
   a(2) = 4
endif
! the archived message is truncated at this point; the reduction op (MPI_SUM)
! and the remainder of the program are reconstructed to complete the example
call MPI_Allreduce(MPI_IN_PLACE, a, 2, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, ierr)
print *, 'rank', myrank, 'a =', a
call MPI_Finalize(ierr)
end program test
2/2013 11:33 AM, Tom Rosmond wrote:
> >> Thanks for the confirmation of the MPIIO problem. Interestingly, we
> >> have the same problem when using MPIIO in INTEL MPI. So something
> >> fundamental seems to be wrong.
> >>
> >
> > I think but I am not su
Hello:
A colleague and I are running an atmospheric ensemble data assimilation
system using MPIIO. We find that if the block of data read by an individual
MPI_FILE_READ_AT_ALL exceeds 2**31 elements, the
program fails. Our application is 32 bit fortran (Intel), so we
certainly can see why this
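A common workaround, sketched below under the assumption that each read is a contiguous run of reals whose length is a multiple of the chunk size, is to commit a contiguous derived type so the count passed to MPI_FILE_READ_AT_ALL stays far below the default-INTEGER limit:

! read nbig reals starting at byte offset 'offset' without a count overflow
subroutine big_read(fh, offset, nbig, buf)
  use mpi
  implicit none
  integer, intent(in) :: fh
  integer(kind=MPI_OFFSET_KIND), intent(in) :: offset
  integer(kind=MPI_OFFSET_KIND), intent(in) :: nbig    ! may exceed 2**31-1
  real, intent(out) :: buf(*)
  integer, parameter :: chunk = 2**20                  ! 1 Mi elements per derived-type unit
  integer :: ierr, bigtype, nunits

  ! assumes nbig is an exact multiple of chunk, purely for brevity
  nunits = int(nbig / chunk)
  call MPI_Type_contiguous(chunk, MPI_REAL, bigtype, ierr)
  call MPI_Type_commit(bigtype, ierr)
  call MPI_File_read_at_all(fh, offset, buf, nunits, bigtype, MPI_STATUS_IGNORE, ierr)
  call MPI_Type_free(bigtype, ierr)
end subroutine big_read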
I just built Openmpi 1.6.1 with the '--with-libnuma=(dir)' option and got a
'WARNING: unrecognized options' message. I am running on a NUMA
architecture and have needed this feature with earlier Openmpi releases.
Is the support now native in the 1.6 versions? If not, what should I
do?
T. Rosmond
Will do. My machine is currently quite busy, so it will be a while
before I get answers. Stay tuned.
T. Rosmond
On Tue, 2012-04-24 at 13:36 -0600, Ralph Castain wrote:
> Add --display-map to your mpirun cmd line
>
> On Apr 24, 2012, at 1:33 PM, Tom Rosmond wrote:
>
> > Jef
. Is there an environmental variable or an MCA option I can
add to my 'mpirun' command line that would give that to me? I am
running 1.5.4.
T. Rosmond
On Tue, 2012-04-24 at 15:11 -0400, Jeffrey Squyres wrote:
> On Apr 24, 2012, at 3:01 PM, Tom Rosmond wrote:
>
> > My que
We have a large ensemble-based atmospheric data assimilation system that
does a 3-D Cartesian partitioning of the 'domain' using MPI_DIMS_CREATE,
MPI_CART_CREATE, etc. Two of the dimensions are spatial, i.e. latitude
and longitude; the third is an 'ensemble' dimension, across which
subsets of ense
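For context, a minimal sketch of that style of decomposition (the periodicity and the choice of which dimension is the 'ensemble' one are assumptions, not taken from the actual system):

program cart_sketch
  use mpi
  implicit none
  integer :: ierr, nprocs, rank, cart_comm, ens_comm
  integer :: dims(3), coords(3)
  logical :: periods(3), keep(3)

  call MPI_Init(ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  dims = 0                              ! let MPI pick a balanced 3-D factorization
  call MPI_Dims_create(nprocs, 3, dims, ierr)
  periods = [.true., .false., .false.]  ! e.g. periodic in longitude only (assumed)
  call MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, .true., cart_comm, ierr)
  call MPI_Comm_rank(cart_comm, rank, ierr)
  call MPI_Cart_coords(cart_comm, rank, 3, coords, ierr)  ! (lon, lat, ensemble) indices

  ! sub-communicator that spans only the third ("ensemble") dimension
  keep = [.false., .false., .true.]
  call MPI_Cart_sub(cart_comm, keep, ens_comm, ierr)

  call MPI_Finalize(ierr)
end program cart_sketch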
1AM -0800, Tom Rosmond wrote:
> > With all of this, here is my MPI related question. I recently added an
> > option to use MPI-IO to do the heavy IO lifting in our applications. I
> > would like to know what the relative importance of the dedicated MPI
> > network vis
Recently the organization I work for bought a modest sized Linux cluster
for running large atmospheric data assimilation systems. In my
experience a glaring problem with systems of this kind is poor IO
performance. Typically they have 2 types of network: 1) A high speed,
low latency, e.g. Infiniband
chasing this down
> > a couple of years ago. See orte/test/mpi/bcast_loop.c
> >
> >
> > On Nov 29, 2011, at 9:35 AM, Jeff Squyres wrote:
> >
> >> That's quite weird/surprising that you would need to set it down to *5* --
> >> that's reall
ke your pick - inserting a barrier before or after doesn't seem to make a
> lot of difference, but most people use "before". Try different values until
> you get something that works for you.
>
>
> On Nov 14, 2011, at 3:10 PM, Tom Rosmond wrote:
>
> > Hel
Hello:
A colleague and I have been running a large F90 application that does an
enormous number of mpi_bcast calls during execution. I deny any
responsibility for the design of the code and why it needs these calls,
but it is what we have inherited and have to work with.
Recently we ported the c
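One crude mitigation for a broadcast-heavy code like that is to throttle the stream with an occasional barrier; a hedged sketch (the interval of 1000 is arbitrary):

subroutine bcast_with_throttle(nbcast, buf, n)
  use mpi
  implicit none
  integer, intent(in) :: nbcast, n
  real, intent(inout) :: buf(n)
  integer, parameter :: sync_every = 1000   ! arbitrary interval
  integer :: ierr, k

  do k = 1, nbcast
     call MPI_Bcast(buf, n, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
     ! an occasional barrier keeps slower ranks from drowning in unexpected messages
     if (mod(k, sync_every) == 0) call MPI_Barrier(MPI_COMM_WORLD, ierr)
  end do
end subroutine bcast_with_throttle

The replies quoted above appear to be tuning Open MPI's coll/sync component, which inserts this kind of barrier automatically.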
On Mon, 2011-08-29 at 14:22 -0500, Rob Latham wrote:
> On Mon, Aug 22, 2011 at 08:38:52AM -0700, Tom Rosmond wrote:
> > Yes, we are using collective I/O (mpi_file_write_at_all,
> > mpi_file_read_at_all). The swaping of fortran and mpi-io are just
> > branches in the code a
On Mon, 2011-08-22 at 10:23 -0500, Rob Latham wrote:
> On Thu, Aug 18, 2011 at 08:46:46AM -0700, Tom Rosmond wrote:
> > We have a large fortran application designed to run doing IO with either
> > mpi_io or fortran direct access. On a linux workstation (16 AMD cores)
> >
We have a large fortran application designed to run doing IO with either
mpi_io or fortran direct access. On a linux workstation (16 AMD cores)
running openmpi 1.5.3 and Intel fortran 12.0 we are having trouble with
random failures with the mpi_io option which do not occur with
conventional fortra
Rob,
Thanks for the clarification. I had seen that point about
non-decreasing offsets in the standard and it was just beginning to dawn
on me that maybe it was my problem. I will rethink my mapping strategy
to comply with the restriction. Thanks again.
T. Rosmond
On Tue, 2011-05-24 at 10:09
is the >72 character lines,
> but then when that is gone, I'm not sure how the allocatable stuff fits in...
> (I'm not enough of a Fortran programmer to know)
>
Anyone else out there who can comment?
T. Rosmond
>
> On May 10, 2011, at 7:14 PM, Tom Rosmond
I would appreciate it if someone with experience with MPI-IO would look at the
simple fortran program gzipped and attached to this note. It is
embedded in a script, so all that is necessary to run it is to execute
'testio' from the command line. The program generates a small 2-D input
array, sets up an MPI-IO e
still does the job.
T. Rosmond
On Thu, 2011-01-06 at 14:52 -0600, Rob Latham wrote:
> On Tue, Dec 21, 2010 at 06:38:59PM -0800, Tom Rosmond wrote:
> > I use the function MPI_FILE_SET_VIEW with the 'native'
> > data representation and correctly write a file with MPI_FILE_WRI
I have been experimenting with some simple fortran test programs to
write files with some of the MPI-IO functions, and have come across a
troubling issue. I use the function MPI_FILE_SET_VIEW with the 'native'
data representation and correctly write a file with MPI_FILE_WRITE_ALL.
However, if I ch
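A stripped-down version of the write side of that pattern (illustrative file name and sizes, not the attached test program); the data representation is just the character argument of MPI_FILE_SET_VIEW, so switching away from 'native' is a one-word change there:

program setview_sketch
  use mpi
  implicit none
  integer, parameter :: n = 100
  real :: a(n)
  integer :: ierr, rank, fh
  integer(kind=MPI_OFFSET_KIND) :: disp

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  a = real(rank)

  call MPI_File_open(MPI_COMM_WORLD, 'view_test.dat', &
                     MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
  ! each rank writes its own contiguous block of n reals (4-byte reals assumed)
  disp = int(rank, MPI_OFFSET_KIND) * n * 4_MPI_OFFSET_KIND
  call MPI_File_set_view(fh, disp, MPI_REAL, MPI_REAL, 'native', MPI_INFO_NULL, ierr)
  call MPI_File_write_all(fh, a, n, MPI_REAL, MPI_STATUS_IGNORE, ierr)
  call MPI_File_close(fh, ierr)
  call MPI_Finalize(ierr)
end program setview_sketch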
Rosmond
On Fri, 2010-12-17 at 15:47 -0600, Rob Latham wrote:
> On Wed, Dec 15, 2010 at 01:21:35PM -0800, Tom Rosmond wrote:
> > I want to implement an MPI-IO solution for some of the IO in a large
> > atmospheric data assimilation system. Years ago I got some small
> > demo
I want to implement an MPI-IO solution for some of the IO in a large
atmospheric data assimilation system. Years ago I got some small
demonstration Fortran programs (I think from Bill Gropp) that seem to
be good candidate prototypes for what I need. Two of them are attached
as part of simple she
It's compiler specific I think. I've done this with OpenMPI no
> > problem, however on one another cluster with ifort I've gotten error
> > messages about not using MPI_IN_PLACE. So I think if it compiles,
> > it should work fine.
> >
> > On Thu, Sep 16, 2010
I am working with a Fortran 90 code with many MPI calls like this:
call mpi_gatherv(x,nsize(rank+1),
mpi_real,x,nsize,nstep,mpi_real,root,mpi_comm_world,mstat)
'x' is allocated on root to be large enough to hold the results of the
gather, other arrays and parameters are defined correctly, an
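The usual fix for aliasing the send and receive buffers like this is MPI_IN_PLACE on the root; here is a sketch with made-up declarations, since the rest of the program is cut off above:

! gather variable-length pieces into x on root without aliasing (illustrative sizes)
subroutine gather_inplace(x, nsize, nstep, rank, root, nprocs)
  use mpi
  implicit none
  integer, intent(in) :: rank, root, nprocs
  integer, intent(in) :: nsize(nprocs), nstep(nprocs)
  real, intent(inout) :: x(*)
  integer :: mstat

  if (rank == root) then
     ! send buffer replaced by MPI_IN_PLACE; send count/type are ignored on root
     call mpi_gatherv(MPI_IN_PLACE, 0, mpi_real, x, nsize, nstep, mpi_real, &
                      root, mpi_comm_world, mstat)
  else
     call mpi_gatherv(x, nsize(rank+1), mpi_real, x, nsize, nstep, mpi_real, &
                      root, mpi_comm_world, mstat)
  end if
end subroutine gather_inplace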
Your Fortran call to 'mpi_bcast' needs an error-return (ierror) argument at the end of
the argument list. Also, 'MPI_INT' is not correct for
Fortran; it should be 'MPI_INTEGER'. With these changes the program
works OK.
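That is, assuming the variable being broadcast is a default-integer scalar (the original program is not shown here), the call would look something like:

      integer :: ival, ierr
      ! last argument returns the error code; MPI_INTEGER is the Fortran integer type
      call MPI_Bcast(ival, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)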
T. Rosmond
On Fri, 2010-05-21 at 11:40 +0200, Pankatz, Klaus wrote:
> Hi folks,
>
>
> You might do a "man orte_hosts" (assuming you installed the man pages)
> and see what it says.
>
> Ralph
>
> On Nov 10, 2009, at 2:46 PM, Tom Rosmond wrote:
>
> > I want to run a number of MPI executables simultaneously in a PBS job.
> > For example
I want to run a number of MPI executables simultaneously in a PBS job.
For example on my system I do 'cat $PBS_NODEFILE' and get a list like
this:
n04
n04
n04
n04
n06
n06
n06
n06
n07
n07
n07
n07
n09
n09
n09
n09
i.e., 16 processors on 4 nodes, from which I can parse into file(s) as
desired. If I w
AMJAD
On your first question, the answer is probably yes, if everything else is
done correctly. The first test is to not try to do the overlapping
communication and computation, but do them sequentially and make sure
the answers are correct. Have you done this test? Debugging your
original approach
I am curious about the algorithm(s) used in the OpenMPI implementations
of the all2all and all2allv. As many of you know, there are alternate
algorithms for all2all-type operations, such as that of Plimpton et al.
(2006), that basically exchange latency costs for bandwidth costs, which
pays big di
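For reference, the classic pairwise-exchange formulation that trades extra latency for lower bandwidth/buffering pressure looks roughly like the sketch below; it assumes a power-of-two communicator size and equal block sizes, and it is not Open MPI's internal implementation:

! exchange nblk reals with every other rank, one partner per step (XOR schedule)
subroutine pairwise_alltoall(sendbuf, recvbuf, nblk, comm)
  use mpi
  implicit none
  integer, intent(in) :: nblk, comm
  real, intent(in)  :: sendbuf(nblk, *)    ! column p+1 goes to rank p
  real, intent(out) :: recvbuf(nblk, *)    ! column p+1 comes from rank p
  integer :: ierr, rank, nprocs, step, partner
  integer :: status(MPI_STATUS_SIZE)

  call MPI_Comm_rank(comm, rank, ierr)
  call MPI_Comm_size(comm, nprocs, ierr)

  recvbuf(:, rank+1) = sendbuf(:, rank+1)  ! local copy
  do step = 1, nprocs - 1
     partner = ieor(rank, step)            ! valid when nprocs is a power of two
     call MPI_Sendrecv(sendbuf(1, partner+1), nblk, MPI_REAL, partner, 0, &
                       recvbuf(1, partner+1), nblk, MPI_REAL, partner, 0, &
                       comm, status, ierr)
  end do
end subroutine pairwise_alltoall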
Have you looked at the self-scheduling algorithm described in "USING
MPI" by Gropp, Lusk, and Skjellum. I have seen efficient
implementations of it for large satellite data assimilation problems in
numerical weather prediction, where load distribution across processors
cannot be predicted in advan
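A bare-bones version of that self-scheduling (master/worker) pattern, in the spirit of the Gropp/Lusk/Skjellum example rather than any particular assimilation code (it assumes at least as many tasks as workers):

! rank 0 hands out task indices one at a time; tag 0 means "no more work"
program self_schedule
  use mpi
  implicit none
  integer, parameter :: ntasks = 100
  integer :: ierr, rank, nprocs, p, task, next, status(MPI_STATUS_SIZE)
  real :: res

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  if (rank == 0) then
     next = 1
     do p = 1, nprocs - 1                  ! seed every worker with one task
        call MPI_Send(next, 1, MPI_INTEGER, p, 1, MPI_COMM_WORLD, ierr)
        next = next + 1
     end do
     do task = 1, ntasks                   ! collect results; reassign or retire
        call MPI_Recv(res, 1, MPI_REAL, MPI_ANY_SOURCE, MPI_ANY_TAG, &
                      MPI_COMM_WORLD, status, ierr)
        if (next <= ntasks) then
           call MPI_Send(next, 1, MPI_INTEGER, status(MPI_SOURCE), 1, MPI_COMM_WORLD, ierr)
           next = next + 1
        else
           ! content ignored; tag 0 tells the worker to retire
           call MPI_Send(next, 1, MPI_INTEGER, status(MPI_SOURCE), 0, MPI_COMM_WORLD, ierr)
        end if
     end do
  else
     do
        call MPI_Recv(task, 1, MPI_INTEGER, 0, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
        if (status(MPI_TAG) == 0) exit     ! retire signal
        res = real(task)**2                ! stand-in for real work
        call MPI_Send(res, 1, MPI_REAL, 0, 1, MPI_COMM_WORLD, ierr)
     end do
  end if
  call MPI_Finalize(ierr)
end program self_schedule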
      d,iwin,istat)
!
      if(ir.eq.1)then
        do 250 i = 1,imjm
          ximjm(i) = i
 250    continue
      endif
!
      itarget_disp = loff(ir)
      call mpi_win_fence(0,iwin,istat)
      call mpi_get(x,len(ir),mpi_real,0,itarget_disp,len(ir),mpi_real,
     &             iwin,istat)
      call mpi_win_fence(0,iwin,istat)
!
      print('(A,i3,8f20.2)'),' x
Attached is some error output from my tests of 1-sided message passing,
plus my info file. Below are two copies of a simple fortran subroutine
that mimics mpi_allgatherv using mpi-get calls. The top version fails,
the bottom runs OK. It seems clear from these examples, plus the
'self_send
I am continuing to test the MPI-2 features of 1.1, and have run into
some puzzling behavior. I wrote a simple F90 program to test 'mpi_put'
and 'mpi_get' on a coordinate transformation problem, on a workstation with two
dual-core Opteron processors running the PGI 6.1 compiler. The program
runs correc
I am testing the one-sided message passing (mpi_put, mpi_get) that is
now supported in the 1.1 release. It seems to work OK for some simple
test codes, but when I run my big application, it fails. This
application is a large weather model that runs operationally on the SGI
Origin 3000, using
remove certain libraries from 1.0.1.
You absolutely have to:
cd openmpi1.0.1
sudo make uninstall
cd ../openmpi1.0.2
sudo make install
I have had no trouble in the past with PGF90 version 6.1-3 and
OpenMPI 1.1a on a dual Opteron 1.4 GHz machine running Debian Linux.
Michael
On May 24, 2006,
After using OPENMPI Ver 1.0.1 for several months without trouble, last week
I decided to upgrade to Ver 1.0.2. My primary motivation was curiosity,
to see if there was any performance benefit. To my surprise, several of my F90
applications refused to run with the newer version. I also tried V
d. I now must have my sysadmin guy transport the installation to the compute
nodes, but I hope that will be routine.
Thanks for the help
Brian Barrett wrote:
On Mar 10, 2006, at 8:35 AM, Brian Barrett wrote:
On Mar 9, 2006, at 11:37 PM, Tom Rosmond wrote:
Attached are output files
wrote:
On Mar 9, 2006, at 2:51 PM, Tom Rosmond wrote:
I am trying to install OPENMPI on a Linux cluster with 22 dual
Opteron nodes
and a Myrinet interconnect. I am having trouble with the build
with the GM
libraries. I configured with:
./configure --prefix=/users/rosmond/ompi --with-gm
Troy Telford wrote:
The configure seemed to go OK, but the make failed. As you see at the end of
the make output, it doesn't like the format of libgm.so. It looks to me that
it is using a path (/usr/lib/.) to 32 bit libraries, rather than 64 bit
(/usr/lib64/). Is this correct? What's the solution?
Tom Rosmond
(Attachments: config.log.bz2, config_out.bz2, make_out.bz2 -- BZip2 compressed data)
I just received delivery of a new dual-processor Opteron workstation
running Suse linux. I installed the 64 bit intel compiler and openmpi,
and my first test of a trivial mpi code produced the output below. I
have seen this kind of problem before, and know that it has to do with
the SSH environmen
I downloaded TotalView 7.1.0-2 yesterday for the trial period to test it with
my MPI applications. Your FAQs suggested it was compatible with Openmpi,
although it is not listed as a supported MPI on the Etnus website. I
recompiled one of my fortran mpi codes with the debugging option and trie
On Jan 4, 2006, at 4:24 PM, Tom Rosmond wrote:
I have been using LAM-MPI for many years on PC/Linux systems and
have been quite pleased with its performance. However, at the
urging of the
LAM-MPI website, I have decided to switch to OPENMPI. For much of my
preliminary testing I work o
to match the LAM performance.
regards
Tom Rosmond
Open MPI: 1.0.1r8453
Open MPI SVN revision: r8453
Open RTE: 1.0.1r8453
Open RTE SVN revision: r8453
OPAL: 1.0.1r8453
OPAL SVN revision: r8453
Prefix: /usr/local/open