Re: [OMPI users] OpenMPI-ROMIO-OrangeFS

2014-03-25 Thread Rob Latham
afternoon again, it might be Friday until I can dig into that. Was there any progress with this? Otherwise, what version of PVFS2 is known to work with OMPI 1.6? Thanks. Edgar, should I pick this up for MPICH, or was this fix specific to OpenMPI? ==rob -- Rob Latham Mathematics and Computer

Re: [OMPI users] OpenMPI-ROMIO-OrangeFS

2014-03-28 Thread Rob Latham
that only matters if ROMIO uses extended generalized requests. I trust ticket #1159 is still accurate? https://svn.open-mpi.org/trac/ompi/ticket/1159 ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4

2014-04-14 Thread Rob Latham
off data sieving writes, which is what I would have first guessed would trigger this lock message. So I guess you are hitting one of the other cases. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] ROMIO bug reading darrays

2014-05-07 Thread Rob Latham

Re: [OMPI users] ROMIO bug reading darrays

2014-05-07 Thread Rob Latham
"31 bit transfers" fixes went into the MPICH-3.1 release. Slurping those changes, which are individually small (using some _x versions of type-inquiry routines here, some MPI_Count promotions there) but pervasive, might give OpenMPI a bit of a headache. ==rob Thanks, Richard On 7 May
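
The "_x" routines mentioned above are the MPI-3 type-inquiry calls that return an MPI_Count instead of an int. A minimal C sketch (the 8 GiB datatype is only an illustration):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* a type describing 2^30 doubles (8 GiB) -- too big for an int byte count */
        MPI_Datatype big;
        MPI_Type_contiguous(1 << 30, MPI_DOUBLE, &big);
        MPI_Type_commit(&big);

        int size32;
        MPI_Count size64;
        MPI_Type_size(big, &size32);    /* byte count does not fit in an int */
        MPI_Type_size_x(big, &size64);  /* MPI-3 "_x" variant uses MPI_Count */
        printf("MPI_Type_size: %d, MPI_Type_size_x: %lld\n",
               size32, (long long)size64);

        MPI_Type_free(&big);
        MPI_Finalize();
        return 0;
    }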

Re: [OMPI users] ROMIO bug reading darrays

2014-05-08 Thread Rob Latham
On 05/07/2014 11:36 AM, Rob Latham wrote: On 05/05/2014 09:20 PM, Richard Shaw wrote: Hello, I think I've come across a bug when using ROMIO to read in a 2D distributed array. I've attached a test case to this email. Thanks for the bug report and the test case. I've o

Re: [OMPI users] ROMIO bug reading darrays

2014-05-08 Thread Rob Latham
just get segfaults. Thanks, Richard -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] ROMIO bug reading darrays

2014-05-08 Thread Rob Latham
On 05/07/2014 11:36 AM, Rob Latham wrote: On 05/05/2014 09:20 PM, Richard Shaw wrote: Hello, I think I've come across a bug when using ROMIO to read in a 2D distributed array. I've attached a test case to this email. Thanks for the bug report and the test case. I've o

Re: [OMPI users] bug in MPI_File_set_view?

2014-05-19 Thread Rob Latham
ess rank 2 with PID 13969 on node oriol-VirtualBox exited on signal 6 (Aborted).

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Rob Latham

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Rob Latham
on Microsoft's intentions regarding MPI and C99/C11 (just dreaming now). hey, (almost all of) c99 support is in place in visual studio 2013 http://blogs.msdn.com/b/vcblog/archive/2013/07/19/c99-library-support-in-visual-studio-2013.aspx ==rob On 2014-07-17 11:42 AM, Jed Brown wrote: Rob Lath

Re: [OMPI users] MPIIO and derived data types

2014-07-21 Thread Rob Latham
c)) which (if I am reading fortran correctly) is a contiguous chunk of memory. If instead you had a more elaborate data structure, like a mesh of some kind, then passing an indexed type to the read call might make more sense. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
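
A rough C sketch of the indexed-type suggestion (the block lengths and displacements are made up; the original discussion was in Fortran, where the same calls apply):

    #include <mpi.h>

    /* Read file data directly into a noncontiguous in-memory structure by
     * handing an indexed datatype to the read call, instead of reading into
     * a scratch buffer and copying afterwards. */
    int read_scattered(MPI_File fh, double *mesh)
    {
        int blocklens[3] = {10, 20, 30};     /* illustrative block sizes */
        int displs[3]    = {0, 100, 400};    /* displacements in MPI_DOUBLE units */

        MPI_Datatype memtype;
        MPI_Type_indexed(3, blocklens, displs, MPI_DOUBLE, &memtype);
        MPI_Type_commit(&memtype);

        MPI_Status status;
        int err = MPI_File_read_all(fh, mesh, 1, memtype, &status);

        MPI_Type_free(&memtype);
        return err;
    }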

Re: [OMPI users] Using PLFS with Open MPI 1.8

2014-07-28 Thread Rob Latham
OpenMPI to pick up all the bits... As with Lustre, I don't have access to a PLFS system and would welcome community contributions to integrate and test PLFS into ROMIO. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] MPI-I/O issues

2014-08-06 Thread Rob Latham
mio resync. You are on your own with ompio! ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] MPI-I/O issues

2014-08-06 Thread Rob Latham
mpich 3.1.2, I don't see those issues. Thanks, Mohamad

Re: [OMPI users] MPI-I/O issues

2014-08-11 Thread Rob Latham

Re: [OMPI users] MPI-I/O issues

2014-08-11 Thread Rob Latham
orge. On Mon, Aug 11, 2014 at 9:44 AM, Rob Latham <r...@mcs.anl.gov> wrote: On 08/10/2014 07:32 PM, Mohamad Chaarawi wrote: Update: George suggested that I try with the 1.8.2 rc3 and that one resolves the hindexed_block segfault that I was seei

Re: [OMPI users] Best way to communicate a 2d array with Java binding

2014-08-22 Thread Rob Latham

Re: [OMPI users] Best way to communicate a 2d array with Java binding

2014-08-22 Thread Rob Latham
construct an HINDEXED type (or with very new MPICH, HINDEXED_BLOCK) and send that instead of copying. ==rob On Fri, Aug 22, 2014 at 3:38 PM, Rob Latham <r...@mcs.anl.gov> wrote: On 08/22/2014 10:10 AM, Saliya Ekanayake wrote: Hi, I've a quick questi
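
The idea, sketched in C (the thread itself concerns the Java bindings; the ragged-row layout below is an assumption):

    #include <mpi.h>

    /* Describe the rows of a ragged 2D array with an HINDEXED type and send
     * it in one call, rather than packing the rows into a scratch buffer. */
    int send_rows(double **rows, const int *rowlen, int nrows,
                  int dest, MPI_Comm comm)
    {
        int blocklens[nrows];
        MPI_Aint displs[nrows];

        for (int i = 0; i < nrows; i++) {
            blocklens[i] = rowlen[i];
            MPI_Get_address(rows[i], &displs[i]);  /* absolute addresses */
        }

        MPI_Datatype rowstype;
        MPI_Type_create_hindexed(nrows, blocklens, displs, MPI_DOUBLE, &rowstype);
        MPI_Type_commit(&rowstype);

        /* MPI_BOTTOM because the displacements are absolute addresses */
        int err = MPI_Send(MPI_BOTTOM, 1, rowstype, dest, 0, comm);

        MPI_Type_free(&rowstype);
        return err;
    }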

Re: [OMPI users] Runtime replacement of mpi libraries?

2014-09-11 Thread Rob Latham
PI? Thx. John Cary -- Rob Latham Mathematics and Computer Science Divisio

Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4

2014-09-18 Thread Rob Latham
h this approach doesn't need RMA shared memory. ==rob Thanks, Beichuan -Original Message- From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Rob Latham Sent: Monday, April 14, 2014 14:24 To: Open MPI Users Subject: Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4

Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4

2014-09-18 Thread Rob Latham
t's the case. I'll dust off our old RMA-based approach for shared file pointers. It's not perfect, but for folks having difficulty with the file-backed shared file pointer operations it might be useful. ==rob Thanks, Beichuan -Original Message----- From: users [mailto:users

Re: [OMPI users] mpi_file_read and arrays of custom datatypes

2014-12-01 Thread Rob Latham
scrutable -- and I like C-style 6 character variables a lot! ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] OpenMPI 1.8.4rc3, 1.6.5 and 1.6.3: segmentation violation in mca_io_romio_dist_MPI_File_close

2015-01-12 Thread Rob Latham

Re: [OMPI users] OpenMPI 1.8.4rc3, 1.6.5 and 1.6.3: segmentation violation in mca_io_romio_dist_MPI_File_close

2015-01-14 Thread Rob Latham
//git.mpich.org/mpich.git/commit/a30a4721a2 ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] MPIIO and OrangeFS

2015-02-24 Thread Rob Latham
193 ==rob Thanks for replies Hanousek Vít -- Rob La

Re: [OMPI users] MPIIO and OrangeFS

2015-02-25 Thread Rob Latham
it, but nothing changed (ROMIO works with special filename format, OMPIO doesn't work) Thanks for your help. If you point me to some useful documentation, I will be happy. Hanousek Vít -- Original message -- From: Rob Latham To: us...@open-mpi.org, vithanou...@seznam.cz Date: 24.

Re: [OMPI users] LAM/MPI -> OpenMPI

2015-02-27 Thread Rob Latham
PI-IO implementation, was this one: r10377 | brbarret | 2007-07-02 21:53:06 so you're missing out on 8 years of I/O related bug fixes and optimizations. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] LAM/MPI -> OpenMPI

2015-02-27 Thread Rob Latham
see Rob found a ROMIO commit in 2007. since you asked: r10400 | brbarret | 2008-06-08 23:18:04 -0500 (Sun, 08 Jun 2008) | 4 lines - Fix issue with make -j and the dependency between libmpi.la and liblamf77mpi.la. Thanks to Justin Bronder for bringing this to our attention. ==rob -- Rob Lat

Re: [OMPI users] Regression in MPI_File_close?!

2016-06-07 Thread Rob Latham
On 06/02/2016 06:41 AM, Edgar Gabriel wrote: Gilles, I think the semantics of MPI_File_close does not necessarily mandate that there has to be an MPI_Barrier based on that text snippet. However, I think what the Barrier does in this scenario is 'hide' a consequence of an implementation aspect.

Re: [OMPI users] MPI_File_read+MPI_BOTTOM crash on NFS ?

2016-06-22 Thread Rob Latham
On 06/22/2016 05:47 AM, Gilles Gouaillardet wrote: Thanks for the info, I updated https://github.com/open-mpi/ompi/issues/1809 accordingly. fwiw, the bug occurs when addresses do not fit in 32 bits. for some reason, I always run into it on OSX but not on Linux, unless I use dmalloc. I replace

Re: [OMPI users] a question about [MPI]IO on systems without network filesystem

2010-10-19 Thread Rob Latham
you gave in the cb_config_list. Try it and if it does/doesn't work, I'd like to hear. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
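
A sketch of steering ROMIO's collective-buffering aggregators with hints, as discussed in this thread; the hostname and the choice to force collective buffering are assumptions, not taken from the original message:

    #include <mpi.h>

    MPI_File open_with_aggregators(MPI_Comm comm, const char *path)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        /* "cb_config_list" takes comma-separated host:count entries; "*" matches any host */
        MPI_Info_set(info, "cb_config_list", "node-with-disk:1");  /* hypothetical hostname */
        MPI_Info_set(info, "romio_cb_write", "enable");            /* always use the aggregators */

        MPI_File fh;
        MPI_File_open(comm, path, MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        MPI_Info_free(&info);
        return fh;
    }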

Re: [OMPI users] out of memory in io_romio_ad_nfs_read.c

2010-11-22 Thread Rob Latham
ted, is not going to perform very well, and will likely, despite the library's best efforts, give you incorrect results. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] How to avoid abort when calling MPI_Finalize without calling MPI_File_close?

2010-12-01 Thread Rob Latham
that closing files comes a little earlier in the shutdown process. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] MPI-IO problem

2010-12-17 Thread Rob Latham
ot free the subarray type) - in writea you don't really need to seek and then write. You could call MPI_FILE_WRITE_AT_ALL. - You use collective I/O in writea (good for you!) but use independent I/O in writeb. Especially for a 2d subarray, you'll likely see better performance with MPI_FILE_WRITE_ALL. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
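
A condensed sketch of the advice above (free the subarray type, and use one collective MPI_File_write_at_all instead of seek-then-write); the 2D decomposition parameters are placeholders:

    #include <mpi.h>

    void write_2d_block(MPI_File fh, const double *local,
                        const int gsizes[2], const int lsizes[2], const int starts[2])
    {
        MPI_Datatype filetype;
        MPI_Type_create_subarray(2, gsizes, lsizes, starts,
                                 MPI_ORDER_C, MPI_DOUBLE, &filetype);
        MPI_Type_commit(&filetype);

        MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);

        MPI_Status status;
        /* one collective call replaces MPI_File_seek + independent MPI_File_write */
        MPI_File_write_at_all(fh, 0, local, lsizes[0] * lsizes[1], MPI_DOUBLE, &status);

        MPI_Type_free(&filetype);   /* don't leak the derived type */
    }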

Re: [OMPI users] questions about MPI-IO

2011-01-06 Thread Rob Latham
. MPI_SUCCESS) then call MPI_ERROR_STRING(iret, string, outlen, ierr) print *, string(1:outlen) endif end subroutine check_err external32 is a good idea but nowadays portable files are better served with something like HDF5, NetCDF-4 or Parallel-NetCDF, all of which

Re: [OMPI users] Deadlock with mpi_init_thread + mpi_file_set_view

2011-04-01 Thread Rob Latham
ry inside OpenMPI-1.4.3 is pretty old. I wonder if the locking we added over the years will help? Can you try openmpi-1.5.3 and report what happens? ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] MPI-2 I/O functions (Open MPI 1.5.x on Windows)

2011-04-04 Thread Rob Latham
; in my program. > It correctly worked on Open MPI on Linux. > I would very much appreciate any information you could send me. > I can't find it in Open MPI User's Mailing List Archives. you probably need to configure OpenMPI so that ROMIO (the MPI-IO library) is built with &

Re: [OMPI users] Deadlock with mpi_init_thread + mpi_file_set_view

2011-04-04 Thread Rob Latham
nsure that ROMIO's internal structures get initialized exactly once, and the delete hooks help us be good citizens and clean up on exit. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] Trouble with MPI-IO

2011-05-24 Thread Rob Latham
nically non-decreasing order but it can be jammed into memory any which way you want. ROMIO should be better about reporting file views that violate this part of the standard. We report it in a few places but clearly not enough. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] reading from a file

2011-05-24 Thread Rob Latham
where decomposing the dataset over N processors will be more straightforward. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] File seeking with shared filepointer issues

2011-07-01 Thread Rob Latham
s > > report the correct filesize. Is this working as intended? Since > > MPI_File_seek_shared is a collective, blocking function each process have > > to synchronise at the return point of the function, but not when the > > function is called. It seems that the use of MPI_File_seek_shared without > > an MPI_Barrier call first is very dangerous, or am I missing something? -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] File seeking with shared filepointer issues

2011-07-01 Thread Rob Latham
> report the correct filesize. Is this working as intended? Since > > MPI_File_seek_shared is a collective, blocking function each process have > > to synchronise at the return point of the function, but not when the > > function is called. It seems that the use of MPI_File_seek_shared without > > an MPI_Barrier call first is very dangerous, or am I missing something? -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] File seeking with shared filepointer issues

2011-07-05 Thread Rob Latham
value of the shared file pointer, - Rank 0 did so before any other process read the value of the shared file pointer (the green bar) Anyway, this is all known behavior. collecting the traces seemed like a fun way to spend the last hour on friday before the long (USA) weekend :> ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] parallel I/O on 64-bit indexed arays

2011-08-05 Thread Rob Latham
>>>Quincey Koziol from the HDF group is going to propose a follow on to this > >>>ticket, specifically about the case you're referring to -- large counts > >>>for file functions and datatype constructors. Quincey -- can you expand > >>>on what you'll be proposing, perchance? > >>Interesting, I think something along the lines of the note would be very > >>useful and needed for large applications. > >> > >>Thanks a lot for the pointers and your suggestions, > >> > >>cheers, > >> > >>Troels -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-22 Thread Rob Latham
s. Do you use MPI datatypes to describe either a file view or the application data? These noncontiguous in memory and/or noncontiguous in file access patterns will also trigger fcntl lock calls. You can use an MPI-IO hint to disable data sieving, at a potentially disastrous performance cost. ==r
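
The hint Rob refers to, in a minimal sketch (whether to also disable sieving for reads is an assumption):

    #include <mpi.h>

    /* Disabling data sieving avoids the read-modify-write path and therefore
     * the fcntl() locks, usually at a substantial performance cost for
     * noncontiguous accesses. */
    MPI_File open_without_data_sieving(MPI_Comm comm, const char *path)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_ds_write", "disable");
        MPI_Info_set(info, "romio_ds_read",  "disable");

        MPI_File fh;
        MPI_File_open(comm, path, MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);
        MPI_Info_free(&info);
        return fh;
    }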

Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-29 Thread Rob Latham
and friends are broken for XFS or EXT3, those kinds of bugs get a lot of attention :> At this point the usual course of action is "post a small reproducing test case". Your first message said this was a big code, so perhaps that will not be so easy... ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] IO issue with OpenMPI 1.4.1 and earlier versions

2011-09-13 Thread Rob Latham
v1.4.3) > MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.3) > MCA routed: direct (MCA v2.0, API v2.0, Component v1.4.3) > MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.3) > MCA plm: rsh (MCA v2.0, API v2.0, Component v1.4.3) > MCA plm: slurm (MCA v2.0, API v2.0, Component v1.4.3) > MCA plm: tm (MCA v2.0, API v2.0, Component v1.4.3) > MCA filem: rsh (MCA v2.0, API v2.0, Component v1.4.3) > MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.3) > MCA ess: env (MCA v2.0, API v2.0, Component v1.4.3) > MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.3) > MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.3) > MCA ess: slurm (MCA v2.0, API v2.0, Component v1.4.3) > MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.3) > MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.3) > MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.3) -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] maximum size for read buffer in MPI_File_read/write

2011-09-27 Thread Rob Latham
g a bit. in general, if you plot "i/o performance vs blocksize", every file system tops out around several tens of megabytes. So, we have given the advice to just split up this nearly 2 gb request into several 1 gb requests. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
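
A sketch of the chunking advice; the 1 GiB chunk size is only an example:

    #include <mpi.h>

    int read_in_chunks(MPI_File fh, MPI_Offset offset, char *buf, MPI_Offset total)
    {
        const MPI_Offset chunk = 1 << 30;   /* 1 GiB per call, safely under the int limit */
        MPI_Status status;

        for (MPI_Offset done = 0; done < total; done += chunk) {
            int count = (int)((total - done < chunk) ? (total - done) : chunk);
            int err = MPI_File_read_at(fh, offset + done, buf + done,
                                       count, MPI_BYTE, &status);
            if (err != MPI_SUCCESS)
                return err;
        }
        return MPI_SUCCESS;
    }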

Re: [OMPI users] MPI_File_Write

2011-11-29 Thread Rob Latham
here, or each processor will end up writing the same data to the same location in the file. If you duplicate the work identically to N processors then yeah, you will take N times longer. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] IO performance

2012-02-06 Thread Rob Latham
some nice ROMIO optimizations that will help you out with writes to GPFS if you set the "striping_unit" hint to the GPFS block size. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
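
A sketch of the hint being described; the 8 MiB value is a placeholder for the actual GPFS block size:

    #include <mpi.h>

    MPI_File open_gpfs_aligned(MPI_Comm comm, const char *path)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_unit", "8388608");   /* hint values are strings */

        MPI_File fh;
        MPI_File_open(comm, path, MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        MPI_Info_free(&info);
        return fh;
    }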

Re: [OMPI users] ROMIO Podcast

2012-02-21 Thread Rob Latham
a long time, then switched to SVN in I think 2007? I am way late to the git party, but git-svn is looking mighty attractive as a first step towards transitioning to full git. One more awful svn merge might be enough to push us over the edge. ==rob -- Rob Latham Mathematics and Computer Scienc

Re: [OMPI users] ROMIO Podcast

2012-02-21 Thread Rob Latham
g but make the testing "surface area" a lot larger. We are probably going to have a chance to improve things greatly with some recently funded proposals. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] ROMIO Podcast

2012-02-22 Thread Rob Latham
On Tue, Feb 21, 2012 at 05:30:20PM -0500, Rayson Ho wrote: > On Tue, Feb 21, 2012 at 12:06 PM, Rob Latham wrote: > > ROMIO's testing and performance regression framework is honestly a > > shambles.  Part of that is a challenge with the MPI-IO interface > > itself.  For

Re: [OMPI users] Can't read more than 2^31 bytes with MPI_File_read, regardless of type?

2012-08-07 Thread Rob Latham
On Thu, Jul 12, 2012 at 10:53:52AM -0400, Jonathan Dursi wrote: > Hi: > > One of our users is reporting trouble reading large files with > MPI_File_read (or read_all). With a few different type sizes, to > keep count lower than 2^31, the problem persists. A simple C > program to test this is at

Re: [OMPI users] Invalid filename?

2013-01-21 Thread Rob Latham

Re: [OMPI users] Romio and OpenMPI builds

2013-01-21 Thread Rob Latham
intel --with-mpi=open_mpi > >> --disable-aio>, data source: default value) > >> Complete set of command line parameters passed to > >> ROMIO's configure script > >> > >>Eric -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] opening a file with MPI-IO

2013-07-19 Thread Rob Latham
RN, which is used for that purpose in C. It's important to note that MPI-IO routines *do* use ERROR_RETURN as the error handler, so you will have to take the additional step of setting that. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
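
A sketch of that additional step: since file handles default to MPI_ERRORS_RETURN, either check return codes or install the handler you want on MPI_FILE_NULL before opening anything:

    #include <mpi.h>

    void make_file_errors_fatal(void)
    {
        /* the default file error handler applies to MPI_File_open itself
         * and to files opened afterwards */
        MPI_File_set_errhandler(MPI_FILE_NULL, MPI_ERRORS_ARE_FATAL);
    }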

Re: [OMPI users] MPIIO max record size

2013-07-19 Thread Rob Latham
be wrong. > > > > I think but I am not sure that it is because the MPI I/O (ROMIO) > code is the same for all distributions... > > It has been written by Rob Latham. Hello! Rajeev wrote it when he was in grad school, then he passed the torch to Rob Ross when he was a

Re: [OMPI users] MPI_FILE_READ: wrong file-size does not raise an exception

2013-11-15 Thread Rob Latham
On 02/11/2013 09:51 AM, Stefan Mauerberger wrote: Hi Everyone! Playing around with MPI_FILE_READ() puzzles me a little. To catch all errors I set the error-handler - the one which is related to file I/O - to MPI_ERRORS_ARE_FATAL. However, when reading from a file which has not the necessary size

Re: [OMPI users] Parallel I/O Usage

2009-07-08 Thread Rob Latham
tually using parallel I/O the right way. I think you're OK here. What are you seeing? Is this NFS? ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] MPI_File_open return error code 16

2009-10-22 Thread Rob Latham
;, ret); > } else { > MPI_File_close(&fh); > } > MPI_Finalize(); > return 0; > } The error code isn't very interesting, but if you can turn that error code into a human readable string with the MPI_Error_string() routine, then maybe you'll have a hint as to what is causing the problem. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
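
A minimal sketch of the MPI_Error_string() suggestion:

    #include <mpi.h>
    #include <stdio.h>

    void report_open_error(int ret)
    {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(ret, msg, &len);
        fprintf(stderr, "MPI_File_open failed: %s\n", msg);
    }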

Re: [OMPI users] nonblocking MPI_File_iwrite() does block?

2009-11-20 Thread Rob Latham
tions to the table and reduce your overall I/O costs, perhaps even reducing them enough that you no longer miss true asynchronous I/O ? ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] nonblocking MPI_File_iwrite() does block?

2009-11-23 Thread Rob Latham
supports it). If you really need to experiment with async I/O, I'd love to hear your experiences. ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA

Re: [OMPI users] nonblocking MPI_File_iwrite() does block?

2010-01-06 Thread Rob Latham
On Mon, Nov 23, 2009 at 01:32:24PM -0700, Barrett, Brian W wrote: > On 11/23/09 8:42 AM, "Rob Latham" wrote: > > > Is it OK to mention MPICH2 on this list? I did prototype some MPI > > extensions that allowed ROMIO to do true async I/O (at least as far > >

Re: [OMPI users] Problems Using PVFS2 with OpenMPI

2010-01-13 Thread Rob Latham
h OpenMPI as well as an example program > because this is the first time I've attempted this so I may well be > doing something wrong. It sounds like you're on the right track. I should update the PVFS quickstart for the OpenMPI specifics. In addition to pvfs2-ping and pvfs2-ls

Re: [OMPI users] Best way to reduce 3D array

2010-04-05 Thread Rob Latham
though. Nothing prevents rank 30 from hitting that loop before rank 2 does. To ensure order, you could MPI_SEND a token around a ring of MPI processes. Yuck. One approach might be to use MPI_SCAN to collect offsets (the amount of data each process will write) and then do an MPI_FILE_WRITE_AT_ALL
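
A sketch of the MPI_Scan approach described above (each rank writes a variable number of doubles; the element type and byte-offset file view are assumptions):

    #include <mpi.h>

    void ordered_write(MPI_File fh, const double *buf, int mycount, MPI_Comm comm)
    {
        long long mine = mycount, inclusive = 0;
        MPI_Scan(&mine, &inclusive, 1, MPI_LONG_LONG, MPI_SUM, comm);

        /* exclusive prefix sum: where this rank's data starts, in bytes
         * (assumes the default byte-oriented file view) */
        MPI_Offset offset = (MPI_Offset)(inclusive - mine) * (MPI_Offset)sizeof(double);

        MPI_Status status;
        MPI_File_write_at_all(fh, offset, buf, mycount, MPI_DOUBLE, &status);
    }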

[OMPI users] cleaning up old ROMIO (MPI-IO) drivers

2016-01-05 Thread Rob Latham
river and it's not on the list? First off, let me know and I will probably want to visit your site and take a picture of your system. Then, let me know how much longer you foresee using the driver and we'll create a "deprecated" list for N more years. Thanks ==rob -

Re: [OMPI users] [mpich-discuss] cleaning up old ROMIO (MPI-IO) drivers

2016-01-05 Thread Rob Latham
5/2016 12:31 PM, Rob Latham wrote: I'm itching to discard some of the little-used file system drivers in ROMIO, an MPI-IO implementation used by, well, everyone. I've got more details in this ROMIO blog post: http://press3.mcs.anl.gov/romio/2016/01/05/cleaning-out-old-romio-file-system-d

Re: [OMPI users] cleaning up old ROMIO (MPI-IO) drivers

2016-01-25 Thread Rob Latham
On 01/21/2016 05:59 AM, Dave Love wrote: [Catching up...] Rob Latham writes: Do you use any of the other ROMIO file system drivers? If you don't know if you do, or don't know what a ROMIO file system driver is, then it's unlikely you are using one. What if you use a driv

Re: [OMPI users] MX replacement?

2016-02-11 Thread Rob Latham
On 02/04/2016 11:35 AM, Dave Love wrote: Jeff Hammond writes: On Tuesday, February 2, 2016, Brice Goglin wrote: I announced the end of the Open-MX maintenance to my users in December because OMPI was dropping MX support. Nobody complained. So I don't plan to bring back Open-MX to life nei

Re: [OMPI users] error openmpi check hdf5

2016-02-11 Thread Rob Latham
On 02/10/2016 12:07 PM, Edgar Gabriel wrote: yes and no :-) That particular function was fixed, but there are a few others, especially in the sharedfp framework, that would cause similar problems if compiled without RTLD_GLOBAL. But more importantly, I can confirm that ompio in the 1.8 and 1.10

Re: [OMPI users] cleaning up old ROMIO (MPI-IO) drivers

2016-02-12 Thread Rob Latham
On 01/21/2016 05:59 AM, Dave Love wrote: :-), but what about plfs, which seems a notable omission? The patches in the plfs distribution don't apply to recent adio, at least the version in ompi 1.8 as far as I remember. I wonder if there's any chance of fixing that and including it, assuming

Re: [OMPI users] PVFS/OrangeFS (was: cleaning up old ROMIO (MPI-IO) drivers)

2016-02-12 Thread Rob Latham
On 01/26/2016 09:32 AM, Dave Love wrote: Rob Latham writes: We didn't need to deploy PLFS at Argonne: GPFS handled writing N-to-1 files just fine (once you line up the block sizes), so I'm beholden to PLFS communities for ROMIO support. I guess GPFS has improved in that res

Re: [OMPI users] Error with MPI_Register_datarep

2016-03-14 Thread Rob Latham
On 03/13/2016 12:21 PM, George Bosilca wrote: Eric, A quick grep in Open MPI source indicates that the only 2 places where MPI_ERR_UNSUPPORTED_DATAREP is issued are deep inside the imported ROMIO code (3.14): ./ompi/mca/io/romio314/romio/adio/include/adioi_errmsg.h:70:MPI_ERR_UNSUPPORTED_DATAR

Re: [OMPI users] MPI-IO: reading an unformatted binary fortran file

2009-06-16 Thread Rob Latham
hat you've written. The MPI-IO library just provides a wrapper around C system calls, so if you created this file with fortran, you'll have to read it back with fortran. Since you eventually want to do parallel I/O, I'd suggest creating the file with MPI-IO (Even if it is MPI_FILE_WRITE from rank 0 or a single process) as well as reading it back (perhaps with MPI_FILE_READ_AT_ALL). ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA
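
A sketch of that suggestion, in C for brevity (the same calls exist in Fortran); the even decomposition and buffer names are assumptions:

    #include <mpi.h>

    void write_once_read_parallel(MPI_Comm comm, const char *path,
                                  const double *global,   /* valid on rank 0 only */
                                  double *local, int n_local)
    {
        int rank, nprocs;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &nprocs);
        MPI_File fh;
        MPI_Status status;

        /* create the file with MPI-IO so there are no Fortran record markers;
         * a single writer is fine for a one-time step */
        MPI_File_open(comm, path, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        if (rank == 0)
            MPI_File_write(fh, global, n_local * nprocs, MPI_DOUBLE, &status);
        MPI_File_close(&fh);

        /* read back in parallel: each rank reads its own slice, collectively */
        MPI_File_open(comm, path, MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
        MPI_Offset offset = (MPI_Offset)rank * n_local * (MPI_Offset)sizeof(double);
        MPI_File_read_at_all(fh, offset, local, n_local, MPI_DOUBLE, &status);
        MPI_File_close(&fh);
    }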