Re: [OMPI users] Seg fault in MPI_FINALIZE

2015-10-22 Thread Jeff Squyres (jsquyres)
1.10.1 isn't released yet -- it's "very close" (just working on a few final issues), but not quite out the door yet. Stay tuned...

> On Oct 22, 2015, at 2:26 PM, McGrattan, Kevin B. Dr. wrote:
>
> OK, I guess I have to upgrade to 1.10.x. I think we have 1.10.0 and I'll ask for the latest.

Re: [OMPI users] Seg fault in MPI_FINALIZE

2015-10-22 Thread McGrattan, Kevin B. Dr.
OK, I guess I have to upgrade to 1.10.x. I think we have 1.10.0 and I'll ask for the latest. Already using Intel Fortran 16. Hope it helps.

Thanks

Kevin McGrattan
National Institute of Standards and Technology
100 Bureau Drive, Mail Stop 8664
Gaithersburg, Maryland 20899

Re: [OMPI users] Seg fault in MPI_FINALIZE

2015-10-16 Thread Jeff Squyres (jsquyres)
If you are using Intel 16, yes, 1.10.1 would be a good choice. If you're not using Fortran, you can disable the MPI Fortran bindings, and you should be ok, too.

> On Oct 16, 2015, at 3:54 PM, Nick Papior wrote:
>
> @Jeff, Kevin
>
> Shouldn't Kevin wait for 1.10.1 with the intel 16 compiler?
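
For reference, disabling the Fortran bindings is a configure-time choice. The commands below are a minimal sketch, not taken from this thread: the install prefix, compiler names, and -j value are placeholders, and the flag spelling should be confirmed with ./configure --help on your own Open MPI tarball.

    # build Open MPI without the Fortran bindings (prefix/compilers are placeholders)
    ./configure --prefix=/opt/openmpi-1.10.1 \
                --disable-mpi-fortran \
                CC=icc CXX=icpc
    make -j 8 all
    make install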

Re: [OMPI users] Seg fault in MPI_FINALIZE

2015-10-16 Thread Nick Papior
@Jeff, Kevin

Shouldn't Kevin wait for 1.10.1 with the intel 16 compiler? A bugfix for intel 16 has been committed with fb49a2d71ed9115be892e8a22643d9a1c069a8f9. (At least I am anxiously awaiting 1.10.1 because I cannot get my builds to complete successfully.)

2015-10-16 19:33 GMT+00:00 Jeff

Re: [OMPI users] Seg fault in MPI_FINALIZE

2015-10-16 Thread Jeff Squyres (jsquyres)
> On Oct 16, 2015, at 3:25 PM, McGrattan, Kevin B. Dr. wrote:
>
> I cannot nail this down any better because this happens like every other night, with about 1 out of a hundred jobs. Can anyone think of a reason why the job would seg fault in MPI_FINALIZE, but only under conditions where

[OMPI users] Seg fault in MPI_FINALIZE

2015-10-16 Thread McGrattan, Kevin B. Dr.
My group is running a fairly large CFD code compiled with Intel Fortran 16.0.0 and Open MPI 1.8.4. Each night we run hundreds of simple test cases, using a range of MPI processes from 1 to 16. I have noticed that if we submit these jobs on our Linux cluster and assign each job exclusive rights to
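
For context, the pattern at issue is the ordinary Fortran MPI startup/shutdown sequence; the crash is reported inside the final MPI_FINALIZE call. The program below is a minimal hypothetical sketch, not the actual CFD code, and the name finalize_check is invented for illustration.

    PROGRAM finalize_check
       USE MPI                      ! Fortran module provided by Open MPI's Fortran bindings
       IMPLICIT NONE
       INTEGER :: IERR, MY_RANK, N_PROCS

       CALL MPI_INIT(IERR)
       CALL MPI_COMM_RANK(MPI_COMM_WORLD, MY_RANK, IERR)
       CALL MPI_COMM_SIZE(MPI_COMM_WORLD, N_PROCS, IERR)

       ! ... each nightly test case's real work would go here ...

       CALL MPI_FINALIZE(IERR)      ! the intermittent seg fault is reported in this call
    END PROGRAM finalize_check

A job of this shape would presumably be built with the mpif90 wrapper over ifort 16.0.0 and launched under mpirun; those tool names are assumptions about the setup, not details given in the report.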