Update: removing the -fast switch caused this error to go away.
Prentice
On 04/27/2017 06:00 PM, Prentice Bisbal wrote:
> I'm building Open MPI 2.1.0 with PGI 17.3, and now I'm getting
> 'illegal instruction' errors during 'make check':
> ../../config/test-driver: line 107: 65169 Illegal instructio
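For anyone who wants to keep some optimization while dropping -fast, here is a rough sketch of a configure line, assuming the flag had been passed in via CFLAGS/FCFLAGS (compiler names and remaining flags are only an example, not taken from Prentice's actual build):
./configure CC=pgcc CXX=pgc++ FC=pgfortran CFLAGS=-O2 FCFLAGS=-O2 ...
make && make check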
Hi Esthela,
As George mentions, this is indeed libpsm2 printing this error. Opcode=0xCC is
a disconnect retry. There are a few scenarios that could be happening, but to
simplify: it is a message arriving late at an already-disconnected endpoint.
What version of the Intel Omni-Path Software or
OMPI version 2.1.0. Should have clarified that initially, sorry. Running on
Ubuntu 12.04.5.
On Fri, Apr 28, 2017 at 10:29 AM, r...@open-mpi.org wrote:
> What version of OMPI are you using?
>
> On Apr 28, 2017, at 8:26 AM, Austin Herrema wrote:
>
> Hello all,
>
> I am using mpi4py in an optimiza
What version of OMPI are you using?
> On Apr 28, 2017, at 8:26 AM, Austin Herrema wrote:
>
> Hello all,
>
> I am using mpi4py in an optimization code that iteratively spawns an MPI
> analysis code (fortran-based) via "MPI.COMM_SELF.Spawn" (I gather that this
> is not an ideal use for comm spa
Hello all,
I am using mpi4py in an optimization code that iteratively spawns an MPI
analysis code (fortran-based) via "MPI.COMM_SELF.Spawn" (I gather that this
is not an ideal use for comm spawn but I don't have too many other options
at this juncture). I am calling "child_comm.Disconnect()" on th
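For readers less familiar with the underlying MPI calls, here is a minimal, self-contained C sketch of the spawn/disconnect cycle described above (the thread itself uses mpi4py); the child executable name, process count, and iteration count are placeholders:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Iteratively spawn an analysis code, talk to it, then tear the link down. */
    for (int i = 0; i < 3; i++) {                 /* iteration count is a placeholder */
        MPI_Comm child;
        /* "./analysis" and the process count 4 are placeholders for illustration. */
        MPI_Comm_spawn("./analysis", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

        /* ... exchange data with the children over the inter-communicator ... */

        MPI_Barrier(child);             /* drain any traffic still in flight          */
        MPI_Comm_disconnect(&child);    /* collective; sets child to MPI_COMM_NULL    */
    }

    MPI_Finalize();
    return 0;
}

The spawned program is expected to obtain the inter-communicator with MPI_Comm_get_parent() and call MPI_Comm_disconnect() on its side as well, since the disconnect is collective over both groups.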
short update on this: master does not finish with either OMPIO or ROMIO.
It admittedly segfaults earlier with OMPIO than with ROMIO, but with one
little tweak I can make them both fail at the same spot. There is
clearly memory corruption going on for the larger cases; I will try to
further na
Actually, reading through the email in more detail, I doubt
that it is OMPIO.
"I deem an output wrong if it doesn't follow from the parameters or if
the program crashes on execution.
The only difference between OpenMPI and Intel MPI, according to my
tests, is in the different behavio
Thank you for the detailed analysis; I will have a look into that. It
would be really important to know which version of Open MPI triggers
this problem.
Christoph, I doubt that it is
https://github.com/open-mpi/ompi/issues/2399
because the test uses collective I/O, which breaks
Before v1.10, the default is ROMIO, and you can force OMPIO with
mpirun --mca io ompio ...
From v2, the default is OMPIO (unless you are running on Lustre, iirc),
and you can force ROMIO with
mpirun --mca io ^ompio ...
Maybe that can help for the time being.
Cheers,
Gilles
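As a supplement to Gilles' commands above, the selection can also be checked and made outside of mpirun; the lines below are an assumed example, so adjust the grep pattern and shell syntax to your environment:
ompi_info | grep "MCA io"     # shows which io components (romio/ompio) your build provides
export OMPI_MCA_io=^ompio     # same effect as "mpirun --mca io ^ompio ...", set via the environment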
Hello,
Which MPI version are you using?
This looks to me like it triggers https://github.com/open-mpi/ompi/issues/2399
You can check if you are running into this problem by playing around with the
mca_io_ompio_cycle_buffer_size parameter.
Best
Christoph Niethammer
--
Christoph Niethammer
Hig
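For completeness, a hedged example of how such an MCA parameter can be passed on the command line; I believe the "mca_" prefix is dropped when the name is given to --mca (ompi_info --all can confirm the exact spelling), and the value and program name below are only placeholders:
mpirun --mca io_ompio_cycle_buffer_size 268435456 -np 2 ./mpi_io_test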
Dear OpenMPI Mailing List,
I have a problem with MPI I/O running on more than 1 rank using very
large filetypes. To reproduce the problem, please use the attached
program "mpi_io_test.c". After compilation, it should be run on 2 nodes.
The program will do the following for a
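Since the attachment is not reproduced here, the following is only a generic C sketch (not the poster's mpi_io_test.c) of the kind of collective write with a large derived filetype that the report describes; the counts and file name are placeholders:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* A strided filetype that describes a very large region of the file. */
    const int count    = 1 << 20;            /* number of blocks (placeholder)      */
    const int blocklen = 128;                /* doubles per block (placeholder)     */
    MPI_Datatype filetype;
    MPI_Type_vector(count, blocklen, blocklen * nprocs, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    double *buf = calloc((size_t)count * blocklen, sizeof(double));

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "testfile.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank starts at its own block offset; the blocks of the ranks interleave. */
    MPI_Offset disp = (MPI_Offset)rank * blocklen * sizeof(double);
    MPI_File_set_view(fh, disp, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, buf, count * blocklen, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    free(buf);
    MPI_Finalize();
    return 0;
}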