Hi George,
George Bosilca wrote:
> It is what I suspected. You can see that the envio array is smaller than
> it should be. It was created as an array of doubles with size t_max, when
> it should have been created as an array of doubles with size t_max *
> nprocs.
Ah, yes, I see (and even understand).
It is what I suspected. You can see that the envio array is smaller than
it should be. It was created as an array of doubles with size t_max, when
it should have been created as an array of doubles with size t_max *
nprocs. If you look at how the recibe array is created, you can notice that
it's allocated with the correct size.
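
For illustration, a minimal sketch of the allocation being described here, assuming a Fortran 90 program in which envio and recibe hold double precision values and t_max is the number of elements sent to each destination (the value of t_max below is made up):

program alltoall_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, t_max
  double precision, allocatable :: envio(:), recibe(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  t_max = 4   ! hypothetical per-destination element count

  ! Both buffers must hold t_max doubles for every process,
  ! i.e. t_max * nprocs elements in total.
  allocate(envio(t_max * nprocs))
  allocate(recibe(t_max * nprocs))
  envio = dble(rank)

  call MPI_ALLTOALL(envio, t_max, MPI_DOUBLE_PRECISION, &
                    recibe, t_max, MPI_DOUBLE_PRECISION, &
                    MPI_COMM_WORLD, ierr)

  deallocate(envio, recibe)
  call MPI_FINALIZE(ierr)
end program alltoall_sketch

The send count passed to MPI_ALLTOALL is the number of elements destined for each rank, so both buffers need t_max * nprocs elements in total; an envio allocated with only t_max doubles makes the send read past the end of the array, which would explain the reported SIGSEGV.
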
Hi,
George Bosilca wrote:
> On the all-to-all collective, the send and receive buffers have to be able
> to contain all the information you try to send. In this particular case,
> as you initialize the envio variable to a double, I suppose it is defined
> as a double. If that's the case, then the error
Hi,
shen T.T. wrote:
> Do you have the other compiler? Could you check the error and report it ?
I don't use other Intel compilers at the moment, but I'm going to give
gfortran a try today.
Kind regards,
--
Frank Gruellich
HPC Technician
Tel.: +49 3722 528 42
Fax: +49 3722 528 15
E-Mail
Hi,
Graham E Fagg wrote:
> I am not sure which alltoall you're using in 1.1, so can you please run
> the ompi_info utility, which is normally built and put into the same
> directory as mpirun?
>
> i.e. host% ompi_info
>
> This provides lots of really useful info on everything before we dig
> deeper
I have the same error message: "forrtl: severe (174): SIGSEGV, segmentation
fault occurred". Whether I run my parallel code on a single node or on multiple
nodes, the error occurs. I then tried three Intel compilers (8.1.037, 9.0.032
and 9.1.033), but the error persists. However, my code works correctly on Windows.
Frank,
In the all-to-all collective, the send and receive buffers have to be able
to contain all the information you try to send. In this particular case,
as you initialize the envio variable to a double, I suppose it is defined
as a double. If that's the case, then the error is that the send operation
reads past the end of the envio buffer.
Hi Frank
I am not sure which alltoall you're using in 1.1, so can you please run
the ompi_info utility, which is normally built and put into the same
directory as mpirun?
i.e. host% ompi_info
This provides lots of really useful info on everything before we dig
deeper into your issue
and the