Hi everyone,
Recently I needed to compile the High-Performance Linpack (HPL) code with Open MPI 1.2 (a fairly old version). The compilation finishes, but when I try to run, I get the following errors:
[test:32058] *** Process received signal ***
[test:32058] Signal: Segmentation fault (11)
[test:32058] Signal co
mpirun noticed that job rank 0 with PID 46005 on node test-ib exited on
signal 15 (Terminated).
Hope you can give me some suggestions. Thank you.
Kaiming Ouyang, Research Assistant.
Department of Computer Science and Engineering
University of California, Riverside
900 University Avenue, Riverside, CA 92521
On Mon, Mar 19, 2018 at 8:39 PM, Jeff Squyres (jsquyres) wrote:
> I'm sorry; I can't help debug a version
Thank you.
On Tue, Mar 20, 2018 at 4:35 AM, Jeff Squyres (jsquyres) wrote:
> On Mar 19, 2018, at 11:32 PM, Kaiming Ouyang wrote:
> >
It seems the new Open MPI has changed its framework, so this old software does not fit it anymore.
On Tue, Mar 20, 2018 at 10:46 AM, John Hearns via users <users@lists.open-mpi.org> wrote:
Hi Jeff,
Thank you for your advice. I will contact the author for some suggestions.
I also notice that I may be able to port this old library to the new Open MPI
3.0. I will work on this soon. Thank you.
Hi all,
I am trying to test the bandwidth of intra-node MPI send and recv. The code is
attached here. When I give the input 2048 (i.e., each process will send
and receive 2 GB of data), the program reports:
Read 2147479552, expected 2147483648, errno = 95
Read 2147479552, expected 2147483648, errno = 98
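The short count in the report is suggestive: Linux caps a single read()/write()/process_vm_readv() call at MAX_RW_COUNT = INT_MAX & PAGE_MASK bytes, which with 4 KiB pages is 2**31 - 4096 = 2147479552, exactly the value reported. A back-of-the-envelope check (in Python for brevity; the chunked loop is a hypothetical illustration, not the attached benchmark code):

```python
# Linux truncates one read()/write()/process_vm_readv() to
# MAX_RW_COUNT = INT_MAX & PAGE_MASK. With 4 KiB pages that is
# 2**31 - 4096 = 2147479552 -- exactly the short count reported above.
PAGE_SIZE = 4096
MAX_RW = (2**31 - 1) & ~(PAGE_SIZE - 1)
print(MAX_RW)  # 2147479552

# Sketch of a chunked transfer loop that stays under the limit.
total = 2048 * 1024 * 1024   # the 2 GiB the benchmark tries to move
done, chunks = 0, 0
while done < total:
    n = min(total - done, MAX_RW)
    # ... issue one read/recv of n bytes here (hypothetical) ...
    done += n
    chunks += 1
print(done, chunks)  # 2147483648 2
```

A caller that cannot disable the single-copy path can work around the limit by splitting each transfer into chunks below MAX_RW_COUNT, as the loop above does.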
> ... are you running ?
> This reminds me of a bug in CMA that has already been fixed.
>
>
> can you try again with
>
>
> mpirun --mca btl_vader_single_copy_mechanism none ...
>
>
> Cheers,
>
> Gilles
>
> On Fri, Apr 13, 2018 at 1:51 PM, Kaiming Ouyang wrote:
Thank you very much.
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org