Messages by Thread
Re: [OMPI users] Can't start jobs with srun.
Riebs, Andy via users
Re: [OMPI users] [External] RE: Re: Can't start jobs with srun.
Prentice Bisbal via users
Re: [OMPI users] Can't start jobs with srun.
John Hearns via users
Re: [OMPI users] Can't start jobs with srun.
Patrick Bégou via users
Re: [OMPI users] [External] RE: Re: Can't start jobs with srun.
Prentice Bisbal via users
Re: [OMPI users] Can't start jobs with srun.
Daniel Letai via users
Re: [OMPI users] [External] Re: Can't start jobs with srun.
Prentice Bisbal via users
Re: [OMPI users] [External] Re: Can't start jobs with srun.
Ralph Castain via users
[OMPI users] Preloading the libraries "--preload-files" Effect
Kihang Youn via users
Re: [OMPI users] Preloading the libraries "--preload-files" Effect
Gilles Gouaillardet via users
[OMPI users] OMPI v2.1.5 with Slurm
Levi D Davis via users
Re: [OMPI users] OMPI v2.1.5 with Slurm
Gilles Gouaillardet via users
[OMPI users] opal_path_nfs freeze
Patrick Bégou via users
Re: [OMPI users] opal_path_nfs freeze
Jeff Squyres (jsquyres) via users
Re: [OMPI users] opal_path_nfs freeze
Patrick Bégou via users
Re: [OMPI users] opal_path_nfs freeze
Jeff Squyres (jsquyres) via users
[OMPI users] Problem with MPI_Spawn
Martín Morales via users
[OMPI users] Inquiry about pml layer
Arturo Fernandez via users
[OMPI users] Hwlock library problem
フォンスポール J via users
Re: [OMPI users] Hwlock library problem
Gilles Gouaillardet via users
Re: [OMPI users] Hwlock library problem
フォンスポール J via users
Re: [OMPI users] Hwlock library problem
Gilles Gouaillardet via users
Re: [OMPI users] Hwlock library problem
Jeff Squyres (jsquyres) via users
Re: [OMPI users] Hwlock library problem
フォンスポール J via users
Re: [OMPI users] Hwlock library problem
Gilles Gouaillardet via users
Re: [OMPI users] Hwlock library problem
フォンスポール J via users
Re: [OMPI users] Hwlock library problem
Gilles Gouaillardet via users
Re: [OMPI users] Hwlock library problem
フォンスポール J via users
[OMPI users] Meaning of mpiexec error flags
Mccall, Kurt E. (MSFC-EV41) via users
Re: [OMPI users] Meaning of mpiexec error flags
Ralph Castain via users
Re: [OMPI users] Meaning of mpiexec error flags
Mccall, Kurt E. (MSFC-EV41) via users
Re: [OMPI users] Meaning of mpiexec error flags
Ralph Castain via users
Re: [OMPI users] Meaning of mpiexec error flags
Mccall, Kurt E. (MSFC-EV41) via users
[OMPI users] file/process write speed is not scalable
Dong-In Kang via users
Re: [OMPI users] file/process write speed is not scalable
Gilles Gouaillardet via users
Re: [OMPI users] file/process write speed is not scalable
Dong-In Kang via users
Re: [OMPI users] file/process write speed is not scalable
Patrick Bégou via users
Re: [OMPI users] file/process write speed is not scalable
Dong-In Kang via users
[OMPI users] Clean termination after receiving multiple SIGINT
Kreutzer, Moritz via users
Re: [OMPI users] Clean termination after receiving multiple SIGINT
Ralph Castain via users
Re: [OMPI users] Clean termination after receiving multiple SIGINT
Kreutzer, Moritz via users
Re: [OMPI users] Clean termination after receiving multiple SIGINT
Ralph Castain via users
Re: [OMPI users] Clean termination after receiving multiple SIGINT
Kreutzer, Moritz via users
[OMPI users] Slow collective MPI File IO
Dong-In Kang via users
Re: [OMPI users] Slow collective MPI File IO
Gabriel, Edgar via users
Re: [OMPI users] Slow collective MPI File IO
Dong-In Kang via users
Re: [OMPI users] Slow collective MPI File IO
Collin Strassburger via users
Re: [OMPI users] Slow collective MPI File IO
Dong-In Kang via users
Re: [OMPI users] Slow collective MPI File IO
Gabriel, Edgar via users
Re: [OMPI users] Slow collective MPI File IO
Dong-In Kang via users
Re: [OMPI users] Slow collective MPI File IO
Gilles Gouaillardet via users
Re: [OMPI users] Slow collective MPI File IO
Dong-In Kang via users
Re: [OMPI users] Slow collective MPI File IO
George Reeke via users
Re: [OMPI users] Slow collective MPI File IO
Gilles GOUAILLARDET via users
Re: [OMPI users] Slow collective MPI File IO
Collin Strassburger via users
Re: [OMPI users] Slow collective MPI File IO
Benson Muite via users
[OMPI users] mpirun error only with one node
Garrett, Charles via users
Re: [OMPI users] mpirun error only with one node
John Hearns via users
Re: [OMPI users] mpirun error only with one node
Garrett, Charles via users
[OMPI users] mpirun CLI parsing
Jean-Baptiste Skutnik via users
Re: [OMPI users] mpirun CLI parsing
Ralph Castain via users
[OMPI users] topology.c line 940?
Chen Chieh 陳婕 via users
Re: [OMPI users] topology.c line 940?
Brice Goglin via users
[OMPI users] How to prevent linking in GPFS when it is present
Jonathon A Anderson via users
Re: [OMPI users] How to prevent linking in GPFS when it is present
Gilles Gouaillardet via users
Re: [OMPI users] How to prevent linking in GPFS when it is present
Jonathon A Anderson via users
Re: [OMPI users] How to prevent linking in GPFS when it is present
Gabriel, Edgar via users
Re: [OMPI users] How to prevent linking in GPFS when it is present
Jonathon A Anderson via users
[OMPI users] Regarding eager limit relationship to send message size
Raut, S Biplab via users
Re: [OMPI users] Regarding eager limit relationship to send message size
George Bosilca via users
Re: [OMPI users] Regarding eager limit relationship to send message size
Raut, S Biplab via users
Re: [OMPI users] Regarding eager limit relationship to send message size
George Bosilca via users
Re: [OMPI users] Regarding eager limit relationship to send message size
Raut, S Biplab via users
Re: [OMPI users] Regarding eager limit relationship to send message size
Jeff Squyres (jsquyres) via users
Re: [OMPI users] Regarding eager limit relationship to send message size
Raut, S Biplab via users
Re: [OMPI users] Regarding eager limit relationship to send message size
Jeff Squyres (jsquyres) via users
Re: [OMPI users] Regarding eager limit relationship to send message size
George Bosilca via users
Re: [OMPI users] Regarding eager limit relationship to send message size
Raut, S Biplab via users
[OMPI users] Fault in not recycling bsend buffer ?
Martyn Foster via users
Re: [OMPI users] Fault in not recycling bsend buffer ?
George Bosilca via users
Re: [OMPI users] Fault in not recycling bsend buffer ?
Martyn Foster via users
Re: [OMPI users] Fault in not recycling bsend buffer ?
Jeff Squyres (jsquyres) via users
[OMPI users] Limits of communicator size and number of parallel broadcast transmissions
Konstantinos Konstantinidis via users
Re: [OMPI users] Limits of communicator size and number of parallel broadcast transmissions
George Bosilca via users
[OMPI users] Question about run time message
Jeffrey Layton via users
Re: [OMPI users] Question about run time message
Jeff Squyres (jsquyres) via users
[OMPI users] Strange UCX error
Jure 4ocean via users
[OMPI users] Read from file performance degradation when increasing number of processors in some cases
Ali Cherry via users
Re: [OMPI users] Read from file performance degradation when increasing number of processors in some cases
Gilles Gouaillardet via users
Re: [OMPI users] Read from file performance degradation when increasing number of processors in some cases
Gilles Gouaillardet via users
Re: [OMPI users] Read from file performance degradation when increasing number of processors in some cases
Ali Cherry via users
Re: [OMPI users] Read from file performance degradation when increasing number of processors in some cases
Gabriel, Edgar via users
Re: [OMPI users] Read from file performance degradation when increasing number of processors in some cases
Ali Cherry via users
Re: [OMPI users] Read from file performance degradation when increasing number of processors in some cases
Gilles Gouaillardet via users
Re: [OMPI users] Read from file performance degradation when increasing number of processors in some cases
Ali Cherry via users
[OMPI users] Fw: openmpi/pmix hangs when increasing tasks
Levi D Davis via users
[OMPI users] OpenMPI compile Intel Fortran Error
Noele Franchi Leonardo via users
[OMPI users] How to use OPENMPI with different Service Level in Infiniband Virtual Lane?
Kihang Youn via users
Re: [OMPI users] How to use OPENMPI with different Service Level in Infiniband Virtual Lane?
John Hearns via users
Re: [OMPI users] How to use OPENMPI with different Service Level in Infiniband Virtual Lane?
Jeff Squyres (jsquyres) via users
[OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Matt Thompson via users
Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Gabriel, Edgar via users
Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Nathan Hjelm via users
Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Matt Thompson via users
Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Adam Simpson via users
Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Matt Thompson via users
Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Adam Simpson via users
Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI
Matt Thompson via users
[OMPI users] vader_single_copy_mechanism
Bennet Fauber via users
Re: [OMPI users] vader_single_copy_mechanism
Adrian Reber via users
[OMPI users] openmpi/pmix/ucx
Michael Di Domenico via users
Re: [OMPI users] openmpi/pmix/ucx
Ray Muno via users
Re: [OMPI users] openmpi/pmix/ucx
Michael Di Domenico via users
[OMPI users] Question about UCX progress throttling
Joseph Schuchart via users
[OMPI users] Shmem errors on Mac OS Catalina
Jin Tao via users
Re: [OMPI users] [EXTERNAL] Shmem errors on Mac OS Catalina
Gutierrez, Samuel K. via users
Re: [OMPI users] [EXTERNAL] Shmem errors on Mac OS Catalina
Ralph Castain via users
Re: [OMPI users] [EXTERNAL] Shmem errors on Mac OS Catalina
Jin Tao via users
Re: [OMPI users] [EXTERNAL] Shmem errors on Mac OS Catalina
Jin Tao via users
Re: [OMPI users] [EXTERNAL] Shmem errors on Mac OS Catalina
Jin Tao via users
[OMPI users] running mpirun with grid
Kulshrestha, Vipul via users
Re: [OMPI users] running mpirun with grid
Reuti via users
Re: [OMPI users] running mpirun with grid
Kulshrestha, Vipul via users
Re: [OMPI users] running mpirun with grid
Kulshrestha, Vipul via users
Re: [OMPI users] running mpirun with grid
Kulshrestha, Vipul via users
[OMPI users] Running mpirun with grid
Kulshrestha, Vipul via users
Re: [OMPI users] Running mpirun with grid
John Hearns via users
Re: [OMPI users] Running mpirun with grid
Gilles Gouaillardet via users
Re: [OMPI users] Running mpirun with grid
Ralph Castain via users
Re: [OMPI users] Running mpirun with grid
Kulshrestha, Vipul via users
Re: [OMPI users] Running mpirun with grid
Ralph Castain via users
Re: [OMPI users] Running mpirun with grid
Jeff Squyres (jsquyres) via users
Re: [OMPI users] Running mpirun with grid
Kulshrestha, Vipul via users
Re: [OMPI users] Running mpirun with grid
Ralph Castain via users
Re: [OMPI users] Running mpirun with grid
John Hearns via users
Re: [OMPI users] Running mpirun with grid
Kulshrestha, Vipul via users
Re: [OMPI users] Running mpirun with grid
Gilles Gouaillardet via users
Re: [OMPI users] Running mpirun with grid
Kulshrestha, Vipul via users
[OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
Angel de Vicente via users
Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
George Bosilca via users
Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
Angel de Vicente via users
Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
George Bosilca via users
Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
Joshua Ladd via users
Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
Angel de Vicente via users
Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
Joshua Ladd via users
Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?
Angel de Vicente via users
[OMPI users] OpenFabrics
Bennet Fauber via users
Re: [OMPI users] OpenFabrics
Jeff Squyres (jsquyres) via users
Re: [OMPI users] OpenFabrics
Bennet Fauber via users
Re: [OMPI users] OpenFabrics
Jeff Squyres (jsquyres) via users
Re: [OMPI users] OpenFabrics
Bennet Fauber via users
Re: [OMPI users] OpenFabrics
Jeff Squyres (jsquyres) via users
[OMPI users] OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Howard Pritchard via users
Re: [OMPI users] OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Ray Sheppard via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Jeff Squyres (jsquyres) via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Ralph Castain via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Ralph Castain via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Joshua Ladd via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Joshua Ladd via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Joshua Ladd via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Ralph Castain via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Joshua Ladd via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Joshua Ladd via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Joshua Ladd via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Joshua Ladd via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Ralph Castain via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Ralph Castain via users
Re: [OMPI users] [External] Re: OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Collin Strassburger via users
Re: [OMPI users] OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Howard Pritchard via users
Re: [OMPI users] OMPI returns error 63 on AMD 7742 when utilizing 100+ processors per node
Ralph Castain via users
[OMPI users] One-Sided operations CUDA support plan for OpenMPI>2.0
Bicheng Ying via users
[OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll
Raymond Muno via users
Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll
Åke Sandgren via users
Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll
Jeff Hammond via users
Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll
Gilles Gouaillardet via users
Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll
Åke Sandgren via users
Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll
Ray Muno via users
[OMPI users] mpicc fails to compile example code when --enable-static --disable-shared is used for installation.
Mehmet ÖREN via users
Re: [OMPI users] mpicc fails to compile example code when --enable-static --disable-shared is used for installation.
Jeff Squyres (jsquyres) via users
Re: [OMPI users] mpicc fails to compile example code when --enable-static --disable-shared is used for installation.
Mehmet ÖREN via users
[OMPI users] Subject: need a tool and its use to verify use of infiniband network
Heinz, Michael William via users
[OMPI users] need a tool and its use to verify use of infiniband network
SOPORTE MODEMAT via users
[OMPI users] need a tool and its use to verify use of infiniband network
Riesen, Lee Ann via users
Re: [OMPI users] need a tool and its use to verify use of infiniband network
Jeff Squyres (jsquyres) via users
[OMPI users] HELP: openmpi is not using the specified infiniband interface !!
SOPORTE MODEMAT via users
Re: [OMPI users] HELP: openmpi is not using the specified infiniband interface !!
Gilles Gouaillardet via users
Re: [OMPI users] HELP: openmpi is not using the specified infiniband interface !!
George Bosilca via users
[OMPI users] Univa Grid Engine and OpenMPI 1.8.7
Lane, William via users
Re: [OMPI users] Univa Grid Engine and OpenMPI 1.8.7
Reuti via users