Hi all,
I am encountering a silent hang involving MPI_Ssend and MPI_Irecv. The
subroutine in question is called by each processor and is structured similarly
to the pseudo code below. The subroutine is successfully called several thousand
times before the silent hang behavior manifests and never
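As a minimal illustrative sketch of that kind of pattern (in C; the ring
exchange and all names here are my own assumptions, not the original code):
each rank posts a nonblocking receive and then does a synchronous send to its
neighbour. MPI_Ssend only completes once the matching receive has been posted,
so a rank that reaches the Ssend before its partner has posted the Irecv, or a
request that is never waited on, can leave the job blocked with no error message:

    /* Sketch only: a simple ring exchange, not the poster's actual subroutine. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, left, right, sendbuf, recvbuf;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        left  = (rank - 1 + size) % size;
        right = (rank + 1) % size;
        sendbuf = rank;

        /* Post the receive first so the partner's MPI_Ssend can complete. */
        MPI_Irecv(&recvbuf, 1, MPI_INT, left, 0, MPI_COMM_WORLD, &req);

        /* Synchronous send: returns only after the matching receive
         * has been posted on the destination rank. */
        MPI_Ssend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD);

        /* Skipping this wait, or posting the Irecv after the Ssend on
         * some ranks, is a classic source of intermittent hangs. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        printf("rank %d received %d\n", rank, recvbuf);
        MPI_Finalize();
        return 0;
    }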
Hi,
On 24.07.2020 at 18:55, Lana Deere via users wrote:
> I have open-mpi 4.0.4 installed on my desktop and my small test programs are
> working.
>
> I would like to migrate the open-mpi to a cluster and run a larger program
> there. When moved,
Hi
Currently I am approaching a similar problem/workflow with spack and an AWS
S3 shared storage. Mounting the storage from a laptop gives you the same layout
as on each node of my AWS EC2 cluster.
As others mentioned before: you still have to recompile your work to take
advantage of the Xeon-class CPUs.
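With spack that amounts to rebuilding the spec for the compute node's
microarchitecture. A rough sketch (the target string and the "mycode" package
name are placeholders, not taken from this thread; "spack arch" on a compute
node reports the real value):

    spack install openmpi@4.0.4 target=skylake_avx512
    spack install mycode target=skylake_avx512 ^openmpi@4.0.4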
+1
In my experience, moving compiled software, especially something of the
complexity of (Open) MPI, is much more troublesome (and often just useless
frustration) and time-consuming than recompiling it.
Hardware, OS, kernel, libraries, etc., are unlikely to be compatible.
Gus Correa
On Fri, Jul 24, 2020 at 1
On 7/24/20 7:55 PM, Lana Deere via users wrote:
I have open-mpi 4.0.4 installed on my desktop and my small test programs
are working.
I would like to migrate the open-mpi to a cluster and run a larger
program there. When moved, the open-mpi installation is in a different
pathname than it was
While possible, it is highly unlikely that your desktop version is going to be
binary compatible with your cluster...
On Jul 24, 2020, at 9:55 AM, Lana Deere via users <users@lists.open-mpi.org> wrote:
I have open-mpi 4.0.4 installed on my desktop and my small test programs are
working.
I have open-mpi 4.0.4 installed on my desktop and my small test programs
are working.
I would like to migrate the open-mpi to a cluster and run a larger program
there. When moved, the open-mpi installation is in a different pathname
than it was on my desktop and it doesn't seem to work any longer
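If the copied build does turn out to be usable on the cluster (the other
replies are right that rebuilding is usually the safer route), Open MPI
supports being relocated via the OPAL_PREFIX environment variable. A sketch,
where /new/path and ./my_program are placeholders for the actual install
location and executable:

    export OPAL_PREFIX=/new/path
    export PATH=$OPAL_PREFIX/bin:$PATH
    export LD_LIBRARY_PATH=$OPAL_PREFIX/lib:$LD_LIBRARY_PATH
    mpirun -np 4 ./my_program

    # mpirun can also be given the installation root directly:
    mpirun --prefix $OPAL_PREFIX -np 4 ./my_program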
Hi, Chris,
The website you gave is almost empty. svn checkout
https://scm.projects.hlrs.de/anonscm/svn/mpitestsuite/ does not work.
Our code uses MPI point-to-point, collectives, communicators, and attributes,
basically MPI-2.1 stuff.
Thanks
--Junchao Zhang
On Jul 24, 2020, at 2:34 AM,
Hi,
MTT is a testing infrastructure that automates building MPI libraries and tests,
running the tests, and collecting test results, but it does not come with MPI
test suites itself.
Best
Christoph
Hello,
What do you want to test in detail?
If you are interested in testing combinations of datatypes and communicators,
the mpi_test_suite [1] may be of interest to you.
Best
Christoph Niethammer
[1] https://projects.hlrs.de/projects/mpitestsuite/
You may want to look into MTT: https://github.com/open-mpi/mtt
Cheers
Joseph
On 7/23/20 8:28 PM, Zhang, Junchao via users wrote:
Hello,
Does OMPI have a test suite that I can use to validate MPI
implementations from other vendors?
Thanks
--Junchao Zhang