Edwin,
Changes are summarized in the NEWS file.
We used to have two GitHub repositories, and they were "merged" recently.
With GitHub, you can list the closed PRs for a given milestone:
https://github.com/open-mpi/ompi-release/milestones?state=closed
Then you can click on a milestone and list the pull requests that went into that release.
Apologies for the dumb question... There used to be a way to dive in to see
exactly what bugs and features came into 1.10.4, 1.10.3, and on back to 1.8.8.
Is there a way to do that on GitHub?
Ed
Rick,
v2.0.x uses a 60 second hard-coded timeout (vs 600 seconds in master)
in ompi/dpm/dpm.c, see OPAL_PMIX_EXCHANGE
I will check your test and likely have the value bumped to 600 seconds
Cheers,
Gilles
Gilles;
The abort occurs somewhere between 30 and 60 seconds. Is there
some configuration setting that could influence this?
Rick
Rick,
How long does it take before the test fails?
There was a bug that caused a failure if no connection was received after
2 (3?) seconds, but I think it was fixed in v2.0.1.
That being said, you might want to try a nightly snapshot of the v2.0.x
branch
Cheers,
Gilles
Gilles;
Here is the client-side code. The start command is "mpirun -n 1
client 10" where 10 is used to size a buffer.
int numtasks, rank, dest, source, rc, count, tag=1;
MPI_Init(&argc,&argv);
if(argc > 1)
{
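(For comparison, a trimmed, self-contained client might look roughly like the
sketch below. This is only an illustration, not the original code: it assumes
the port string printed by the server is passed as a second command-line
argument, and that every rank passes the same root to MPI_Comm_connect.)

/* minimal client sketch - illustrative only, not the original code */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Comm server;
    int bufsize;

    MPI_Init(&argc, &argv);

    if (argc > 2)
    {
        bufsize = atoi(argv[1]);   /* e.g. 10, would be used to size a buffer */

        /* argv[2] is assumed to hold the port string printed by the server;
         * all ranks must pass the same root (0 here) */
        MPI_Comm_connect(argv[2], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);

        /* ... exchange data over the 'server' intercommunicator ... */

        MPI_Comm_disconnect(&server);
    }

    MPI_Finalize();
    return 0;
}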
Christophe,
If I read between the lines: you had Open MPI running just fine, then you
upgraded Xcode and that broke Open MPI. Am I right so far?
Did you build Open MPI yourself, or did you get binaries from somewhere
(such as brew)?
In the first case, you need to rebuild Open MPI.
(You have
Rick,
I do not think ompi_server is required here.
Can you please post a trimmed version of your client and server, and your two
mpirun command lines?
You also need to make sure all ranks pass the same root parameter when invoking
MPI_Comm_accept and MPI_Comm_connect.
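As a rough illustration, the accept side could look like the sketch below
(a sketch only, not your code; handing the port string over via stdout is an
assumption):

/* minimal server sketch - illustrative only */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);

    /* open a port and print it so the client can be started with it */
    MPI_Open_port(MPI_INFO_NULL, port);
    printf("server port: %s\n", port);
    fflush(stdout);

    /* every rank must pass the same root (0 here) */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);

    /* ... communicate with the client ... */

    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}

The server and client would then be launched with two separate mpirun commands,
with the printed port string handed to the client.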
Cheers,
Gilles
"Marlborou