Rob,
I agree with what Dave Love says. The distro-packaged OpenMPI should set
things up correctly for you.
I guess that is true on the head node, but from what you say it sounds like
the cluster compute nodes are being installed some other way.
On HPC clusters, when you are managing alternate packages
"Rob Malpass" writes:
> Hi
>
> Sorry if this isn't 100% relevant to this list but I'm at my wits' end.
>
> After a lot of hacking, I've finally configured Open MPI on my Ubuntu
> cluster. I had been having awful problems with not being able to find the
> libraries on the remote nodes
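A common cause of "cannot find libraries on the remote nodes" is that non-interactive remote shells don't pick up the Open MPI paths. A sketch of the two usual fixes follows; the install prefix `/usr/lib/openmpi` is an assumption here, so substitute whatever `ompi_info` reports on your nodes:

```shell
# Option 1: have mpirun propagate its install prefix to the remote nodes
# (the --prefix flag is a standard Open MPI mpirun option)
mpirun --prefix /usr/lib/openmpi -np 4 -hostfile hosts ./my_app

# Option 2: export the paths in a file that non-interactive shells
# source on every node (e.g. ~/.bashrc on Ubuntu)
export PATH=/usr/lib/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib/openmpi/lib:$LD_LIBRARY_PATH
```

Option 1 avoids editing dotfiles on every node; building Open MPI with `--enable-mpirun-prefix-by-default` makes it automatic.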
Dear George
Thanks for the reply. The code works properly after uncommenting the barrier.
Regards
Ryan
On Wed, 20 Apr 2016 19:45:09 +0530 George Bosilca wrote
>Ryan,
What you witness is that your execution over 2 processes drifted by 49
iterations. Because your example av
Dear Gilles
Thanks for the reply. The code works properly after uncommenting the barrier.
Regards
Ryan
On Wed, 20 Apr 2016 17:10:08 +0530 Gilles Gouaillardet wrote
>Ryan,
What if you uncomment the barrier before the printf?
I can see a scenario in which printf is invoked on rank 0 before all
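The pattern Gilles is suggesting can be sketched as below. This is a minimal illustrative program, not Ryan's actual code: the loop body and iteration count are assumptions. The point is that without the barrier, rank 0 can reach its printf while other ranks are still many iterations behind; the barrier guarantees every rank has finished the iteration before anything is printed.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int iter = 0; iter < 3; iter++) {
        /* ... per-iteration work on every rank ... */

        /* Without this barrier, rank 0 may print while other
           ranks are still several iterations behind (the "drift"
           George describes). With it, no rank proceeds until all
           ranks have completed this iteration. */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("iteration %d complete on all %d ranks\n", iter, size);
    }

    MPI_Finalize();
    return 0;
}
```

Note that a barrier synchronizes the ranks' progress, not their stdout; if several ranks print, the interleaving of their output at mpirun is still unordered.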
Hi Gilles and Ralph,
I was able to sort out my mess. In my last email I compared the
files from "SunOS_sparc/openmpi-2.0.0_64_gcc/lib64/openmpi" in
the attachment of my email to Ralph with the files from
"SunOS_sparc/openmpi-2.0.0_64_cc/lib64/openmpi" from my current
file system. That's the rea