Sorry for the late update. Anyway, per suggestions, here is what I did:
* prevent ssh login to the nodes for everyone except admins
* reconfigure torque with --with-pam, then reinstall torque, openmpi,
etc. (see the sketch below)
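For reference, the sshd change amounts to a PAM fragment roughly like the one
below. This is a sketch only: it assumes the pam_pbssimpleauth module that
--with-pam builds, and that admin access is handled separately (e.g. via
pam_access or sshd's AllowGroups); the exact stacking varies by distro.

    # /etc/pam.d/sshd (fragment, sketch only)
    # let a user log in only if they have a job running on this node
    account    required    pam_pbssimpleauth.so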
After testing for a few days with some intensive jobs, everything looks
fine :)
Thanks for all the suggestions!
That was careless of me. Thanks for pointing it out. Declaring "status"
and "ierr" and adding "implicit none" solved the problem.
Thanks again.
2013/2/19 Jeff Squyres (jsquyres)
> +1. The problem is that you didn't declare status or ierr. Since you
> didn't declare status, you're buffer overflowing, and random Bad Things
> happen from there.
+1. The problem is that you didn't declare status or ierr. Since you didn't
declare status, you're buffer overflowing, and random Bad Things happen from
there.
You should *always* use "implicit none" to catch these kinds of errors.
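For the archives, here is a minimal sketch of the declarations in question
(the names and values are illustrative, not taken from the attached program);
run it with at least 2 processes:

    program declare_example
      implicit none                        ! any undeclared name is now a compile error
      include 'mpif.h'
      integer :: ierr                      ! every Fortran MPI call returns its error code here
      integer :: status(MPI_STATUS_SIZE)   ! must be an integer array, not a scalar
      integer :: buf, rank

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) then
         call MPI_RECV(buf, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, status, ierr)
      else if (rank == 1) then
         buf = 42
         call MPI_SEND(buf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, ierr)
      end if
      call MPI_FINALIZE(ierr)
    end program declare_example

Without that status array, MPI_RECV still writes MPI_STATUS_SIZE integers into
whatever "status" implicitly became, which is exactly the overflow described
above.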
On Feb 18, 2013, at 2:02 PM, Gus Correa wrote:
> Hi Pradeep
>
Hi Pradeep
For what it is worth, in the MPI Fortran bindings/calls the
datatype to use is "MPI_INTEGER", not "mpi_int" (which you used;
MPI_INT is in the MPI C bindings):
http://linux.die.net/man/3/mpi_integer
Also, just to prevent variables from inadvertently coming in with
the wrong type, you could add "implicit none" to your code.
Hi Pradeep
I am not sure if this is the reason, but usually it is a bad idea to
force an order of receives (as you do in your receive loop:
first from sender 1, then from sender 2, then from sender 3).
Unless you implement it so, there is no guarantee the sends are
performed in this order. Better to receive from any source
(MPI_ANY_SOURCE) and handle the messages in whatever order they arrive.
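To make that concrete, here is a sketch of a receive loop that does not impose
an order (my own illustration, assuming one integer per sender; this is not
Pradeep's attached code):

    program any_order_recv
      implicit none
      include 'mpif.h'
      integer :: ierr, rank, nprocs, i, buf
      integer :: status(MPI_STATUS_SIZE)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

      if (rank == 0) then
         ! take one message from each of the other ranks, in whatever
         ! order they happen to arrive
         do i = 1, nprocs - 1
            call MPI_RECV(buf, 1, MPI_INTEGER, MPI_ANY_SOURCE, 0, &
                          MPI_COMM_WORLD, status, ierr)
            print *, 'Data received from ', status(MPI_SOURCE)
         end do
      else
         buf = rank
         call MPI_SEND(buf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, ierr)
      end if

      call MPI_FINALIZE(ierr)
    end program any_order_recv

If the receiver really needs the data ordered by rank, it can still receive
with MPI_ANY_SOURCE and file each message away by status(MPI_SOURCE) instead
of blocking on one specific sender at a time.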
I have attached a sample of the MPI program I am trying to write. When I
run this program using "mpirun -np 4 a.out", my output is:
Sender:1
Data received from1
Sender:2
Data received from1
Sender:2
And the run hangs there. I don't understand why.
Thank you for leading me towards the compiler. I found out that I was trying to
compile against version 10.1 instead of 11.0; exporting the right compiler
fixed my problem.
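(In case it helps anyone searching the archives, the fix boils down to
something like the lines below before configuring Open MPI; the compiler names
and install path here are my assumptions, not the exact ones used.)

    # sketch only: make sure configure picks up the intended compilers
    export PATH=/opt/intel/Compiler/11.0/bin:$PATH   # assumed install location
    export CC=icc CXX=icpc F77=ifort FC=ifort
    ./configure --prefix=$HOME/openmpi && make all install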
Thank you.
Best regards,
Mads Boye.