The problem was easily resolved by adding the missing export statements to
the shell script that was calling configure.
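As a hedged illustration of what that fix can look like, the wrapper script would export the relevant variables before invoking configure; the variable names and paths below are assumptions for the sake of the example, not taken from the original report:
--
#!/bin/sh
# Hypothetical wrapper script that calls configure. The point is that the
# variables must be *exported*, not merely assigned, or configure's
# sub-shells will not see them. Paths below are illustrative only.
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
export FC=mpif90

./configure --prefix=$HOME/myapp
--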
I have exactly the same problem as reported by Paul Hutton and David O
Gunter (2012-12-06).

Tail of configure output:
checking Fortran 90 kind of MPI_INTEGER_KIND (selected_int_kind(9))...
configure: error: Could not determine kind of
selected_int_kind(MPI_INTEGER_KIND)

Selection from config.log:
> The Intel Fortran 2013 compiler comes with support for Intel's MPI runtime and
> you are getting that instead of OpenMPI. You need to fix your path for all the
> shells you use.
[Tom]
Agree with Michael, but thought I would note something additional.
If you are using OFED's mpi-selector to
It's probably the same problem - try running 'mpirun -npernode 1 -tag-output
ulimit -a' on the remote nodes and see what it says. I suspect you'll find
that the limits there aren't what you expect.
BTW: the "-tag-output" option marks each line of output with the rank of the
process. Since all the outputs will be
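For reference, a hedged sketch of what that check might look like; since ulimit is a shell builtin rather than a standalone program, one common way is to wrap it in sh -c. The output lines shown are illustrative, not taken from the thread:
--
# Run the limits check once per remote node.
mpirun -npernode 1 -tag-output sh -c 'ulimit -a'

# With -tag-output each line is prefixed with the rank that produced it,
# so the nodes can be told apart, roughly like:
#   [1,0]<stdout>: stack size            (kbytes, -s) 10240
#   [1,1]<stdout>: stack size            (kbytes, -s) unlimited
--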
The Intel Fortran 2013 compiler comes with support for Intel's MPI runtime and
you are getting that instead of OpenMPI. You need to fix your path for all
the shells you use.
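One way to see which mpirun is actually being picked up, and to put OpenMPI first on the PATH, is sketched below; the install prefix /usr/local/openmpi is an assumption and should be replaced with the real location of your OpenMPI installation:
--
# If this points into Intel's mpirt directory, the PATH is the problem.
which mpirun
mpirun --version

# Put the OpenMPI binaries first on the PATH in the startup files of
# every shell you use (e.g. ~/.bashrc for bash, ~/.cshrc for csh).
export PATH=/usr/local/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/openmpi/lib:$LD_LIBRARY_PATH
--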
On Apr 1, 2013, at 5:12 AM, Pradeep Jha wrote:
> /opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpirun: line 96:
Hello,
When I try to run a parallel code, which runs perfectly elsewhere, on a new
Linux machine, using the following command:
--
mpirun -np 16 name_of_executable
--
I am getting the fo
On 3/31/13 12:20 AM, Duke Nguyen wrote:
I should really have asked earlier. Thanks for all the help.
I think I was excited too soon :). Increasing the stack size does help if I
run a job on a dedicated server. Today I tried to modify the cluster
(/etc/security/limits.conf, /etc/init.d/pbs_mom) an
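For context, a hedged sketch of the kind of changes usually made in those two files to raise the stack limit for batch jobs; the exact entries and values are assumptions about what was tried, not quotes from the thread:
--
# /etc/security/limits.conf -- raise the stack limit for all users
# (illustrative entries; choose values appropriate for your site)
*    soft    stack    unlimited
*    hard    stack    unlimited

# /etc/init.d/pbs_mom -- jobs launched by pbs_mom inherit its limits,
# so the limit is often also raised in the init script before the
# daemon starts:
ulimit -s unlimited
--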