On Mon, Nov 3, 2008 at 8:59 PM, Terry Frankcombe wrote:
>> On Nov 3, 2008, at 3:36 PM, Gustavo Seabra wrote:
>>
>>> For your Fortran issue, the Fortran 90 interface needs the Fortran 77
>>> interface. So you need to supply an F77 as well (the output from
>>> configure should indicate that the F90 interface was disabled because
>>> the F77 interface was disabled).
> On Nov 3, 2008, at 3:36 PM, Gustavo Seabra wrote:
>
>>> For your Fortran issue, the Fortran 90 interface needs the Fortran 77
>>> interface. So you need to supply an F77 as well (the output from
>>> configure should indicate that the F90 interface was disabled because
>>> the F77 interface was disabled).
I added the option -hostfile machinefile, where machinefile is a file
listing the IPs of the nodes:
# host names
192.168.0.100 slots=2
192.168.0.101 slots=2
192.168.0.102 slots=2
192.168.0.103 slots=2
192.168.0.104 slots=2
192.168.0.105 slots=2
192.168.0.106 slots=2
192.168.0.107 slots=2
192.168.
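For reference, a minimal sketch of how such a hostfile is passed to
mpirun (the process count and executable name are made up for
illustration):

  # launch 16 processes across the hosts listed in "machinefile"
  mpirun --hostfile machinefile -np 16 ./my_app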
On Nov 3, 2008, at 3:59 PM, PattiMichelle wrote:
> I just found out I need to switch from MPICH2 to Open MPI for some
> code I'm running. I noticed that it's available in an openSUSE repo
> (I'm using openSUSE 11.0 x86_64 on a TYAN 32-processor Opteron 8000
> system), but when I was using MPICH2 I seemed to have better luck
> compiling it from source.
The problem is that you didn't specify or allocate any nodes for the
job. At a minimum, you need to tell us which nodes to use via a hostfile.
Alternatively, are you using a resource manager to assign the nodes?
OMPI didn't see anything from one, but it could be that we just didn't
see the r
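As an aside, for a quick test without a hostfile, hosts can also be
named directly on the command line (the host names and executable below
are illustrative):

  # run one process on each of two named hosts
  mpirun --host 192.168.0.100,192.168.0.101 -np 2 ./my_app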
I just found out I need to switch from MPICH2 to Open MPI for some code
I'm running. I noticed that it's available in an openSUSE repo (I'm
using openSUSE 11.0 x86_64 on a TYAN 32-processor Opteron 8000 system),
but when I was using MPICH2 I seemed to have better luck compiling it
from source.
On Nov 3, 2008, at 3:36 PM, Gustavo Seabra wrote:
> For your Fortran issue, the Fortran 90 interface needs the Fortran 77
> interface. So you need to supply an F77 as well (the output from
> configure should indicate that the F90 interface was disabled because
> the F77 interface was disabled).
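For what it's worth, here is a sketch of supplying the Fortran compilers
at configure time (the compiler name and prefix are examples; use
whatever F77/F90 compilers are installed):

  # point configure at both a Fortran 77 and a Fortran 90 compiler
  ./configure F77=gfortran FC=gfortran --prefix=$HOME/local/openmpi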
Thanks a lot Ralph!
I corrected the no_local to nolocal, and now when I try to execute the
script step1 (please find it attached):
[rchaud@helios amber10]$ ./step1
[helios.structure.uic.edu:16335] [0,0,0] ORTE_ERROR_LOG: Not available
in file ras_bjs.c at line 247
---
On Mon, Nov 3, 2008 at 3:04 PM, Jeff Squyres wrote:
> On Nov 3, 2008, at 2:53 PM, Gustavo Seabra wrote:
>
>> Finally, I was *almost* able to compile Open MPI in Cygwin using the
>> following configure command:
>>
>> ./configure --prefix=/home/seabra/local/openmpi-1.3b1 \
>> --with-mpi-
On Nov 3, 2008, at 2:53 PM, Gustavo Seabra wrote:
> Finally, I was *almost* able to compile Open MPI in Cygwin using the
> following configure command:
> ./configure --prefix=/home/seabra/local/openmpi-1.3b1 \
>   --with-mpi-param_check=always --with-threads=posix \
>   --enable-
Can you replicate the scenario in smaller / different cases?
- write a sample plugin in C instead of C++
- write a non-MPI Fortran application that loads your C++ plugin
- ...?
In short, *MPI* shouldn't be interfering with Fortran/C++ common
blocks. Try taking MPI out of the picture and
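A rough sketch of that experiment (file names and the common-block
symbol are placeholders, and the Fortran main is assumed to dlopen()
the plugin itself):

  # build the Fortran main and the C++ plugin with no MPI involved
  gfortran -c fmain.f90
  g++ -fPIC -shared plugin.cpp -o plugin.so
  gfortran fmain.o -ldl -o testprog
  # compare how each compiler emitted the common-block symbol
  nm plugin.so | grep -i my_common
  nm fmain.o | grep -i my_common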
Hi everyone,
Here's a "progress report"... more questions at the end :-)
Finally, I was *almost* able to compile Open MPI in Cygwin using the
following configure command:
./configure --prefix=/home/seabra/local/openmpi-1.3b1 \
--with-mpi-param_check=always --with-threads=posix \
For starters, there is no "-no_local" option to mpirun. You might want
to look at mpirun --help, or man mpirun.
I suspect the option you wanted was --nolocal. Note that --nolocal
does not take an argument.
mpirun is confused by the incorrect option and is looking for an
incorrectly named executable.
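A minimal sketch of the corrected invocation (the process count and
executable are illustrative; amber10's parallel binary may be named
differently on your system):

  # --nolocal takes no argument; the hosts come from the hostfile
  mpirun --nolocal --hostfile machinefile -np 8 ./sander.MPI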
Hello!
I am a new user of Open MPI -- I've installed Open MPI 1.2.6 on our
x86_64 Linux Scyld Beowulf cluster in order to make it run with the
amber10 MD simulation package.
The nodes can see the home directory, i.e., a bpsh to the nodes works
fine and lists all the files in the home directory where I hav
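When bpsh works but MPI jobs won't start, a common first sanity check
is launching a trivial non-MPI command through mpirun (the hostfile
name is a placeholder):

  # every node should print its own hostname
  mpirun --hostfile machinefile -np 4 hostname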
Hello Jeff, Gustavo, Mi
Thanks for the advice. I am familiar with the differences in compiler
code generation for C, C++, and Fortran. I even tried to look at some of
the common-block symbols. The name of the symbol remains the same. The
only difference I observe is in the Fortran-compiled *.o