On 5 August 2008 at 17:01, Ben Payne wrote:
| Hello. I am not sure if this is the correct list to ask this
| question, so if you know of a more appropriate one please let me know.
|
| I think I am looking for a LiveCD that supports MPI, specifically one
| that has mpif90 built in, and can easily mount external (USB) drives
| for storing data.
I don't know anything about wien2k, but I do notice that the link line
of your output doesn't use mpif77 or mpif90. Is there a reason for
that? Indeed, I see the following in the link line:
... -L/usr/local/lib -lmpi
But you clearly installed Open MPI to /opt/intel/linux86_64. So tha
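As a quick sanity check (assuming the wrappers from the Open MPI you built
are first in your PATH), the wrapper compiler can print the link line it
would use:

mpif90 --showme:link

Those -L/-l flags are what should end up on wien2k's link line.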
Hi Sergio
Have you tried running anything like "hello world" first?
Your output suggests you didn't ... :/
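In case it helps, a minimal smoke test along those lines might look like
this (the file name hello.f90 and the process count are only examples):

cat > hello.f90 <<'EOF'
program hello
  use mpi
  implicit none
  integer :: rank, nprocs, ierr
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'Hello from rank', rank, 'of', nprocs
  call MPI_Finalize(ierr)
end program hello
EOF
mpif90 -o hello hello.f90
mpirun -np 4 ./hello

If that runs and prints one line per rank, the MPI installation itself is
probably fine and the problem is in the wien2k build.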
Best,
Roberto
On Tue, 5 Aug 2008, Sergio Yanuen Rodriguez wrote:
> Dear openmpi users:
>
> I am trying to compile wien2k in parallel on an Intel Core quad processor
> with Fedora 8 and 8 GB of RAM but I am getting some errors.
Hello. I am not sure if this is the correct list to ask this
question, so if you know of a more appropriate one please let me know.
I think I am looking for a LiveCD that supports MPI, specifically one
that has mpif90 built in, and can easily mount external (USB) drives
for storing data.
I have ac
One tip is to use the --log-file=valgrind.out.%q{OMPI_MCA_ns_nds_vpid}
option to valgrind, which will name the output file according to rank.
In the 1.3 series the variable has changed from OMPI_MCA_ns_nds_vpid to
OMPI_COMM_WORLD_RANK.
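On the command line that looks something like this (the tool choice and
application name are just placeholders; with the 1.3 series substitute
OMPI_COMM_WORLD_RANK in the same place):

mpirun -np 2 valgrind --tool=memcheck \
    --log-file=valgrind.out.%q{OMPI_MCA_ns_nds_vpid} ./my_app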
Ashley.
On Tue, 2008-08-05 at 17:51 +0200, George Bosilca wrote:
Dear openmpi users:
I am trying to compile wien2k in parallel on an Intel Core quad processor
with Fedora 8 and 8 GB of RAM but I am getting some errors. I am able to
install and run the serial version.
My software is:
Kernel version 2.6.25
gcc version 4.1.2
Intel Fortran compiler 10.1.015
Intel
Jan,
I'm using valgrind with Open MPI on a [very] regular basis and I have
never had any problems. I usually want to know the execution path of the
MPI applications. For this I use:
mpirun -np XX valgrind --tool=callgrind -q --log-file=some_file ./my_app
I just ran your example:
mpirun -np 2
Hi,
I wanted to determine the peak heap memory usage of each MPI process in my
application. With MVAPICH this can be done simply by substituting a wrapper
shell script for the MPI executable and having that wrapper script start
"valgrind --tool=massif ./prog.exe". However, when I tried the same