Hi Dorian,
thank you for your message.
doriankrause wrote:
The trouble is with an MPI code that runs fine with an Open MPI 1.1.2
library compiled without InfiniBand support (I have tested the
scalability of the code up to 64 cores; the nodes have 4 or 8 cores,
and the results are exactly what I
Thanks for your reply, Jeff. So I tried the following:
#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv) {
    int np, me, sbuf = -1, rbuf = -2, mbuf = 1000;
    int data[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD
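The post is cut off above. For context, one way such a program might be completed into something runnable (a sketch of ours, not Win Than's actual code; the point-to-point exchange after the size check and the payload values are assumptions):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int np, me, sbuf = -1, rbuf = -2, mbuf = 1000, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    if (np < 2) MPI_Abort(MPI_COMM_WORLD, 1);  /* need at least 2 ranks */

    if (me == 0) {
        /* Root receives one int from every other rank, in rank order. */
        for (i = 1; i < np; i++) {
            MPI_Recv(&rbuf, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank %d\n", rbuf, i);
        }
    } else {
        sbuf = me + mbuf;  /* arbitrary payload */
        MPI_Send(&sbuf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}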
Yes, it works for me... :-\
With initial install dir of /home/jsquyres/bogus (in my $PATH and
$LD_LIBRARY_PATH already):
[11:30] svbu-mpi:~/mpi % mpicc hello.c -o hello
[11:30] svbu-mpi:~/mpi % mpirun -np 2 hello
stdout: Hello, world! I am 0 of 2 (svbu-mpi.cisco.com)
stdout: Hello, world!
This looks like a question for the MPICH2 developers.
Specifically, it looks like you are using MPICH2, not Open MPI. They
are entirely different software packages maintained by different
people -- we're not really qualified to answer questions about
MPICH2. The top-level API is the same
In the example you cite below, it looks like you're mixing MPI_Gather
and MPI_Send.
MPI_Gather is a "collective" routine; it must be called by all
processes in the communicator. All processes will send a
buffer/message to the root; only the root process will receive all
the buffers/messages.
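For reference, a minimal sketch of the collective pattern described above (illustrative code, not from the original post; the buffer names are made up):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int np, me, i;
    int sbuf, *rbuf = NULL;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    sbuf = me * 100;                      /* each rank's contribution */
    if (me == 0)                          /* only the root needs a receive buffer */
        rbuf = (int *) malloc(np * sizeof(int));

    /* Every rank, including the root, must make this call. */
    MPI_Gather(&sbuf, 1, MPI_INT, rbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (me == 0) {
        for (i = 0; i < np; i++)
            printf("got %d from rank %d\n", rbuf[i], i);
        free(rbuf);
    }
    MPI_Finalize();
    return 0;
}

Note that no rank calls MPI_Send here; the gather itself moves the data.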
PS: extra question:
qsub -I -q standby -l select=1:ncpus=8
mpirun -np 4 ./hello
running mpdallexit on steele-a137.rcac.purdue.edu
LAUNCHED mpd on steele-a137.rcac.purdue.edu via
RUNNING: mpd on steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu_36959 (172.18.24.147)
time for 100 loops = 2.98023223877e-05 seconds
Hi, thanks for your reply. Let's say I have 3 processes. I send
messages from the 1st and 2nd processes and want to gather them in
process 0, so I tried the following. It couldn't receive the messages
sent from processes 1 and 2.
http://www.nomorepasting.com/getpaste.php?pasteid=22985
PS: is MPI_Recv b
Win Than Aung wrote:
MPI_Recv() << is it possible to receive the message sent from
other sources? I tried MPI_ANY_SOURCE in place of source but it
doesn't work out
Yes, of course. Can you send a short example of what doesn't work? The
example should presumably be less than about 20 lines
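As one illustration of MPI_ANY_SOURCE (a sketch of ours, not Jeff's example): rank 0 receives one message from each other rank in whatever order they arrive, and reads the actual sender out of the status.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int np, me, i, val;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    if (me == 0) {
        for (i = 1; i < np; i++) {
            /* Accept from any sender; the status records who it was. */
            MPI_Recv(&val, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
            printf("rank 0 got %d from rank %d\n", val, status.MPI_SOURCE);
        }
    } else {
        val = me * 10;
        MPI_Send(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}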
MPI_Recv() << is it possible to receive the message sent from other
sources? I tried MPI_ANY_SOURCE in place of source but it doesn't work out
thanks
Hello,
I've compiled the MPIBLAST-1.5.0-pio app on a Rocks 4.3, Voltaire
InfiniBand-based Linux cluster using Open MPI 1.2.8 + Intel 10
compilers.
The job is not running. Let me explain the configs:
SGE job script:
$ cat sge_submit.sh
#!/bin/bash
#$ -N OMPI-Blast-Job
#$ -S /bin/bash
#$ -cwd
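The script is cut off above. For context only, an Open MPI job script under SGE typically continues with a parallel-environment request and an mpirun line along these lines (the PE name "orte", the slot count, and the trailing arguments are assumptions, not from the original post):

#$ -pe orte 8
mpirun -np $NSLOTS ./mpiblast ...

With a tight SGE integration, mpirun reads the slot allocation from the parallel environment, so no machinefile is needed.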
Hello,
I am new to this list, where I hope to find a solution to a problem
that I have been having for quite a long time.
I run various versions of Open MPI (from 1.1.2 to 1.2.8) on a cluster
with InfiniBand interconnects that I use and administer at the same
time. The OpenFabrics stack is OFED
To make sure you don't use any "leftover" from another system install
when upgrading, you should specify --enable-mpirun-prefix-by-default
when configuring the source tree for compilation. This will always
select the binaries and libs that are part of the mpirun you are using.
Aurelien
On 22 Dec
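For illustration, the configure step might look like this (the install prefix here is hypothetical):

shell$ ./configure --prefix=/opt/openmpi-1.2.8 --enable-mpirun-prefix-by-default
shell$ make all install

With that option, mpirun behaves as if --prefix had been given, so remote nodes pick up the binaries and libraries matching the mpirun that launched the job.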
You are using MPICH. You should submit your question to their
mailing list to get the most accurate answers. From the log you
provide, I can still guess that you need to define a machinefile
containing at least 4 computing resources. If you need more details
concerning machinefiles in
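For reference, a machinefile for mpd-based MPICH2 is just a list of hosts, one per line, optionally with a CPU count per host (the hostnames below are made up):

node01:4
node02:4

You would then boot the mpd ring with something like "mpdboot -n 2 -f machinefile" before running mpirun -np 4 ./hello.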
mpirun -np 4 ./hello
running mpdallexit on steele-a137.rcac.purdue.edu
LAUNCHED mpd on steele-a137.rcac.purdue.edu via
RUNNING: mpd on steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu_36959 (172.18.24.147)
time for 100 loops = 2.98023223877e-05 seconds
too few entries in machinefile
I put