Thanks Reuti. That sorted out the problem.
Now mpiblast is able to run, but only on a single node, i.e. mpiformatdb
-> 4 fragments, mpiblast -> 4 processes. Since each node has 4
cores, the job runs on a single node and works fine. With 8
processes, the job fails with the following error message …
Biagio Lucini wrote:
Hello,
I am new to this list, where I hope to find a solution for a problem
that I have been having for quite a long time.
I run various versions of Open MPI (from 1.1.2 to 1.2.8) on a cluster
with InfiniBand interconnects that I use and administer at the same
time. The o…
Pavel Shamis (Pasha) wrote:
Biagio Lucini wrote:
Hello,
I am new to this list, where I hope to find a solution for a problem
that I have been having for quite a long time.
I run various versions of Open MPI (from 1.1.2 to 1.2.8) on a cluster
with InfiniBand interconnects that I use and administer …
Greetings,
I have observed strange behavior with an application running with
Open MPI 1.2.8, OFED 1.2. The application runs in two "modes", fast
and slow. The execution time is either within one second of 108 sec.
or within one second of 67 sec. My cluster has 1 Gig Ethernet and
DDR InfiniBand, so …
Hi,
On 24.12.2008, at 07:55, Sangamesh B wrote:
Thanks Reuti. That sorted out the problem.
Now mpiblast is able to run, but only on a single node, i.e. mpiformatdb
-> 4 fragments, mpiblast -> 4 processes. Since each node has 4
cores, the job runs on a single node and works fine. With 8 …
Teige, Scott W wrote:
Greetings,
I have observed strange behavior with an application running with
Open MPI 1.2.8, OFED 1.2. The application runs in two "modes", fast
and slow. The execution time is either within one second of 108 sec.
or within one second of 67 sec. My cluster has 1 Gig Ethernet …
If the basic test runs, the installation is OK. So what happens when you
try to run your application? What is the command line? What is the error
message? Do you run the application on the same set of machines, with
the same command line, as IMB?
Pasha
Yes to both questions: the OMPI version is …
Reuti wrote:
Hi,
On 24.12.2008, at 07:55, Sangamesh B wrote:
Thanks Reuti. That sorted out the problem.
Now mpiblast is able to run, but only on a single node, i.e. mpiformatdb
-> 4 fragments, mpiblast -> 4 processes. Since each node has 4
cores, the job runs on a single node and works …
For your runs with Open MPI over InfiniBand, try using openib,sm,self
for the BTL setting, so that shared-memory communications are used
within a node. It would give us another data point to help diagnose
the problem. As for other things we would need to help diagnose the
problem, please follow th…
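A hedged sketch of what that invocation could look like on the command line; the application name `./myapp` and the process count are placeholders, not taken from the thread:

```shell
# Restrict Open MPI 1.2.x to the InfiniBand (openib), shared-memory (sm)
# and loopback (self) BTLs; "sm" carries traffic between ranks on the
# same node, "openib" carries traffic between nodes.
mpirun --mca btl openib,sm,self -np 8 ./myapp
```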
Thanks Eugene for your example, it helps me a lot. I have bumped into one
more problem.
Let's say I have the file content as follows.
I have a total of six files, which all contain real and imaginary values:
"
1.001212 1.0012121 //0th
1.001212 1.0012121 //1st
1.001212 1.0012121 //2nd
1.001212 1…
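Assuming each line holds a whitespace-separated real and imaginary part (the trailing `//0th`, `//1st` comments apparently just index the lines), parsing one such line might be sketched in C as below; the function name `parse_complex_line` is hypothetical, not from the thread:

```c
#include <stdio.h>

/* Parse one "real imaginary" line into *re and *im.
 * Returns 1 on success, 0 on failure. Anything after the two
 * numbers (e.g. a "//0th" comment) is simply not consumed. */
int parse_complex_line(const char *line, double *re, double *im)
{
    return sscanf(line, "%lf %lf", re, im) == 2;
}
```

For example, `parse_complex_line("1.001212 1.0012121 //0th", &re, &im)` fills `re` and `im` with the two values; reading a whole file is then a loop of `fgets` plus this call.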
I got the solution. I just need to set the appropriate tag on the send and
the receive. Sorry for asking.
Thanks,
winthan
On Wed, Dec 24, 2008 at 10:36 PM, Win Than Aung wrote:
> Thanks Eugene for your example, it helps me a lot. I have bumped into one
> more problem.
> Let's say I have the file content as follows.
>
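The fix described above, using matching tags on the send and receive, might be sketched as a minimal two-rank MPI program; the tag value 42 and the buffer contents are arbitrary placeholders (run under an MPI launcher, e.g. `mpirun -np 2 ./a.out`, so no inline test is given):

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal sketch: rank 0 sends one real/imaginary pair to rank 1.
 * A send and a receive only match when communicator, peer rank AND
 * tag agree, so both sides must use the same tag (here 42). */
int main(int argc, char **argv)
{
    int rank;
    double buf[2] = {1.001212, 1.0012121}; /* real, imaginary */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(buf, 2, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, 2, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("received %f %f\n", buf[0], buf[1]);
    }

    MPI_Finalize();
    return 0;
}
```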