Are you able to run simple MPI applications with 1.4.3 or 1.4.4 on your OS?
E.g., the "ring_c" program in the examples/ directory? This might be a good
test to see if OMPI's TCP is working at all.
Assuming that works... Have you tried attaching debuggers to see where your
process is hanging?
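For reference, the sanity test amounts to something like this (assuming you build ring_c from the examples/ directory of the source tree with the wrapper compiler of the installation under test):

    cd openmpi-1.4.3/examples
    mpicc ring_c.c -o ring_c    # wrapper compiler from the install being tested
    mpirun -np 4 ./ring_c       # a working TCP BTL passes the message around the ring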
Sorry for the delay in replying. :-\
I'm afraid we don't test with the NAG compiler. :-(
Would this be something that NAG would be willing to do for the Open MPI
community? Companies like Absoft do -- we have a lightweight test suite that can
be fully automated (i.e., run via cron). Ping me off-list if this sounds interesting.
Huh; wonky.
Can you set the MCA parameter "mpi_abort_delay" to -1 and run your job again?
This will prevent all the processes from dying when MPI_ABORT is invoked. Then
attach a debugger to one of the still-live processes after the error message is
printed. Can you send the stack trace? It would help us see where the abort is
coming from.
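In other words, something along these lines (the program name and PID are placeholders):

    mpirun --mca mpi_abort_delay -1 -np 4 ./my_app
    # once the error message is printed, the processes linger; pick one
    ps -ef | grep my_app
    gdb -p <pid>    # then type "bt" at the gdb prompt to get the stack trace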
On Jun 28, 2011, at 1:46 PM, Bill Johnstone wrote:
> I have a heterogeneous network of InfiniBand-equipped hosts which are all
> connected to the same backbone switch, an older SDR 10 Gb/s unit.
>
> One set of nodes uses the Mellanox "ib_mthca" driver, while the other uses
> the "mlx4" driver.
On Jul 7, 2011, at 5:59 PM, Qasim Ali wrote:
> Hi Jeff,
>
> Thanks for following up. I have a question to clear things.
>
> 1. If I do not specify any affinity in mpirun, what memory allocation policy
> is used by default?
None.
> a. When it is not compiled with libnuma
> b. When it is compiled with libnuma
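For completeness: the 1.4 series does have an affinity knob if you want one, and
what your build actually supports is best checked with ompi_info. A sketch, with
my_app as a placeholder:

    ompi_info --param mpi all | grep affinity    # see which affinity parameters exist
    mpirun --mca mpi_paffinity_alone 1 -np 4 ./my_app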
Miguel,
Thanks for the assistance. I don't have the MPI options you spoke of, so
I figured that might have been part of the HPC Pack. I found a couple of
web pages that helped me make progress. I'm not 100% there, but I'm much
closer, say 85% of the way there.
Now I can get a Fortran+MPI program to compile and run.
Prentice,
I didn't have to install the HPC Pack; as far as I know, it is only needed
when you want to develop/debug on a cluster. I'm sorry I can't help you
with VS 2010 (I hated it and switched back to VS 2008), but the
instructions to configure VS 2010 seem to be similar; check the MPICH2
guide for the details.
Hi,
On Jul 7, 2011, at 01:09, Mohan, Ashwin wrote:
> I use the following command (mpirun --prefix /usr/local/openmpi1.4.3 -np 4
> hello) to successfully execute a simple hello world program on a single node.
> Each node has 4 slots. Following the successful execution on one node, I
> wish to run the same program across multiple nodes.
Hi
It seems that you have mixed an "old" LAM-MPI installation with OpenMPI.
To make sure your OpenMPI installation is ok you could try to use the
complete path to mpirun:
/data1/cluster/openmpi/bin/mpirun -np 1 /tmp/openmpi-1.4.3/examples/ring_c
You should also make sure that the compile-command (mpicc) comes from the
same OpenMPI installation.
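For example (which(1) shows what your shell actually picks up; both should
resolve under /data1/cluster/openmpi/bin):

    which mpirun mpicc
    /data1/cluster/openmpi/bin/mpicc ring_c.c -o ring_c
    /data1/cluster/openmpi/bin/mpirun -np 1 ./ring_c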
Hello all:
I installed openmpi-1.4.3 on Red Hat with the following steps:
1. ./configure --prefix=/data1/cluster/openmpi
2. make
3. make install
Then I compiled the examples of openmpi-1.4.3 with the following
step:
1. make
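A typical next step, assuming a bash shell and the prefix above, is to make
the new install visible in the environment:

    export PATH=/data1/cluster/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/data1/cluster/openmpi/lib:$LD_LIBRARY_PATH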