Hi,
I have installed OpenMPI 1.2.6, using gcc with bounds checking. But when I
compile an MPI program, I get the same error many times:
../opal/include/opal/sys/amd64/atomic.h:89:Address in memory: 0x8 .. 0xb
../opal/include/opal/sys/amd64/atomic.h:89:Size: 4 bytes
I found that the error starts at this line of code:
static opal_atomic_lock_t class_lock = { { OPAL_ATOMIC_UNLOCKED } };
in class/opal_object.c, line 52
and triggers the bounds error in this code block:
static inline int opal_atomic_cmpset_64( volatile int64_t *addr,
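For context: the function above implements an atomic compare-and-swap, and
class_lock is a spinlock built on top of it. A minimal sketch of that
pattern, using GCC's __sync builtins instead of Open MPI's hand-written
assembly (names like my_lock_t are made up for illustration):

#include <stdint.h>

#define UNLOCKED 0
#define LOCKED   1

typedef struct { volatile int32_t value; } my_lock_t;

static my_lock_t lock_sketch = { UNLOCKED };

static inline void my_lock_acquire(my_lock_t *lock)
{
    /* Spin until we atomically change UNLOCKED -> LOCKED. */
    while (!__sync_bool_compare_and_swap(&lock->value, UNLOCKED, LOCKED))
        ;  /* busy-wait */
}

static inline void my_lock_release(my_lock_t *lock)
{
    __sync_synchronize();   /* full barrier before dropping the lock */
    lock->value = UNLOCKED;
}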
Hello again!
I ran a few more tests, calling mpirun with different parameters. I have also
included the output from
mpirun -debug-daemons -hostfile myhosts -np 2 mpi-test.exe
Daemon [0,0,1] checking in as pid 14725 on host bla.bla.bla.50
Daemon [0,0,2] checking in as pid 6375 on host bla.bla.b
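For reference, the myhosts file is nothing special -- one host per line,
with an optional slot count. It looks roughly like this (hostnames are
placeholders here):

# myhosts
node50.example.com slots=1
node51.example.com slots=1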
Hi!
OK, I found the problem. I reinstalled OMPI on both PCs, but this time only
locally in the user's home directory. Now the sample code works perfectly.
I'm not sure where the error really was. It could have been a problem with
the Gentoo installation, because OMPI is still marked
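For anyone who wants to try the same workaround, a per-user install is
essentially the following (the prefix path is just an example):

./configure --prefix=$HOME/openmpi
make all install
# make the new install the one found first
export PATH=$HOME/openmpi/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH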
Hi,
We have a dual-CPU Xeon cluster running Red Hat. I have compiled OpenMPI 1.2.6
with g95, and then AMBER (a scientific program for parallel molecular
simulations; Fortran 77 & 90). Both compilations seem to be fine. However,
AMBER runs successfully from the command prompt with "mpiexec -np x ",
but using PBS batch
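For reference, a minimal PBS job script for this kind of run looks roughly
like the following; the install path and the AMBER executable name are
placeholders, not a known-good configuration. A frequent culprit is that
the batch environment does not match the interactive shell (PATH,
libraries, node list), so using full paths is worth trying:

#!/bin/sh
#PBS -l nodes=2:ppn=2
#PBS -j oe
cd $PBS_O_WORKDIR
# Use the full path to the same mpiexec used interactively, and hand
# PBS's node list to it explicitly.
$HOME/openmpi/bin/mpiexec -np 4 -machinefile $PBS_NODEFILE ./sander.MPI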
Hi,
are you sure it was not a firewall issue on the SuSE 10.2 machine?
If connections from the Gentoo machine trying to reach the orted on the
SuSE box are being blocked, they should show up in /var/log/firewall.
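If that log is noisy, filtering for dropped packets narrows it down
quickly; the exact log tags vary by SuSEfirewall2 version, but something
along these lines:

grep -i 'SFW2.*DROP' /var/log/firewall | tail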
For the time being, try stopping the firewall (as root) with
/etc/init.d/SuSEfirewall2_setup stop
and test whe
Hi Open MPI users and developers,
The mailman service hosted by the Open Systems Lab will be
upgraded this afternoon at 2pm Eastern, and thus will be
unavailable for about an hour. See our sysadmin's notice:
The new version (2.1.10) of mailman was released on Apr 21, 2008.
It has a lot of
I have a weird problem that shows up when I use LAM or Open MPI, but not MPICH.
I have a parallelized code working on a really large matrix. It
partitions the matrix column-wise and ships the blocks off to processors,
so any given processor is working on a matrix with the same number of
rows as the origi
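For reference, the column distribution I mean is essentially the following
stripped-down sketch (not my actual code; the global column count is just
an example):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int ncols = 1000;   /* example global column count */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank takes a contiguous block of columns, keeping the
     * full number of rows; the remainder is spread over the first
     * ncols % size ranks. */
    int rem    = ncols % size;
    int base   = ncols / size;
    int mycols = base + (rank < rem ? 1 : 0);
    int first  = rank * base + (rank < rem ? rank : rem);

    printf("rank %d: columns %d..%d (%d of %d)\n",
           rank, first, first + mycols - 1, mycols, ncols);

    MPI_Finalize();
    return 0;
}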