Hi all, sorry for replying to this thread so late. I tried it and it works well.
However, it takes about 12 hours to compile the whole package, so I am going
to cross-compile on my laptop with a proper toolchain I created. Here's
the command line I used:
./configure --build=x86_64-redhat-linux --host=arm-unknown-linux
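A fuller invocation along these lines would also point configure at the cross
compilers explicitly; the compiler names and install prefix below are only
placeholders for whatever your toolchain actually provides:

./configure --build=x86_64-redhat-linux --host=arm-unknown-linux \
    CC=arm-unknown-linux-gcc CXX=arm-unknown-linux-g++ \
    --prefix=/opt/openmpi-arm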
Sorry! I deleted the earlier mails, so I have to post a new one.
I stopped iptables on the three nodes. Ping is working OK
(pruebaborja to clienteprueba / clienteprueba to pruebaborja).
My /etc/network/interfaces on node pruebaborja (the master node):
#The loopback network interface
auto lo
iface lo
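The file is cut off here; for reference, a typical Debian-style
/etc/network/interfaces with a static address on the cluster interface would
look roughly like the following (interface name and addresses are placeholders,
not the actual values from this setup):

auto lo
iface lo inet loopback

# cluster-facing interface (address/netmask are placeholders)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0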
Dear Developers,
I am running into memory problems when frequently creating/allocating MPI windows
and their memory. Below is a sample code snippet reproducing the problem:
#include <stdio.h>
#include <mpi.h>
#define NEL 8
#define NTIMES 100
int main(int argc, char *argv[]) {
int i;
double w[
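The sample code is truncated above. A minimal, self-contained sketch of the
pattern being described (allocating and freeing an MPI window NTIMES in a loop)
might look like the following; the use of MPI_Alloc_mem/MPI_Win_create here is
an assumption about the cut-off part, not the original poster's exact code:

#include <stdio.h>
#include <mpi.h>

#define NEL 8
#define NTIMES 100

int main(int argc, char *argv[])
{
    double *w;
    MPI_Win win;
    int i;

    MPI_Init(&argc, &argv);

    for (i = 0; i < NTIMES; i++) {
        /* allocate the window memory and expose it for RMA */
        MPI_Alloc_mem(NEL * sizeof(double), MPI_INFO_NULL, &w);
        MPI_Win_create(w, NEL * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* ... one-sided communication would go here ... */

        /* release the window and its memory each iteration */
        MPI_Win_free(&win);
        MPI_Free_mem(w);
    }

    MPI_Finalize();
    return 0;
}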
On Wed, 16 Jan 2013 07:46:41 -0800
Ralph Castain wrote:
> This one means that a backend node lost its connection to mpirun. We use a
> TCP socket between the daemon on a node and mpirun to launch the processes
> and to detect if/when that node fails for some reason.
Hm. And what would be the r
I tried building from OMPI 1.6.3 tarball with the following
./configure:
./configure
--prefix=/apotto/home1/homedirs/fsimula/Lavoro/openmpi-1.6.3/install/ \
--disable-mpi-io \
--disable-io-romio \
--enable-dependency-tracking \
--without-slurm \
--with-platform=optimized \
--disable-mpi-f77 \
--
On Jan 17, 2013, at 2:25 AM, Jure Pečar wrote:
> On Wed, 16 Jan 2013 07:46:41 -0800
> Ralph Castain wrote:
>
>> This one means that a backend node lost its connection to mpirun. We use a
>> TCP socket between the daemon on a node and mpirun to launch the processes
>> and to detect if/when th
Configure OMPI with --enable-debug, and then run
mpirun -n 1 -host clienteprueba -mca plm_base_verbose 5 hostname
You should see a daemon getting launched and successfully reporting back to
mpirun, and then the application getting launched on the remote node.
On Jan 17, 2013, at 1:25 AM, borja
Just as an FYI: we have removed the Java bindings from the 1.7.0 release due to
all the reported errors - looks like that code just isn't ready yet for
release. It remains available on the nightly snapshots of the developer's trunk
while we continue to debug it.
With that said, I tried your exa
On Jan 16, 2013, at 6:41 AM, Leif Lindholm wrote:
> That isn't, technically speaking, correct for the Raspberry Pi - but it is a
> workaround if you know you will never actually use the asm implementations of
> the atomics, but only the inline C ones..
>
> This sort of hides the problem that t
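For context, the "inline C" implementations mentioned here are, I assume, the
compiler-builtin fallbacks (e.g. GCC's __sync intrinsics) rather than the
hand-written ARM assembly, roughly along these lines:

#include <stdio.h>

int main(void)
{
    int counter = 0;

    /* compiler-provided atomic add - the "inline C" style fallback,
       as opposed to a hand-coded ldrex/strex assembly sequence */
    __sync_fetch_and_add(&counter, 1);

    printf("counter = %d\n", counter);
    return 0;
}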