On Jun 8, 2012, at 8:51 AM, BOUVIER Benjamin wrote:
> I have downloaded the Netpipe benchmarks suite, launched `make mpi` and
> launched with mpirun the resulting executable.
>
> Here is an interesting fact: launching this executable on 2 nodes works;
> on 3 nodes, it blocks, I guess on connect.
Hi Jeff,
Thanks for your answer.
I have downloaded the Netpipe benchmarks suite, launched `make mpi` and
launched with mpirun the resulting executable.
Here is an interesting fact: launching this executable on 2 nodes works;
on 3 nodes, it blocks, I guess on connect.
Each process is
Hi Bill,
If you *really* have time, you can dig into the log and find out why
configure failed. It looks like configure failed when it tried to
compile this code:
.text
# .gsym_test_func
.globl .gsym_test_func
.gsym_test_func:
# .gsym_test_func
configure:26752: result: none
conf
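One way to reproduce such a configure probe by hand is to feed the same snippet straight to the assembler and look at the result. This is a minimal sketch, not what configure literally runs: the file names are made up, and it assumes a GNU toolchain with `as` on the PATH.

```shell
# Hypothetical reproduction of the configure probe: write the test
# snippet to a file and see whether the system assembler accepts it.
cat > conftest_gsym.s <<'EOF'
	.text
# .gsym_test_func
	.globl .gsym_test_func
.gsym_test_func:
# .gsym_test_func
EOF

if as conftest_gsym.s -o conftest_gsym.o 2>assembler.err; then
    echo "assembler accepted the gsym test"
else
    echo "assembler rejected the gsym test:"
    cat assembler.err
fi
```

If the assembler rejects the snippet here too, the error it prints is usually far more informative than the one-line "result: none" in config.log.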
On Jun 8, 2012, at 6:43 AM, BOUVIER Benjamin wrote:
> #include <mpi.h>
> #include <stdio.h>
> #include <string.h>
>
> int main(int argc, char **argv)
> {
>    int rank, size;
>    const char someString[] = "Can haz cheezburgerz?";
>
>    MPI_Init(&argc, &argv);
>
>    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>MPI
Hi everybody,
I currently have a bug when launching a very simple MPI program with mpirun on
connected nodes. It happens when I send an INT and then some CHAR strings
from a master node to a worker node.
Here is the minimal code to reproduce the bug:
#include <mpi.h>
#include <stdio.h>
#include <string.h>
int
Hello,
> >>> Unfortunately "cc" on Linux creates the following error.
> >>>
> >>> ln -s "../../../openmpi-1.6/opal/asm/generated/atomic-ia32-linux-nongas.s" atomic-asm.S
> >>> CPPAS atomic-asm.lo
> >>> <command-line>:19:0: warning: "__FLT_EVAL_METHOD__" redefined
> >>> [enabled by default]
> >>> :110:0
To be honest, I don't think we've ever tested on Tru64, so I'm not surprised
that it doesn't work. Indeed, I think that it is unlikely that we will ever
support Tru64. :-(
Sorry!
On Jun 7, 2012, at 12:43 PM,
wrote:
>
> Hello,
>
> I am having trouble with the *** Assembler section of th
On Jun 7, 2012, at 10:27 AM, Siegmar Gross wrote:
> thank you very much for your help. You were right in your suggestion
> that one of our system commands was responsible for the segmentation
> fault. After splitting the command in config.status, I found out that
> gawk was responsible. We install
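The bisection technique described above can be sketched as follows. This is a hypothetical illustration, not Siegmar's actual config.status fragment: the stage names are made up, and it uses `awk` in place of gawk for portability. The idea is to run each stage of a pipeline separately so that a crashing tool reveals itself through its own exit status (139 on Linux usually means SIGSEGV).

```shell
# Made-up stand-in for a config.status pipeline, split into stages so
# each tool's exit status can be checked individually.
printf 'hello\nworld\n' > input.txt

sed -n 'p' input.txt > stage1.out
echo "sed exit status: $?"

awk '{ print NR ": " $0 }' stage1.out > stage2.out
echo "awk exit status: $?"
```

A full pipeline only reports the exit status of its last command, which is why splitting it is the quickest way to see which stage actually segfaults.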