./configure does not compile anything; it only generates the Makefile.
Did you run
> make
> make install
after running ./configure?
Note also that Open MPI is very likely already installed on your
system from Ubuntu packages;
in any case, I suggest you use the Ubuntu packages rather than compiling from
sources un
> 't find libnuma installed on your machine, so we
> cannot bind memory allocations (but can bind processes).
>
> On Sep 1, 2012, at 3:41 AM, Zbigniew Koza wrote:
Hi,
I have one more question.
I wanted to experiment with processor affinity command-line options on my
Ubuntu PC.
When I use the OpenMPI version I compiled from sources a few weeks ago, mpirun
returns error messages.
However, the "official" OpenMPI installation on the same machine works without
any problem.
Does it mean
>> 05:06:13 up 7 days, 6:57, 1 user, load average: 0.29, 0.10, 0.03
>> %
>>
>> I bound each process to a single core, and mapped them on a round-robin
>> basis by core. Hence, all 4 processes ended up on their own cores on a
>> si
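One way to double-check such a binding from inside an MPI program (an illustrative sketch, not code from this thread; it assumes Linux and glibc's sched_getaffinity) is to have every rank print the cores its affinity mask currently allows:

#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>      /* sched_getaffinity, CPU_* macros (Linux/glibc) */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cpu_set_t mask;
    CPU_ZERO(&mask);
    /* 0 = the calling process; fills 'mask' with the cores it may run on */
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        char host[256];
        gethostname(host, sizeof(host));
        printf("rank %d on %s may run on cores:", rank, host);
        for (int c = 0; c < CPU_SETSIZE; ++c)
            if (CPU_ISSET(c, &mask))
                printf(" %d", c);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}

Launched with the same mpirun binding options as above, each rank should then report exactly one allowed core.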
Hi,
consider this specification:
"Curie fat consists in 360 nodes which contains 4 eight cores CPU
Nehalem-EX clocked at 2.27 GHz, let 32 cores / node and 11520 cores for
the full fat configuration"
Suppose I would like to run some performance tests just on a single
processor rather than 4
Hi,
I've just found this information on nVidia's plans regarding enhanced
support for MPI in their CUDA toolkit:
http://developer.nvidia.com/cuda/nvidia-gpudirect
The idea that two GPUs can talk to each other via network cards, without the
CPU as a middleman, looks very promising.
This technology
Look at this declaration:
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm)
here*"count" is the**number of elements* (not bytes!) in the send buffer
(nonnegative integer)
Your "count" was defined as
count = rows*matrix_size*sizeof (doub
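For illustration only (rows and matrix_size are the names from the snippet above; the buffer allocation and the minimal two-rank send/receive pair are assumptions, not the original code), the count passed to MPI_Send should be the number of elements, with the element size carried by the MPI datatype:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int rows = 4, matrix_size = 8;          /* placeholder sizes */
    const int count = rows * matrix_size;         /* number of ELEMENTS, not bytes */
    double *buf = malloc(count * sizeof(double)); /* sizeof(double) belongs here, not in count */

    if (rank == 0)
        MPI_Send(buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(buf);
    MPI_Finalize();
    return 0;
}

With count multiplied by sizeof(double), MPI would read (and send) 8 times more memory than the buffer actually holds.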
ssible, and we were trying to avoid
that requirement.
I will submit a ticket against this and see if we can improve this.
Rolf
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
On Behalf Of Zbigniew Koza
Sent: Tuesday, July 31, 2012 12:38 PM
To:
Hi,
I wrote a simple program to see if OpenMPI can really handle CUDA device
pointers as promised in the FAQ, and how efficiently.
The program (see below) breaks if MPI communication is to be performed
between two devices that are on the same node but under different IOHs
in a dual-processor Intel machine.
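The test program itself is cut off in this excerpt. As a rough sketch of what such a check typically looks like (buffer size, device assignment and the two-rank pattern are assumptions, not taken from the thread), a CUDA-aware OpenMPI build lets you pass cudaMalloc'ed device pointers straight to MPI_Send/MPI_Recv:

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                        /* message size: assumed, 1M doubles */
    double *d_buf;
    cudaSetDevice(rank);                          /* assumes one GPU per rank on the node */
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    if (rank == 0)
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   /* device pointer, no host staging */
    else if (rank == 1)
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

Running the two ranks on GPUs that sit under different IOHs is the configuration the message above reports as failing.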