Hi everyone,
I'm trying to cross-compile openmpi-1.10.3 for arm-openwrt-linux-muslgnueabi
on x86_64-linux-gnu with the configure options below...
./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
--build=x86_64-linux-gnu
--host=x86_64-linux-gnu
--targ
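[For reference, in Autoconf terms --build names the machine doing the compiling and --host names the machine the binaries will run on, so a cross build normally passes the run-time (ARM) triplet as --host; --target only matters for tools that themselves emit code. A hedged sketch of such an invocation, reusing the prefix and triplets from the post above (the CC assignment is an assumption about the OpenWrt toolchain name):]

```shell
# Sketch only: run from the Open MPI source tree with the OpenWrt
# cross toolchain (arm-openwrt-linux-muslgnueabi-gcc) on PATH.
# Passing the build triplet as --host would produce a native build,
# not a cross build.
./configure \
  --enable-orterun-prefix-by-default \
  --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" \
  --build=x86_64-linux-gnu \
  --host=arm-openwrt-linux-muslgnueabi \
  CC=arm-openwrt-linux-muslgnueabi-gcc
```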
Sean,
If I understand correctly, you built a libtransport_mpi.so library that
depends on Open MPI, and your main program dlopens libtransport_mpi.so.
In this case, and at least for the time being, you need to pass
RTLD_GLOBAL in your dlopen flags.
Cheers,
Gilles
On 10/18/2016 4:53 AM,
I should have been more precise: you cannot use Fortran's vector subscript
with Open MPI.
George.
On Mon, Oct 17, 2016 at 2:19 PM, Jeff Hammond
wrote:
> George:
>
> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm
>
> Jeff
>
> On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca
> wrote:
Folks,
For our code, we have a communication layer that abstracts the code that
does the actual transfer of data. We call these "transports", and we link
them as shared libraries. We have created an MPI transport that
compiles/links against Open MPI 2.0.1 using the compiler wrappers. When I
compile
George:
http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm
Jeff
On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca wrote:
> Vahid,
>
> You cannot use Fortran's vector subscript with MPI.
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
Designation: Non-Export Controlled Content
Gilles;
Yes, your assumption is correct. There is no communication between
proxies and no communication between sensors. I am using the rank to determine
the role: the dispatcher is rank 0 and the sensors start at rank 1. So I should have 3 groups? I am
new to MPI and my
Rick,
So you have three types of tasks:
- 1 dispatcher
- several sensors
- several proxies
If the proxies do not communicate with each other, and the sensors do not
communicate with each other, then you could end up with 3
intercommunicators:
sensorComm: dispatcher in the left group and sensors in the
Gilles;
My scenario involves a Dispatcher of rank 0, and several
sensors and proxy objects. The Dispatcher triggers activity and gathers
results. The proxies get triggered first. They send data to the sensors, and
the sensors indicate to
Rick,
I re-read the MPI standard and was unable to figure out whether sensorgroup is
MPI_GROUP_EMPTY or a group containing task 1 on all tasks except task 1.
(A group that does not contain the current task makes little sense to me,
but I do not see any reason why this group has to be MPI_GROUP_EMPTY.)
Regardless
Rick,
In my understanding, sensorgroup is a group with only task 1
Consequently, sensorComm is
- similar to MPI_COMM_SELF on task 1
- MPI_COMM_NULL on other tasks, and hence the barrier fails
I suggest you double-check that sensorgroup is never MPI_GROUP_EMPTY
and add a test not to call MPI_Barrier on
George;
Thanks for your response. Your second sentence is a little
confusing. If my world group is {P0, P1}, visible on both processes, why wouldn't
the sensorList contain P1 on both processes?
Rick