Let me steer you on a different course. Can you run "ompi_info" and paste the
output here? It looks to me like someone installed a version that includes
uDAPL support, so you may have to disable some additional things to get it to
run.
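Something like the following should show whether a uDAPL component was built in
and, if so, let you exclude it at run time (assuming the component is indeed
named "udapl"; ./your_app stands in for your binary):

  ompi_info | grep -i udapl
  mpirun --mca btl ^udapl -np 4 ./your_app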
On Jun 27, 2014, at 9:53 AM, Jeffrey A Cummings wrote:
If you don't have control over which MPI implementations and versions are
installed, you can probably still verify that your environment consistently
points to the same MPI implementation and version. It is not uncommon to
have more than one implementation and version installed on a computer.
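For example (exact output will differ on your system):

  which mpicc mpirun
  mpirun --version
  ompi_info | head -n 3

If the paths and versions disagree, the environment is mixing installations.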
Once again, you guys are assuming (incorrectly) that all your users are
working in an environment where everyone is free (based on corporate IT
policies) to do things like that. As an aside, you're also assuming that
all your users are Unix/Linux experts. I've been following this list for
sev
Hi,
On 27.06.2014 at 19:56, Jeffrey A Cummings wrote:
> I appreciate your response and I understand the logic behind your suggestion,
> but you and the other regular expert contributors to this list are frequently
> working under a misapprehension. Many of your openMPI users don't have any
> control over what version of openMPI is available on their systems.
I appreciate your response and I understand the logic behind your
suggestion, but you and the other regular expert contributors to this list
are frequently working under a misapprehension. Many of your openMPI
users don't have any control over what version of openMPI is available on
their systems.
It may be easier to install the latest OMPI from the tarball,
rather than trying to sort out the error.
http://www.open-mpi.org/software/ompi/v1.8/
The packaged build of the (somewhat old) OMPI 1.6.2 that came with
Linux may not have been built against the same IB libraries, hardware,
and configuration you have.
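A typical tarball build is along these lines (the prefix is only an example;
any directory you can write to works):

  tar xf openmpi-1.8.1.tar.bz2
  cd openmpi-1.8.1
  ./configure --prefix=$HOME/opt/openmpi-1.8.1
  make -j 4 all
  make install
  export PATH=$HOME/opt/openmpi-1.8.1/bin:$PATH
  export LD_LIBRARY_PATH=$HOME/opt/openmpi-1.8.1/lib:$LD_LIBRARY_PATH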
We have recently upgraded our cluster to a version of Linux which comes
with openMPI version 1.6.2.
An application which ran previously (using some version of 1.4) now errors
out with the following messages:
librdmacm: Fatal: no RDMA devices found
librdmacm: Fatal: no RDMA devices found
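If the upgraded nodes simply have no RDMA hardware, one common workaround is
to restrict Open MPI to the TCP, shared-memory, and self transports so it
never probes for RDMA devices (./your_app stands in for the real binary):

  mpirun --mca btl tcp,sm,self -np 16 ./your_app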
On Jun 27, 2014, at 8:53 AM, Brock Palen wrote:
> Is there a way to import/map memory from a process (data acquisition) such
> that an MPI program could 'take' or see that memory?
>
> We have a need to do data acquisition at the rate of 0.7 TB/s and need to do
> some shuffles/computation on these data; some of the nodes are directly
> connected to the device.
MPI "universe" yes, but not necessarily MPI "world". You could have
the two worlds connect/accept or join
(https://www.open-mpi.org/doc/v1.8/man3/MPI_Comm_join.3.php) and then
you should be able to take advantage of the RMA. At least, that is
what is written in the book ...
George.
On Fri, Jun 27, 2014, Brock Palen wrote:
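A rough sketch of the connect/accept pattern mentioned above (a sketch only:
error handling omitted; the port name is assumed to be passed to the second
job out of band, e.g. on its command line):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;

    MPI_Init(&argc, &argv);

    if (argc == 1) {
        /* "server" job: open a port and wait for the other job */
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port: %s\n", port);  /* hand this string to the client */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Close_port(port);
    } else {
        /* "client" job: connect using the port name from the command line */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
    }

    /* The two jobs now share an inter-communicator; merge it with
       MPI_Intercomm_merge if you want an intra-communicator for RMA. */
    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}

MPI_Comm_join works similarly, but takes an already-connected socket fd
instead of a port name.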
But this is within the same MPI "universe" right?
Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734)936-1985
On Jun 27, 2014, at 10:19 AM, George Bosilca wrote:
> The One-Sided Communications from Chapter 11 of the MPI standard?
> For processes on the same node you might want to look at
> MPI_WIN_ALLOCATE_SHARED.
The One-Sided Communications from Chapter 11 of the MPI standard?
For processes on the same node you might want to look at
MPI_WIN_ALLOCATE_SHARED.
George.
On Fri, Jun 27, 2014 at 9:53 AM, Brock Palen wrote:
> Is there a way to import/map memory from a process (data acquisition) such
> that an MPI program could 'take' or see that memory?
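A rough sketch of the MPI_WIN_ALLOCATE_SHARED approach suggested above
(buffer size and names are invented for illustration; error handling omitted):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm nodecomm;
    MPI_Win  win;
    double  *buf;
    int      noderank;

    MPI_Init(&argc, &argv);

    /* Group the ranks that can actually share memory (same node). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);

    /* Node-rank 0 allocates the segment; the others allocate 0 bytes
       and then ask for a pointer into rank 0's segment. */
    MPI_Aint bytes = (noderank == 0) ? 1024 * sizeof(double) : 0;
    MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                            nodecomm, &buf, &win);
    if (noderank != 0) {
        MPI_Aint qbytes; int disp;
        MPI_Win_shared_query(win, 0, &qbytes, &disp, &buf);
    }

    /* Every rank on the node now sees the same memory through buf. */
    MPI_Win_fence(0, win);
    if (noderank == 0) buf[0] = 42.0;   /* plain store into shared memory */
    MPI_Win_fence(0, win);
    printf("node rank %d sees %g\n", noderank, buf[0]);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}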
Is there a way to import/map memory from a process (data acquisition) such that
an MPI program could 'take' or see that memory?
We have a need to do data acquisition at the rate of 0.7 TB/s and need to do
some shuffles/computation on these data; some of the nodes are directly
connected to the device.
:) Thanks to both
I'll try your solution and give you feedback.
Thanks
2014-06-27 15:01 GMT+02:00 :
>
>
> Hi Luigi,
>
> Please try:
>
> --map-by slot:pe=4
>
> Probably Ralph is very busy, so something slipped his mind...
>
> Regards,
> Tetsuya
>
> > Hi all,
> > My system is a 64-core machine with Debian 3.2.57 64-bit, GNU gcc 4.7,
> > kernel Linux 3.2.0, and OpenMPI 1.8.1.
Hi Luigi,
Please try:
--map-by slot:pe=4
Probably Ralph is very busy, so something slipped his mind...
Regards,
Tetsuya
> Hi all,
> My system is a 64-core machine with Debian 3.2.57 64-bit, GNU gcc 4.7,
> kernel Linux 3.2.0, and OpenMPI 1.8.1.
> I developed an application to match protein files using OpenMP+OpenMPI.
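Putting the pieces together, a launch along these lines should give 16 ranks
with 4 cores each on the 64-core machine (./protein_match is a stand-in for
the actual binary; --report-bindings prints the resulting bindings so you can
check them):

  export OMP_NUM_THREADS=4
  mpirun -np 16 --map-by slot:pe=4 --report-bindings ./protein_match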
You should add this to your cmd line:
--map-by core:pe=4
This will bind each process to 4 cores
Sent from my iPhone
> On Jun 27, 2014, at 5:22 AM, Luigi Santangelo
> wrote:
>
> Hi all,
> My system is a 64-core machine with Debian 3.2.57 64-bit, GNU gcc 4.7,
> kernel Linux 3.2.0, and OpenMPI 1.8.1.
Hi all,
My system is a 64-core machine with Debian 3.2.57 64-bit, GNU gcc 4.7,
kernel Linux 3.2.0, and OpenMPI 1.8.1.
I developed an application to match protein files using OpenMP+OpenMPI.
I compiled the source code with the -fopenmp flag, set OMP_NUM_THREADS=4,
and then ran the binary with mpiexec -np 16.
When the