Because we’ve screwed up in the past? I think the ompi_message_null was me,
and I was in a hurry to prototype for the MPI Forum. And then it stuck.
Brian
On 2/1/23, 3:16 AM, Jeff Hammond via users wrote:
Why do the null handles not follow a consistent scheme, at least in
Open-MPI 4.1.2?
ompi_mpi_<handle>_null is used except when handle={request,message}, which
drop the "mpi_".
The above have an associated ..._null_addr except ompi_mpi_datatype_null and
ompi_message_null.
Why?
Jeff
Open MPI v4.1.2, packa
a users
Sent: Tuesday, November 29, 2022 3:36 AM
To: Gestió Servidors via users
Cc: Gilles Gouaillardet
Subject: Re: [OMPI users] Question about "mca" parameters
Hi,
Simply add
btl = tcp,self
If the openib error message persists, try also adding
osc_rdma_btls = ugni,uct,ucp
or simply
osc = ^rdma
Cheers,
Gilles
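(For reference, a minimal sketch of where such lines go, assuming the per-user MCA parameter file; a system-wide <prefix>/etc/openmpi-mca-params.conf works the same way:

$ cat ~/.openmpi/mca-params.conf
btl = tcp,self
osc = ^rdma

The same settings can also be given on the command line, e.g. "mpirun --mca btl tcp,self --mca osc ^rdma ...".)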
On 11/29/2022 5:16 PM, Gestió Servidors via users wrote:
Hi,
If I run "mpirun --mca btl tcp,self --mca allow_ib 0 -n 12 ./my_program", I get
to disable some "extra" info in the output file like:
The OpenFabrics (openib) BTL failed to initialize while trying to
allocate some locked memory. This typically can indicate that the
memlock limits are set to
This appears to be a legit bug with the use of MPI_T in the test/example
monitoring app, so I'm going to move the discussion to the Github issue so that
we can track it properly:
https://github.com/open-mpi/ompi/issues/9260
To answer Jong's question: ob1 is one of Open MPI's point-to-point mess
Thank you for the information. I don't know what ob1 is or what the possible other
choices are. Is there any way for me to check?
Anyhow, I tried a few things but got the same error. Here is a bit more
verbose output:
shell$ mpirun -n 1 --allow-run-as-root --mca pml_base_verbose 10 --mca
mtl_base_verbose 10
You need to enable the monitoring PML in order to get access to the
pml_monitoring_messages_count MPI_T. For this you need to know what PML you
are currently using and add monitoring to the pml MCA variable. As an
example if you use ob1 you should add the following to your mpirun command
"--mca pml
Hi.
I am trying to test if I can compile and run the MPI_T test case:
ompi/mca/common/monitoring/monitoring_prof.c
But, I am getting the following error:
cannot find monitoring MPI_T "pml_monitoring_messages_count" pvar, check
that you have monitoring pml
Should I turn on something when building
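(Not from the thread, but a hedged illustration of the advice above: assuming the earlier, truncated suggestion means appending the monitoring component to the pml selection list, the command would look something like

mpirun --mca pml ob1,monitoring -np 4 ./your_test_program

where the program name is a placeholder. "ompi_info | grep monitoring" should show whether the monitoring component was built at all; if it is missing, Open MPI needs to be rebuilt with it enabled.)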
o output) when I specified btl_base_verbose 100.
I will try using the CIDR for the below hosts as an experiment.
Regards,
Vipul
From: Jeff Squyres (jsquyres) [mailto:jsquy...@cisco.com]
Subject: Re: [OMPI users] Question about virtual interface
https://www.open-mpi.org/faq/?category=tcp#ip-virtual-ip-interfaces is
referring to interfaces like "eth0:0", where the Linux kernel will have the
same index for both "eth0" and "eth0:0". This will cause Open MPI to get
confused (because it identifies Ethernet interfaces by their kernel indexes
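(A related workaround that often sidesteps the confusion, sketched with example interface/subnet values only, is to point the TCP BTL at the physical interfaces explicitly:

mpirun --mca btl_tcp_if_include eth0 -np 4 ./a.out
mpirun --mca btl_tcp_if_include 10.10.0.0/16 -np 4 ./a.out

btl_tcp_if_include accepts either interface names or CIDR subnets.)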
Hi,
I have read conflicting statements about OMPI support for virtual interfaces.
The Open MPI FAQ mentions that virtual IP interfaces are not supported and this
will not be solved by using either btl_tcp_if_include or btl_tcp_if_exclude.
(https://www.open-mpi.org/faq/?category=tcp#ip-virtual-
On Mar 13, 2020, at 9:33 AM, Jeffrey Layton via users wrote:
Good morning,
I've compiled a hello world MPI code and when I run it, I get some messages
I'm not familiar with. The first one is,
--
WARNING: Linux kernel CMA support was requested via the
btl_vader_single_copy_mechanism MCA
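(For anyone hitting the same warning: a commonly suggested way to silence it, sketched here rather than prescribed, is to tell the vader BTL not to attempt the CMA single-copy path:

mpirun --mca btl_vader_single_copy_mechanism none -n 12 ./my_program

Alternatively, CMA can be made usable by adjusting the kernel's ptrace restrictions on the node.)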
Today I came across the two MCA parameters osc_ucx_progress_iterations
and pml_ucx_progress_iterations in Open MPI. My interpretation of the
description is that in a loop such as below, progress in UCX is only
triggered every 100 iterations (assuming opal_progress is only called
once per MPI_Te
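(The loop itself is cut off above; the following is a hypothetical reconstruction of the pattern being described, where each MPI_Test call is one pass through Open MPI's progress engine, so with the parameters above left at 100 the UCX progress function would run only on every 100th pass:

#include <mpi.h>

int main(int argc, char **argv)
{
    double buf[1024] = {0};
    MPI_Request req;
    int done = 0, rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank == 0) {
        MPI_Irecv(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        while (!done) {
            /* ... some local work between polls ... */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE); /* one progress pass per call */
        }
    } else if (rank == 1) {
        MPI_Send(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Run with at least 2 ranks.)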
To: users@lists.open-mpi.org
Cc: Ewen Chan
Subject: [OMPI users] Question about OpenMPI paths
To Whom It May Concern:
I am trying to run Converge CFD by Converge Science using OpenMPI in CentOS
7.6.1810 x86_64 and I am getting the error:
bash: orted: command not found
I've already read the FAQ:
https://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
Here's my system setup,
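(The usual fixes from that FAQ entry, sketched with a placeholder prefix of /opt/openmpi; adjust to the actual install location:

export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH

These need to take effect for non-interactive shells on every node, or mpirun can be invoked as "mpirun --prefix /opt/openmpi ..." so it propagates the paths to the remote orted daemons itself.)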
Yes, yes, and yes. I built everything, both openmpi and my application, using icc for the C
compiler, icpc for C++, and ifort for Fortran. All point to the same
installation. My application is built using the installed openmpi front ends,
mpicc, mpicxx, mpifort, which all report they use the intel versions.
John W
On Aug 2, 2018, at 4:40 PM, Grove, John W wrote:
I am compiling an application using openmpi 3.1.1. The application is mixed
Fortran/C/C++. I am using the intel compiler on a mac pro running OS 10.13.6.
When I try to use the mpi_f08 interface I get unresolved symbols at load time,
specifically
_mpi_f08_types_mp_ompi_comm_op_ne_, _mpi_f08_types
There is already a nice solution for the useful special case of ABI
portability where one wants to use more than one MPI library with an
application binary, but only one MPI library for a given application
invocation:
https://github.com/cea-hpc/wi4mpi
They document support for the Intel MPI and O
Don't forget that there's a lot more to "binary portability" between MPI
implementations than just the ABI (wire protocols, run-time interfaces,
...etc.). This is the main (set of) reasons that ABI standardization of the
MPI specification never really took off -- so much would need to be
stand
On 09/20/17 23:39, Jeff Hammond wrote:
I assume that anyone who is using Fortran 2003 or later has the good sense to
never use compiler flags to change the size of the INTEGER type, because this
is evil.
Actually, only changing INTEGER size without adjusting REAL size is evil (i.e.
breaks assu
This discussion started getting into an interesting question: ABI
standardization for portability by language. It makes sense to have ABI
standardization for portability of objects across environments. At the same
time it does mean that everyone follows the exact same recipe for low level
implement
On Wed, Sep 20, 2017 at 5:55 AM, Dave Love
wrote:
> Jeff Hammond writes:
>
> > Please separate C and C++ here. C has a standard ABI. C++ doesn't.
> >
> > Jeff
>
> [For some value of "standard".] I've said the same about C++, but the
> current GCC manual says its C++ ABI is "industry standard",
On Wed, Sep 20, 2017 at 6:26 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond
> wrote:
>
> > Fortran is a legit problem, although if somebody builds a standalone
> Fortran
> > 2015 implementation of the MPI interface, it would be dec
On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond wrote:
> Fortran is a legit problem, although if somebody builds a standalone Fortran
> 2015 implementation of the MPI interface, it would be decoupled from the MPI
> library compilation.
Is this even doable without making any assumptions ?
For exam
Jeff Hammond writes:
> Intel compilers support GOMP runtime interoperability, although I don't
> believe it is the default. You can use the Intel/LLVM OpenMP runtime with
> GCC such that all three OpenMP compilers work together.
For what it's worth, it's trivial to make a shim with a compatible
Jeff Hammond writes:
> Please separate C and C++ here. C has a standard ABI. C++ doesn't.
>
> Jeff
[For some value of "standard".] I've said the same about C++, but the
current GCC manual says its C++ ABI is "industry standard", and at least
Intel document compatibility with recent GCC on GNU/
On Linux and Mac, Intel c and c++ are sufficiently compatible with gcc and
g++ that this should be possible. This is not so for Fortran libraries or
Windows.
Sent via the Samsung Galaxy S8 active, an AT&T 4G LTE smartphone
Original message
From: Michael Thomadakis
Date: 9/18/17 3:51 PM (GMT-05:00)
To: users@lists.open-mpi.org
I think Jeff Squyres summed it up.
Sent via the Samsung Galaxy S8 active, an AT&T 4G LTE smartphone
Original message From: Michael Thomadakis
Date: 9/18/17 4:57 PM (GMT-05:00) To: Open MPI Users
Cc: n8tm Subject: Re: [OMPI users] Question concerning compatibilit
Date: 9/18/17 3:51 PM (GMT-05:00)
To: users@lists.open-mpi.org
Subject: [OMPI users] Question concerning compatibility of languages used with
building OpenMPI and languages OpenMPI uses to build MPI binaries.
Dear OpenMPI list,
As far as I know, when we build OpenMPI itself with GNU or Intel compilers
we expect that the subsequent MPI application binary will use the same
compiler set and run-times.
Would it be possible to build OpenMPI with the GNU tool chain but then
subsequently instruct the OpenMPI
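(Not a full answer, but a hedged sketch of one relevant mechanism: the Open MPI wrapper compilers honor environment-variable overrides, so the underlying compiler can be swapped per invocation, e.g.

OMPI_CC=icc   mpicc   app.c   -o app
OMPI_FC=ifort mpifort app.f90 -o app

For C this is usually workable because of gcc/icc ABI compatibility; for Fortran the module files and runtime generally have to match the compiler Open MPI itself was built with, as discussed elsewhere in this thread.)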
Siegmar,
a noticeable difference is hello_1 does *not* sleep, whereas
hello_2_slave *does*
simply comment out the sleep(...) line, and the performance will be identical
Cheers,
Gilles
On 7/31/2017 9:16 PM, Siegmar Gross wrote:
Hi,
I have two versions of a small program. In the first one th
Hi,
I have two versions of a small program. In the first one the process with rank 0
calls the function "master()" and all other ranks call the function "slave()"
and in the second one I have two programs: one for the master task and another
one for the slave task. The run-time for the second v
Thank you for the explanation! I understand what is going on now: there
is a process list for each node whose order is dependent on the mapping
policy, and the ranker, when using "slot," walks through that list.
Makes sense.
Thank you again!
David
On 11/30/2016 04:46 PM, r...@open-mpi.org wro
“slot” never became equivalent to “socket”, or to “core”. Here is what happened:
*for your first example: the mapper assigns the first process to the first node
because there is a free core there, and you said to map-by core. It goes on to
assign the second process to the second core, and the th
Hello Ralph,
I do understand that "slot" is an abstract term and isn't tied down to
any particular piece of hardware. What I am trying to understand is how
"slot" came to be equivalent to "socket" in my second and third example,
but "core" in my first example. As far as I can tell, MPI ranks s
I think you have confused “slot” with a physical “core”. The two have
absolutely nothing to do with each other.
A “slot” is nothing more than a scheduling entry in which a process can be
placed. So when you --rank-by slot, the ranks are assigned round-robin by
scheduler entry - i.e., you assign
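(A quick way to observe the behavior being described, with example process counts only:

mpirun -np 8 --map-by core   --rank-by slot --report-bindings ./a.out
mpirun -np 8 --map-by socket --rank-by slot --report-bindings ./a.out

The --report-bindings output shows each rank's placement, which makes it visible why ranking "by slot" follows cores in one case and sockets in the other.)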
Hello All,
The man page for mpirun says that the default ranking procedure is
round-robin by slot. It doesn't seem to be that straight-forward to me,
though, and I wanted to ask about the behavior.
To help illustrate my confusion, here are a few examples where the
ranking behavior changed ba
Additionally:
- When Open MPI migrated to github, we only brought over relevant open Trac
tickets to Github. As such, many old 1.10 and 1.8 (and earlier) issues were
not brought over.
- Trac is still available in a read-only manner at
https://svn.open-mpi.org/trac/ompi/report.
> On Oct 5, 20
Edwin,
changes are summarized in the NEWS file
we used to have two github repositories, and they were "merged" recently
with github, you can list the closed PR for a given milestone
https://github.com/open-mpi/ompi-release/milestones?state=closed
then you can click on a milestone, and list
Apologies for the dumb question... There used to be a way to dive in to see
exactly what bugs and features came into 1.10.4, 1.10.3, and on back to 1.8.8.
Is there a way to do that on github?
Ed
What kind of system was this on? ssh, slurm, ...?
> On Jul 28, 2016, at 1:55 PM, Blosch, Edwin L wrote:
>
> I am running cases that are starting just fine and running for a few hours,
> then they die with a message that seems like a startup type of failure.
> Message shown below. The messag
I am running cases that are starting just fine and running for a few hours,
then they die with a message that seems like a startup type of failure.
Message shown below. The message appears in standard output from rank 0
process. I'm assuming there is a failing card or port or something.
What
We already do that as a check, but it came after the 1.6 series - and so you
get the old error message if you mix with the 1.6 series or older versions.
> On May 16, 2016, at 8:22 AM, Gilles Gouaillardet
> wrote:
>
> or this could be caused by a firewall ...
> v1.10 and earlier uses tcp for o
or this could be caused by a firewall ...
v1.10 and earlier use tcp for oob;
from v2.x, unix sockets are used.
Detecting consistent versions is a good idea;
printing them (mpirun, orted and a.out) can be a first step.
my idea is
mpirun invokes orted with '--ompi_version=x.y.z'
orted checks it is
Ralph Castain writes:
> This usually indicates that the remote process is using a different OMPI
> version. You might check to ensure that the paths on the remote nodes are
> correct.
That seems quite a common problem with non-obvious failure modes.
Is it not possible to have a mechanism that c
the internet and I also performed the following command to find the mpirun
path and added it to the .bashrc file. However, it had no effect.
[user@localhost ~]$ which mpirun
/usr/lib64/openmpi/bin/mpirun
Any idea and thanks in advance!
Subject: Re: [OMPI users] Question about mpirun mca_oob_tcp_recv_handler error.
From: Ralph Castain (rhc_at_[hidden])
Date: 2016-05-10
This usually indicates that the remote process is using a different OMPI
version. You might check to ensure that the paths on the remote nodes are
correct.
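(A quick sanity check along those lines, with an example hostname only:

which mpirun && mpirun --version
ssh node02 'which mpirun; which orted'

If the remote mpirun/orted resolve to a different Open MPI installation or version than the local one, that mismatch is the likely cause of the error.)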
On Tue, May 10, 2016 at 8:46 AM, lzfneu wrote:
Hi everyone,
I have a problem to consult you about: when I cd to the /examples folder contained in
the openmpi-1.8.4 package and test the hello_c example program with the mpirun
command, errors occur.
Here are the command and the error messages in detail:
[user@localhost examples]$ mpirun -np 2 hello
Subject: Re: [OMPI users] Question on MPI_Comm_spawn timing
I honestly don’t think anyone has been concerned about the speed of
MPI_Comm_spawn, and so there hasn’t been any effort made to optimize it
> On Apr 3, 2016, at 2:52 AM, Gilles Gouaillardet
> wrote:
>
> Hi,
>
> performance of MPI_Comm_spawn in the v1.8/v1.10 series is known to be poor,
> es
Hi,
performance of MPI_Comm_spawn in the v1.8/v1.10 series is known to be poor,
especially compared to v1.6
generally speaking, I cannot recommend v1.6 since it is no longer maintained.
that being said, if performance is critical, you might want to give it a
try.
I did not run any performance meas
Hi all,
I am trying to evaluate the time taken for MPI_Comm_spawn operation in the
latest version of OpenMPI. Here a parent communicator (all processes, not
just the root) spawns one new child process (separate executable). The
code I¹m executing is
main(){
{
Š..
// MPI initialization
Š..
start1
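For reference, a self-contained sketch of this kind of measurement; the child executable name is a placeholder:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm child;
    double t0, t1;

    MPI_Init(&argc, &argv);

    /* MPI_Comm_spawn is collective over the parent communicator:
     * every rank of MPI_COMM_WORLD participates, one child is spawned. */
    t0 = MPI_Wtime();
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &child, MPI_ERRCODES_IGNORE);
    t1 = MPI_Wtime();

    printf("MPI_Comm_spawn took %f s\n", t1 - t0);

    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}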
I forgot to include a link to the official announcement of the change,
and that info might be helpful in navigating the different versions and
backwards compatibility:
https://www.open-mpi.org/community/lists/announce/2015/06/0069.php
Thanks,
David
On 02/26/2016 10:43 AM, David Shrader wrote:
Hey Edwin,
The versioning scheme changed with 2.x. Prior to 2.x the "Minor" version
had a different definition and did not mention backwards compatibility
at all (at least in my 1.6.x tarballs). As it turned out for 1.8.x and
1.6.x, 1.8.x was not backwards compatible with 1.6.x, so the behavio
I am confused about backwards-compatibility.
FAQ #111 says:
Open MPI reserves the right to break ABI compatibility at new feature release
series. ... MPI applications compiled/linked against Open MPI 1.6.x will not
be ABI compatible with Open MPI 1.7.x
But the versioning documentation says:
Hi Everyone!
I would like to understand how the checkpoint tools work on OpenMPI, like
BLCR and DMTCP. I would be glad if you could answer the following
questions:
1) BLCR and DMTCP take checkpoints of the parallel processes. Are the
checkpoints taken in a coordinated way? I mean, is there a
s
Sure:
$ ompi_info --param hwloc all -l 9
…
MCA hwloc: parameter "hwloc_base_cpu_set" (current value: "", data source: default, level: 9 dev/all, type: string)
Comma-separated list of ranges specifying lo
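So, using that parameter, something like the following should keep the job off core 0 (example range for a 16-core node; adjust as needed):

mpirun --mca hwloc_base_cpu_set 1-15 -np 4 ./a.out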
Thank you and one last question. Is it possible to avoid a core and
instruct OMPI to use only the other cores?
On Mon, Dec 22, 2014 at 2:08 PM, Ralph Castain wrote:
>
> On Dec 22, 2014, at 10:45 AM, Saliya Ekanayake wrote:
>
> Hi Ralph,
>
> Yes the report bindings show the correct binding as ex
On Dec 22, 2014, at 10:45 AM, Saliya Ekanayake wrote:
Hi Ralph,
Yes the report bindings show the correct binding as expected for the
processes. The doubt I am having is, say I spawn a thread within my
process. If I don't specify affinity for it, is it possible for it to get
scheduled to run in a core outside that of the process?
Second question is,
FWIW: it looks like we are indeed binding to core if PE is set, so if you are
seeing something different, then we may have a bug somewhere.
If you add --report-bindings to your cmd line, you should see where we bound the
procs - does that look correct?
> On Dec 22, 2014, at 9:49 AM, Ralph Casta
They will be bound to whatever level you specified - I believe by default we
bind to socket when mapping by socket. If you want them bound to core, you
might need to add --bind-to core.
I can take a look at it - I *thought* we had reset that to bind-to core when
PE=N was specified, but maybe tha
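For example (sketch, assuming 4 cores per process are wanted):

mpirun -np 4 --map-by socket:PE=4 --bind-to core --report-bindings ./a.out

The --report-bindings output shows whether each rank really is bound to 4 cores; on Linux, threads spawned by the process inherit that same affinity mask unless they change it themselves.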
Hi,
I've been using --map-by socket:PE=N, where N is used to control the number
of cores a proc gets mapped to. Does this also guarantee that a proc is
bound to N cores in the socket? I am asking this because I see some threads
spawned by the process run outside the given N cores in the socket.
I
Hi,
I'm trying to find the correct settings for OFED kernel parameter for the
cluster. Each node has 32G RAM, installed Red Hat Enterprise Linux
Server release 6.4 (Santiago) , OFED 2.1.192, OpenMPI 1.6.5 and
Mellanox Technologies MT27500 Family [ConnectX-3] with 56G activated.
lsmod showe
Ah, yes - so here is what is happening. When no slot info is provided, we use
the number of detected cores on each node as the #slots. So if you want to
load balance across the nodes, you need to set --map-by node
Or add slots=1 to each line of your host file to override the default behavior
> On
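For example, either (host names taken from the hosts.dat quoted below)

node01 slots=1
node02 slots=1
node03 slots=1
node04 slots=1

or, leaving the hostfile untouched:

mpirun --machinefile hosts.dat --map-by node -np 4 ./a.out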
Here's my command:
/bin/mpirun --machinefile
hosts.dat -np 4
Here's my hosts.dat file:
% cat hosts.dat
node01
node02
node03
node04
All 4 ranks are launched on node01. I don't believe I've ever seen this
before. I had to do a sanity check, so I tried MVAPICH2-2.1a and got what I
expected:
Hi all,
I'm starting with OpenMPI and I'm trying to do a simple example of
combining OpenMP and OpenMPI. The thing is that when I try to run it with
"mpirun" it hangs.
I send the number of processors parameter and also I set the OMP_THREADS_NUM
Above is my code:
#include "omp
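(For comparison, a minimal hybrid sketch; note the standard environment variable is OMP_NUM_THREADS, and this is generic example code, not the poster's program:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request a thread level that is safe for OpenMP regions */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}

Built with e.g. "mpicc -fopenmp hybrid.c -o hybrid" and run as "OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid".)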
I'm not aware of any way to tell using ompi_info, I'm afraid. I'd have to
ponder a bit as to how we could do so since it's a link to a library down below
the one we directly use.
On Jul 21, 2014, at 3:00 PM, Blosch, Edwin L wrote:
> In making the leap from 1.6 to 1.8, how can I check whether
In making the leap from 1.6 to 1.8, how can I check whether or not
process/memory affinity is supported?
I've built OpenMPI on a system where the numactl-devel package was not
installed, and another where it was, but I can't see anything in the output of
ompi_info that suggests any difference b
Open MPI is distributed under the modified BSD license. Here’s a link to the
v1.8 LICENSE file:
https://svn.open-mpi.org/trac/ompi/browser/branches/v1.8/LICENSE
As long as you abide by the terms of that license, you are fine.
On Jun 17, 2014, at 4:41 AM, Victor Vysotskiy
wrote:
> Dear
Dear Developers,
I would like to clarify a question about the OpenMPI license. We are working
on academic code and our project is non-profit. Now we are planning to
sell the parallel binaries. The question is whether it is allowed to compile
our project with OpenMPI (v1.8.2) and then dist
Nada, zilch, nothing on standard OS X install. I do not want to put an extra
requirement on my users. Nor do I want something as simple-minded as CMake.
autotools works great for me.
-Nathan
From: users [users-boun...@open-mpi.org] on behalf of Ralph Castain
[r...@open-mpi.org]
Sent: Friday, May 16, 2014 2:07 PM
To: Open MPI Users
Subject: Re: [OMPI users] Question about scheduler support
On May 16, 2014, at 1:03 PM, Fabricio Cannini
mailto:fca
On 16-05-2014 17:07, Ralph Castain wrote:
FWIW, simply for my own curiosity's sake, if someone could confirm
deny whether cmake:
1. Supports the following compiler suites: GNU (that's a given, I
assume), Clang, OS X native (which is variants of GNU and Clang),
Absoft, PGI, Intel, Cray, HP-UX,
Nobody is disagreeing that one could find a way to make CMake work - all we are
saying is that (a) CMake has issues too, just like autotools, and (b) we have
yet to see a compelling reason to undertake the transition...which would have
to be a *very* compelling one.
On May 15, 2014, at 4:45 PM
On May 15, 2014, at 6:14 PM, Fabricio Cannini wrote:
> Alright, but now I'm curious as to why you decided against it.
> Could please elaborate on it a bit ?
OMPI has a long, deep history with the GNU Autotools. It's a very long,
complicated story, but the high points are:
1. The GNU Autotools
On 15-05-2014 18:40, Ralph Castain wrote:
On May 15, 2014, at 2:34 PM, Fabricio Cannini wrote:
On 15-05-2014 07:29, Jeff Squyres (jsquyres) wrote:
I think Ralph's email summed it up pretty well -- we unfortunately have (at
least) two distinct groups of people who install OMPI:
a) tho