I know it's not supposed to matter, but have you tried building
both ompi and slurm against the same pmix? That is - first build pmix,
then build slurm --with-pmix, and then ompi with both slurm and
pmix=external ?
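For reference, a minimal sketch of that build order (prefixes, versions and paths below are placeholders, adjust to your layout):

    # 1) build the external pmix first
    cd pmix-x.y.z && ./configure --prefix=/opt/pmix && make install
    # 2) build slurm against that pmix
    cd slurm-x.y.z && ./configure --prefix=/opt/slurm --with-pmix=/opt/pmix && make install
    # 3) build ompi against the same pmix (and slurm)
    cd openmpi-x.y.z && ./configure --prefix=/opt/ompi --with-slurm --with-pmix=/opt/pmix && make install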
On 23/04/2020 17:00, Prentice Bi
In the 4.0.1 src rpm, if building with --define
'build_all_in_one_rpm 0', the output of grep -v _mandir docs.files is empty.
The simple workaround is to follow the earlier pattern and pipe to
/bin/true, as the spec doesn't really care if the file is empty.
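Something along these lines is what I mean, following the earlier pattern in the spec (the file handling below is illustrative, not the exact spec contents, and I use || rather than a pipe since the point is only that grep's non-zero exit on an empty match shouldn't abort the build):

    # grep exits non-zero when nothing matches, which can fail the %install script;
    # force success so rpmbuild carries on with an empty file list
    grep -v _mandir docs.files > docs.files.tmp || /bin/true
    mv docs.files.tmp docs.files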
I'm wonderi
Hello,
Assuming I have installed openmpi built with the distro stock
gcc (4.4.7 on rhel 6.5), but an app requires a different gcc
version (8.2, manually built on the dev machine).
Would there be any issues, or performance penalty, if building
the app u
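In case it helps, a sketch of how to check what the wrapper compiler does and point it at the newer gcc (the gcc path below is a placeholder):

    # show the command line and flags the wrapper would use
    mpicc --showme
    # check which compiler openmpi itself was built with
    ompi_info | grep -i 'C compiler'
    # override the wrapper's compiler for this app only
    export OMPI_CC=/opt/gcc-8.2/bin/gcc
    mpicc -O2 -o app app.c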
*From:* users on behalf of Ralph H Castain
*Sent:* Monday, March 4, 2019 5:29 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building PMIx and Slurm support
On Mar 4, 2019,
Gilles,
On 3/4/19 8:28 AM, Gilles Gouaillardet
wrote:
Daniel,
On 3/4/2019 3:18 PM, Daniel Letai wrote:
So unless you have a specific reason not
to mix both, you might also give the internal PMIx a try
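i.e. roughly (a sketch; the prefix is a placeholder, and simply omitting --with-pmix should also select the internal copy):

    ./configure --prefix=/opt/ompi --with-slurm --with-pmix=internal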
On
3/4/2019 1:08 AM, Daniel Letai wrote:
Sent from my iPhone
On 3 Mar 2019, at 16:31, Gilles
Gouaillardet wrote:
Daniel,
PMIX_MODEX and PMIX_INFO_ARRAY have
ith-pmix=/usr)
>
> Cheers,
>
> Gilles
>
>> On Sun, Mar 3, 2019 at 10:57 PM Daniel Letai wrote:
Hello,
I have built the following stack :
centos 7.5 (gcc 4.8.5-28, libevent 2.0.21-4)
MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.5-x86_64.tgz built with
--all --without-32bit (this includes ucx 1.5.0)
hwloc from centos 7.5 : 1.11.8-4.el7
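For context, the sort of configure line I mean for building openmpi on top of those system components (a sketch; the prefixes are placeholders, not necessarily what was actually used):

    ./configure --prefix=/opt/ompi \
        --with-ucx=/usr --with-hwloc=/usr --with-libevent=/usr \
        --with-slurm --with-pmix=/usr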
On 06/06/2016 06:32 PM, Rob Nagler
wrote:
Thanks, John. I sometimes wonder if I'm
the only one out there with this particular problem.
Ralph, thanks for sticking with me. :)
Using a pool of uids doesn'
That's why they have ACLs in ZoL, no?
Just bring up a new filesystem for each container, with an ACL so only
the owning container can use that fs, and you should be done, no?
To be clear, each container would have to have a unique uid for this
to work, but together
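A sketch of what I mean (pool/dataset name and uid are placeholders):

    # one dataset per container, owned by that container's uid
    zfs create tank/containers/job123
    chown 12345:12345 /tank/containers/job123
    chmod 700 /tank/containers/job123
    # or leave ownership alone and grant only that uid access via POSIX ACLs
    zfs set acltype=posixacl tank/containers/job123
    setfacl -m u:12345:rwx /tank/containers/job123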
Did you check shifter?
https://www.nersc.gov/assets/Uploads/cug2015udi.pdf ,
http://www.nersc.gov/research-and-development/user-defined-images/ ,
https://github.com/NERSC/shifter
On 06/03/2016 01:58 AM, Rob Na
On 10/20/2015 04:14 PM, Ralph Castain wrote:
On Oct 20, 2015, at 5:47 AM, Daniel Letai <d...@letai.org.il> wrote:
Thanks for the reply,
On 10/13/2015 04:04 PM, Ralph Castain wrote:
On Oct 12, 2015, at 6:10 AM, Daniel Letai <d...@letai.org.il> wrote:
Hi,
After upgrading to 1.8.8 I can no longer see the map. When looking at
the man page for mpirun, display-map no longer exists. Is there a way to
show the map in 1.8.8 ?
Another issue - I'd like to map 2 processes per node - 1 to each socket.
What is the current "correct" syntax? --map-by ppr:2
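For what it's worth, the sort of invocation I'd expect to do that (a sketch, not verified on 1.8.8 specifically; assumes dual-socket nodes):

    # one rank per socket (2 per node), bound to its socket, and print the map
    mpirun --map-by ppr:1:socket --bind-to socket --display-map -np 16 ./app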
the issue, if my guess is proven right
Cheers,
Gilles
On Sunday, June 21, 2015, Daniel Letai <d...@letai.org.il> wrote:
MCA coll: parameter "coll_ml_priority" (current value: "0", data
source: default, level: 9 dev/all, type: int)
Not s
s is really odd...
you can run
ompi_info --all
and search coll_ml_priority
it will display the current value and the origin
(e.g. default, system wide config, user config, cli, environment variable)
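e.g. something like (a sketch):

    # dump all parameters at the highest level and pick out the ml priority
    ompi_info --all --level 9 | grep coll_ml_priority
    # or query the coll/ml component directly
    ompi_info --param coll ml --level 9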
Cheers,
Gilles
On Thursday, June 18, 2015, Daniel Letai <d...@letai.org.il> wrote:
No, that's the issue.
I had to disable it to get things working.
That's why I included my config settings - I couldn't figure ou
s not ready for production and is disabled by default.
Did you explicitly enable this module ?
If yes, I encourage you to disable it
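e.g. (a sketch):

    # keep the ml component out of the coll framework for a single run
    mpirun --mca coll ^ml -np 4 ./hello
    # or persistently, via $HOME/.openmpi/mca-params.conf:
    #   coll = ^ml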
Cheers,
Gilles
On Thursday, June 18, 2015, Daniel Letai <d...@letai.org.il> wrote:
given a simple hello.c:
#include <stdio.h>
#include <mpi.h>
int main(int argc, char* argv[])
{
    int size, rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);
    printf("rank %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}
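built and run as (a sketch):

    mpicc hello.c -o hello
    mpirun -np 4 ./hello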