Thanks, Ralph. I'm not sure I explained the problem clearly. Salt and
JupyterHub are distractions, sorry.
I have code which "wires up" a cluster for MPI. What I need is a scheduler
that allows users to:
* Select which Docker image they'd like to wire up
* Request a number of nodes/cores
* Understan
I’m afraid I’m not familiar with JupyterHub at all, or Salt. All you really
need is:
* a scheduler that understands the need to start all the procs at the same time
- i.e., as a block
* wireup support for the MPI procs themselves
If JupyterHub can do the first, then you could just have it laun
Hi Durga,
Here is a short summary:
PSM is intended for the Intel TrueScale InfiniBand product series. It is also
known as PSM gen 1 and uses libpsm_infinipath.so.
PSM2 is intended for Intel's next-generation fabric, OmniPath. It is PSM gen 2 and
uses libpsm2.so. I didn't know about the missing owner.txt.
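If it helps, this is roughly how the two generations show up at run time; the MTL component names are the real ones, but the application name and the library path in the ldd line are only example placeholders:

    # explicitly request one PSM generation or the other
    mpirun --mca pml cm --mca mtl psm  ./a.out    # TrueScale, libpsm_infinipath.so
    mpirun --mca pml cm --mca mtl psm2 ./a.out    # OmniPath, libpsm2.so

    # check which PSM library a given build is linked against
    ldd /opt/openmpi/lib/openmpi/mca_mtl_psm2.so | grep psm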
We would like to use MPI on Docker with arbitrarily configured clusters
(e.g. created with StarCluster or bare metal). What I'm curious about is whether
there is a queue manager that understands Docker, file systems, MPI, and
OpenAuth. JupyterHub does a lot of this, but it doesn't interface with MPI.
Id
Hello all
What is the difference between PSM and PSM2? Any pointer to more
information is appreciated. Also, the PSM2 MTL does not seem to have an
owner.txt file (on master, at least). Why is that?
Thanks
Durga
We learn from history that we never learn from history.
The syntax is
configure --enable-mpirun-prefix-by-default --prefix= ...
All hosts must be able to ssh to each other without a password.
That means you need to generate a user ssh key pair on all hosts, add your
public keys to the list of authorized keys, and ssh to all hosts once in order
to populate your known_hosts files.
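For what it's worth, a minimal sketch of that setup on two hosts (the user and host names are just placeholders):

    # on host1: create a key pair and authorize it on host2
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    ssh-copy-id user@host2

    # ssh once so known_hosts gets populated and no prompt appears later
    ssh user@host2 true
    # ... and do the same from host2 back to host1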
Hi,
Thank you, Gilles, for your suggestion. I tried: mpirun --prefix <prefix> --host <host> hostname, and then it works.
I’m sure both IPs are the ones of the VM on which mpirun is running, and they
are unique.
I also configured Open MPI with --enable-mpirun-prefix-by-default, but I still
need to add --prefix.
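In case it is useful, this is roughly what the two approaches look like (the install path is only an illustration); the configure option should make the explicit --prefix unnecessary once the rebuilt Open MPI is installed at the same path on every host:

    # per run: tell the remote orted where Open MPI lives
    mpirun --prefix /opt/openmpi --host <host> hostname

    # at build time: bake the prefix into mpirun/orted
    ./configure --enable-mpirun-prefix-by-default --prefix=/opt/openmpi
    make && make install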
Are you saying both IPs are the ones of the VM on which mpirun is running?
orted is only launched on all the machines *except* the one running mpirun.
Can you double/triple check the IPs are OK and unique?
For example: mpirun --host <IP> /sbin/ifconfig -a
Can you also make sure Open MPI is installed
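A couple of quick checks along those lines (the IPs are placeholders for the two addresses of the VM):

    # what does each address actually report?
    mpirun --host <internal IP> /sbin/ifconfig -a
    mpirun --host <public IP> /sbin/ifconfig -a

    # is orted visible in the PATH of a non-interactive ssh shell?
    ssh <public IP> which orted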
Possibly - did you configure --enable-orterun-prefix-by-default as the error
message suggests?
> On Jun 2, 2016, at 7:44 AM, Ping Wang wrote:
>
> Hi,
>
> I've installed Open MPI v1.10.2. Every VM on the cloud has two IPs (internal
> IP, public IP).
> When I run: mpirun --host <internal IP> hostname, the o
Hi,
I've installed Open MPI v1.10.2. Every VM on the cloud has two IPs (internal
IP, public IP).
When I run: mpirun --host <internal IP> hostname, the output is the hostname of
the VM.
But when I run: mpirun --host <public IP> hostname, the output is
bash: orted: command not found
Gilles,
I think the semantics of MPI_File_close do not necessarily mandate
that there has to be an MPI_Barrier, based on that text snippet. However,
I think what the Barrier does in this scenario is 'hide' a consequence
of an implementation aspect. So the MPI standard might not mandate a
Barrier
Hi,
May I ask why you need/want to launch orted manually?
Unless you are running under a batch manager, Open MPI uses the rsh plm to
remotely start orted.
Basically, it does
ssh host orted
The best I can suggest is that you do
mpirun --mca orte_rsh_agent myrshagent.sh --mca orte_launch_agent
mylau
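For illustration, such an rsh agent can be a tiny wrapper around ssh (the file name matches the one above; the log path is just an assumption), which lets you see exactly how orted gets started on each remote host:

    #!/bin/sh
    # myrshagent.sh - used via: mpirun --mca orte_rsh_agent myrshagent.sh ...
    # log the exact remote launch line, then hand off to the real ssh
    echo "remote launch: $@" >> /tmp/orted-launch.log
    exec ssh "$@"

Make the script executable and give mpirun either its full path or a name that is in PATH.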
Hi folks
Starting from Open MPI, I can launch an MPI application a.out as follows
on host1:
mpirun --allow-run-as-root --host host1,host2 -np 4 /tmp/a.out
On host2, I saw that a proxy, orted, is spawned:
orted --hnp-topo-sig 4N:2S:4L3:20L2:20L1:20C:40H:x86_64 -mca ess env
-mca orte_ess_jobid