I can confirm that mpirun will not direct-launch the applications under Torque.
This is done for wireup support - if/when Torque natively supports PMIx, then
we could revisit that design.
Gilles: the benefit is two-fold:
* Torque has direct visibility of the application procs. When we launch vi
Ken,
IIRC, under Torque, when Open MPI is configure'd with --with-tm
(this is the default, so assuming your Torque headers/libs can be found,
you do not even have to specify --with-tm), mpirun does tm_spawn the
orted daemon on all nodes except the current one.
Then mpirun and orted will
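For reference, a build of that sort might look like the following; the install
prefix and the Torque install path are placeholders, not taken from this thread.
Passing a path to --with-tm is only needed when the Torque headers/libs are not
in a default location:

./configure --prefix=/opt/openmpi-1.10.2 --with-tm=/opt/torque
make -j 8 all install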
I am using openmpi version 1.10.2 with Torque 6.0.1.
I launch a job with the following syntax:
qsub -L tasks=2:lprocs=2:maxtpn=1 -I
This starts an interactive job which is using two nodes.
I then use mpirun as follows from the command line of the interactive job.
mpirun -np 4 sleep 6
Just looking at this output, it would appear that Windows is configured in a
way that prevents the procs from connecting to each other via TCP while on the
same node, and shared memory is disqualifying itself - which leaves no way for
two procs on the same node to communicate.
> On Jun 7, 2016
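A sketch of one way to confirm that diagnosis (not from the original thread; the
executable name is a placeholder): restrict the transports explicitly and turn on
verbose BTL selection, e.g.

mpirun -np 2 --mca btl_base_verbose 100 --mca btl tcp,self ./a.out

If the TCP BTL then reports that it cannot reach the peer on the same node, the
local firewall/interface configuration is the likely culprit.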
I have developed a set of C++ MPI programs for performing a series of
scientific calculations. The master 'scheduler' program spawns off sets of
parallelized 'executor' programs using the MPI_Comm_spawn routine; these
executors communicate back and forth with the scheduler (only small amounts o
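For readers who have not used that routine, a minimal sketch of the scheduler
side follows; the executor binary name, the process count, and the payload are
illustrative assumptions, not taken from the original programs:

// scheduler.cpp -- minimal sketch of the spawn pattern described above.
// The spawned "executor" would obtain the matching intercommunicator
// via MPI_Comm_get_parent() (not shown here).
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    const int nexec = 4;                  // number of executors (placeholder)
    int errcodes[nexec];

    MPI_Comm executors;                   // intercommunicator to the spawned procs
    MPI_Comm_spawn("./executor", MPI_ARGV_NULL, nexec, MPI_INFO_NULL,
                   0 /* root */, MPI_COMM_SELF, &executors, errcodes);

    // small control messages travel back and forth over the intercommunicator
    int work_item = 42;
    MPI_Send(&work_item, 1, MPI_INT, 0 /* executor rank 0 */, 0, executors);

    MPI_Comm_disconnect(&executors);
    MPI_Finalize();
    return 0;
}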
On 06/02/2016 06:41 AM, Edgar Gabriel wrote:
Gilles,
I think the semantics of MPI_File_close do not necessarily mandate an
MPI_Barrier, based on that text snippet. However, I think what the
Barrier does in this scenario is 'hide' a consequence of an
implementation aspect.
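For concreteness, the workaround being discussed amounts to an explicit barrier
right after the collective close; a minimal sketch, where the file name, data
layout, and communicator are my own assumptions:

#include <mpi.h>

// Each rank writes one int at its own offset, closes the file collectively,
// and then hits an explicit barrier so no rank can race ahead (e.g. reopen
// or unlink the file) before every rank has finished the close.
void write_and_close(MPI_Comm comm, const char *path, int value)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    MPI_File fh;
    MPI_File_open(comm, path, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int),
                      &value, 1, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);   // collective, but not necessarily synchronizing
    MPI_Barrier(comm);     // explicit synchronization point
}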
On 2016/6/7 22:55, Ralph Castain wrote:
> On Jun 7, 2016, at 7:17 AM, Du, Fan wrote:
>
>
>
> On 2016/6/6 18:00, Ralph Castain wrote:
>> Perhaps it would help if you could give us some idea of the interest
>> here? The prior Mesos integration was done as an academic project, which
>> is why it died
Hi,
I installed openmpi-v2.x-dev-1468-g6011906 on my "SUSE Linux Enterprise
Server 12 (x86_64)" with Sun C 5.13 and gcc-6.1.0. Unfortunately I
get an error for the combination of "--host" and "--slot-list" for a
small program, while the program runs as expected with a single option
"--host" or "
On 06/06/2016 06:32 PM, Rob Nagler wrote:
Thanks, John. I sometimes wonder if I'm
the only one out there with this particular problem.
Ralph, thanks for sticking with me. :)
Using a pool of uids doesn'
> On Jun 7, 2016, at 7:17 AM, Du, Fan wrote:
>
>
>
> On 2016/6/6 18:00, Ralph Castain wrote:
>> Perhaps it would help if you could give us some idea of the interest
>> here? The prior Mesos integration was done as an academic project, which
>> is why it died once the student graduated.
>
> Co
On 2016/6/6 18:00, Ralph Castain wrote:
Perhaps it would help if you could give us some idea of the interest
here? The prior Mesos integration was done as an academic project, which
is why it died once the student graduated.
Could you point me to the repo of the previous work?
The intention is simpl
On 2016/6/6 10:22, Ralph Castain wrote:
On Jun 5, 2016, at 4:30 PM, Du, Fan <fan...@intel.com> wrote:
Thanks for your reply!
On 2016/6/5 3:01, Ralph Castain wrote:
The closest thing we have to what you describe is the “orte-dvm” - this
allows one to launch a persistent collection of
You might want to specify a wider range of ports.
Depending on how the socket is closed, a given port might or might not be
available again right after a job completes. IIRC, with default TCP settings,
the worst case is a few minutes.
I will double-check that sockets are created with SO_REUSE (or somethin
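The check described above amounts to setting SO_REUSEADDR on the listening
socket before bind(); a minimal illustration of the concept follows. This is
not Open MPI's actual code, and error handling is omitted:

#include <cstring>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Create a listening TCP socket with SO_REUSEADDR, so the port can be
// rebound immediately even while old connections from a previous job
// are still sitting in TIME_WAIT.
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    bind(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    listen(fd, 16);
    return fd;
}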
Hello all,
after the correct configuration, mpirun (v1.10.2) works fine when all TCP
ports are open. I can ssh to all hosts without a password.
Then it comes back to my first question: how to specify the ports for MPI
communication?
I opened the ports 4-5 for outgoing traffic, when
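For later readers: Open MPI lets you pin both the out-of-band (oob) traffic and
the MPI traffic (tcp btl) to fixed port ranges via MCA parameters, so only those
ranges need to be opened in the firewall. A sketch with placeholder ranges;
parameter names can vary between versions, so check "ompi_info --all" for the
ones available in yours:

mpirun -np 4 \
    --mca oob_tcp_dynamic_ipv4_ports 46000-46100 \
    --mca btl_tcp_port_min_v4 47000 \
    --mca btl_tcp_port_range_v4 100 \
    ./a.out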