I am having a problem on my Linux desktop where MPI_Init hangs for
approximately 64 seconds if I have my VPN client connected, but runs
immediately if I disconnect the VPN. I've picked through the FAQ and Google
but have failed to come up with a solution.
Some potentially relevant information: I a
Okay, I found it - fix coming in a bit.
Thanks!
Ralph
On Mar 21, 2013, at 4:02 PM, tmish...@jcity.maeda.co.jp wrote:
Hi Ralph,
Sorry for late reply. Here is my result.
mpirun -v -np 8 -hostfile pbs_hosts -x OMP_NUM_THREADS --display-allocation
-mca ras_base_verbose 5 -mca rmaps_base_verbose 5
/home/mishima/Ducom/testbed/mPre m02-ld
[node04.cluster:28175] mca:base:select:( ras) Querying component
[loadlevele
Thank you, Ralph.
I will try to use a rankfile.
In any case, the --cpus-per-proc option is a very useful feature:
for hybrid MPI+OpenMP programs, for processors where one FPU is
shared by two cores, etc.
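A minimal sketch of the two approaches mentioned above; the application name ./hybrid_app, the node names, and the rank/core counts are all placeholders, and the option spellings follow OMPI 1.6.x:

```shell
# Bind 2 cores to each of 4 ranks for a hybrid MPI+OpenMP run
# (OMPI 1.6.x syntax; ./hybrid_app is a placeholder).
mpirun -np 4 -cpus-per-proc 2 -x OMP_NUM_THREADS=2 ./hybrid_app

# Alternative: pin ranks explicitly with a rankfile.
# Example rankfile contents (hypothetical node names; slot=socket:cores):
#   rank 0=node01 slot=0:0-1
#   rank 1=node01 slot=1:0-1
#   rank 2=node02 slot=0:0-1
#   rank 3=node02 slot=1:0-1
mpirun -np 4 -rf myrankfile ./hybrid_app
```

The rankfile route pins each rank to explicit socket:core ranges, which sidesteps the daemon-side mapping computation entirely.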
If it gets fixed in a later release of OMPI that would be great.
Thank you,
Gus Correa
I've heard this from a couple of other sources - it looks like there is a
problem on the daemons when they compute the location for -cpus-per-proc. I'm
not entirely sure why that would be, as the code is supposed to be common with
mpirun, but there are a few differences.
I will take a look at it.
On 03/21/2013 03:12 PM, Reuti wrote:
On 21.03.2013 at 20:01 Gus Correa wrote:
On 21.03.2013 at 20:01 Gus Correa wrote:
Dear Open MPI Pros
I am having trouble using mpiexec with --cpus-per-proc
on multiple nodes in OMPI 1.6.4.
I know there is an ongoing thread on similar runtime issues
of OMPI 1.7.
By no means am I trying to hijack T. Mishima's questions.
My question is genuine, though, and perhaps related to his
Hmmm...okay, let's try one more thing. Can you please add the following to your
command line:
-mca ras_base_verbose 5 -mca rmaps_base_verbose 5
Appreciate your patience. For some reason, we are losing your head node from
the allocation when we start trying to map processes. I'm trying to track
Hi
today I tried to build openmpi-1.7rc8r28176 and openmpi-1.9r28175
on "Solaris 10, x86_64" and "Solaris 10, sparc" with "Sun C 5.12".
I used the following commands for openmpi-1.7.x and similar commands
for the other package:
../openmpi-1.7rc8r28176/configure --prefix=/usr/local/openmpi-1.7_64_
Hi,
You can use the btl_tcp_disable_family and oob_tcp_disable_family MCA
parameters to disable the use of a specific IP address family. Set both
parameters to 6 to disable IPv6, or set them both to 4 in order to disable
IPv4.
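For instance, to force IPv4-only operation, a command line along these lines should work (the application name ./imb_app and the process count are placeholders):

```shell
# Disable IPv6 for both the TCP BTL and the out-of-band layer,
# so Open MPI only uses IPv4 addresses (./imb_app is hypothetical).
mpirun --mca btl_tcp_disable_family 6 --mca oob_tcp_disable_family 6 \
       -np 2 ./imb_app

# The same MCA settings can also be exported as environment variables:
export OMPI_MCA_btl_tcp_disable_family=6
export OMPI_MCA_oob_tcp_disable_family=6
```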
Kind regards,
Hristo
Hi
I've been fighting trying to run comparative tests of IMB using OpenMPI
1.6.3 on the same node using an Intel TrueScale card and the onboard
Ethernet.
Turns out that all of the problems were due to the IPv6 addresses being
firewalled on the nodes, but OpenMPI was trying to use the IPv6 addresses
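As a hedged diagnostic sketch for situations like this (assuming a Linux node with ip6tables and iproute2 installed), one can check whether IPv6 is firewalled before Open MPI tries those addresses:

```shell
# List the IPv6 firewall rules that might be dropping MPI traffic.
ip6tables -L -n

# List the IPv6 addresses on the node that Open MPI could try to use.
ip -6 addr show
```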
Hi ,
thank you all for your help.
":" was a typo that I did not see;-). I also neglected to apply the 'make'
command in the example files to convert the **.c files in executable's. mpi
is running fine.
I
Thank you
Best regards
Bruno
2013/3/20 Reuti
> Am 20.03.2013 um
Hi Ralph,
Here is the result on patched openmpi-1.7rc8.
mpirun -v -np 8 -hostfile pbs_hosts -x OMP_NUM_THREADS
--display-allocation /home/mishima/Ducom/testbed/mPre m02-ld
== ALLOCATED NODES ==
Data for node: node06  Num slots: 4  Max slots: 0
D
Please try it again with the attached patch. The --disable-vt is fine.
Thanks
Ralph
user2.diff
Description: Binary data
On Mar 20, 2013, at 7:47 PM, tmish...@jcity.maeda.co.jp wrote:
>
>
> Hi Ralph,
>
> I have completed rebuild of openmpi1.7rc8.
> To save time, I added --disable-vt. ( Is i