Dear openmpi users,
My basic problem is that I am able to run mpirun as root, but not at a user
level. I have tried installing openmpi via several methods, but all seem to
yield the same problem. I fear that I am missing something very basic and
zero-order, but I can't seem to resolve my problem w
Hi,
> Okay, I have a fix for not specifying the number of procs when
> using a rankfile.
>
> As for the binding pattern, the problem is a syntax error in
> your rankfile. You need a semi-colon instead of a comma to
> separate the sockets for rank0:
>
> > rank 0=bend001 slot=0:0-1,1:0-1 => rank 0=bend001 slot=0:0-1;1:0-1
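A minimal before/after sketch of that fix (the "#" comments are mine; per the explanation above, a comma separates core ranges within one socket, while a semi-colon separates socket specifications):
rank 0=bend001 slot=0:0-1,1:0-1   # comma between socket specs: rejected
rank 0=bend001 slot=0:0-1;1:0-1   # socket 0, cores 0-1; socket 1, cores 0-1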
Hi,
> 3) I have a problem on "tyr" (Solaris 10 sparc).
>
> tyr rankfiles 106 mpiexec -report-bindings \
> -rf rf_tyr_semicolon -np 1 hostname
> [tyr.informatik.hs-fulda.de:29849] [[53951,0],0] ORTE_ERROR_LOG:
> Not found in file
>
> ../../../../../openmpi-1.9a1r29097/orte/mca/rmaps/rank_f
On Sep 1, 2013, at 23:36, Huangwei wrote:
> Hi George,
>
> Thank you for your reply. Please see below.
> best regards,
> Huangwei
>
>
>
>
> On 1 September 2013 22:03, George Bosilca wrote:
>
> On Aug 31, 2013, at 14:56, Huangwei wrote:
>
>> Hi All,
>>
>> I would like to send an a
On 03.09.2013 at 06:48, Ian Czekala wrote:
> Dear openmpi users,
>
> My basic problem is that I am able to run mpirun as root, but not at a user
> level. I have tried installing openmpi via several methods, but all seem to
> yield the same problem. I fear that I am missing something very basic
Heck if I know what might be wrong - it works fine for me, regardless of what
machine I run it from.
If this is compiled with --enable-debug, try adding "--display-allocation -mca
rmaps_base_verbose 5" to your cmd line to see what might be going on.
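For instance, a hypothetical invocation (the executable and process count are placeholders):
$ mpirun --display-allocation -mca rmaps_base_verbose 5 -np 4 ./a.out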
On Sep 3, 2013, at 1:20 AM, Siegmar Gross wrote:
Yes! Thank you for your help. Doing
$ ./configure --disable-pty-support --prefix=/usr/local/openmpi
$ make all
$ sudo make install
fixed the issue:
$ mpirun -np 2 /bin/pwd
/home/ian
/home/ian
Thanks a bunch,
Ian
On Tue, Sep 3, 2013 at 6:26 AM, Reuti wrote:
> On 03.09.2013 at 06:48, Ian
Nathan,
Thanks for the help. I can run a job using openmpi, assigning a single
process per node. However, I have been failing to run a job using
multiple MPI ranks in a single node. In other words, "mpiexec
--bind-to-core --npernode 16 --n 16 ./test" never works (aprun -n 16 works
fine). Do yo
How does it fail?
On Sep 3, 2013, at 1:19 PM, "Teranishi, Keita" wrote:
> Nathan,
>
> Thanks for the help. I can run a job using openmpi, assigning a single
> process per node. However, I have been failing to run a job using
> multiple MPI ranks in a single node. In other words, "mpiexec
> -
It is what I got.
--
There are not enough slots available in the system to satisfy the 16 slots
that were requested by the application:
/home/knteran/test-openmpi/cpi
Either request fewer slots for your application, or make
Interesting - and do you have an allocation? If so, what was it - i.e., can you
check the allocation envar to see if you have 16 slots?
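One quick, generic way to dump the allocation-related environment inside the batch job (a sketch; the exact variable names depend on the scheduler and site setup):
$ env | grep -Ei 'pbs|slot|node'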
On Sep 3, 2013, at 1:38 PM, "Teranishi, Keita" wrote:
> It is what I got.
>
> --
> Th
Hi,
Here is what I put in my PBS script to allocate only single node (I want
to use 16 MPI processes in a single node).
#PBS -l mppwidth=16
#PBS -l mppnppn=16
#PBS -l mppdepth=1
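For reference, the same directives with my reading of each (the comments are an assumption based on standard Cray XE PBS usage, not from the thread):
#PBS -l mppwidth=16   # total number of PEs (MPI ranks)
#PBS -l mppnppn=16    # PEs per node
#PBS -l mppdepth=1    # depth (threads) per PE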
Here is the output from aprun (aprun -n 16 -N 16).
Process 2 of 16 is on nid00017
Process 5 of 16 is on nid00017
Proce
Hmm, what CLE release is your development cluster running? It is the value
after PrgEnv; e.g., on Cielito we have 4.1.40.
32) PrgEnv-gnu/4.1.40
We have not yet fully tested Open MPI on CLE 5.x.x.
-Nathan Hjelm
HPC-3, LANL
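A quick way to read that off, assuming the standard Cray modules environment ("module list" prints to stderr, hence the redirect):
$ module list 2>&1 | grep PrgEnv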
On Tue, Sep 03, 2013 at 10:33:57PM +0000, Teranishi, Keita wrote:
> Hi,
>
Nathan,
It is close to Cielo and uses the resource manager under
/opt/cray/xe-sysroot/4.1.40/usr.
Currently Loaded Modulefiles:
  1) modules/3.2.6.7          17) csa/3.0.0-1_2.0401.37452.4.50.gem
  2) craype-network-gemini    18) job/1.5.5-0.1_2.0401.35380.1.10.gem
  3) c
Interesting. That should work then. I haven't tested it under batch mode though.
Let me try to reproduce on Cielito and see what happens.
-Nathan
On Tue, Sep 03, 2013 at 11:04:40PM +0000, Teranishi, Keita wrote:
> Nathan,
>
> It is close to Cielo and uses the resource manager under
> /opt/cray/xe-sy