> error.
>
> I will be very glad for your kind suggestions.
There are certain parameters to set the range of used ports, but using any up
to 1024 should not be the default:
http://www.open-mpi.org/community/lists/users/2011/11/17732.php
Are any of these set by accident beforehand?
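A quick way to inspect and, if really needed, narrow the port usage - the exact
MCA parameter names below are an assumption on my side, please verify them with
`ompi_info` for your version:

$ ompi_info --param btl tcp | grep -i port    # list the TCP BTL port parameters
$ mpiexec --mca btl_tcp_port_min_v4 2000 --mca btl_tcp_port_range_v4 1000 -np 4 ./app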
for a node than are actually present.
>>
>
> That is true.
> If you lie to the queue system about your resources,
> it will believe you and oversubscribe.
> Torque has this same feature.
> I don't know about SGE.
It's possible there too.
-- Reuti
> You may choo
ny binding request is only a soft request.
UGE: here you can request a hard binding. But I have no clue whether this
information is read by Open MPI too.
If in doubt: use only complete nodes for each job (which is often done for
massively parallel jobs anyway).
-- Reuti
> A cursory reading of
On 27.03.2014 at 23:59, Dave Love wrote:
> Reuti writes:
>
>> Do all of them have an internal bookkeeping of granted cores to slots
>> - i.e. not only the number of scheduled slots per job per node, but
>> also which core was granted to which job? Does Open MPI read t
l - i.e., you can't
> emulate an x86 architecture on top of a Sparc or Power chip or vice versa.
Well - you have to emulate the CPU. There were products running a virtual x86
PC on a Mac with a PowerPC chip. And IBM has a product called PowerVM Lx86 to run
software compiled for Linux x
in a special way. You can change the shell only by
"-S /bin/bash" then (or redefine the queue to have "shell_start_mode
unix_behavior" set and get the expected behavior when starting a script [side
effect: the shell is no longer started as a login shell]. See also `man
sge_conf` =>
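For illustration, a minimal job script header forcing bash (a sketch; the paths
are the usual defaults and may differ on your cluster):

#!/bin/sh
#$ -S /bin/bash    # let SGE start the job script with bash
#$ -cwd
mpiexec -np 4 ./app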
o
> Hello
>
> % /bin/csh hello
> Hello
>
> % . hello
> /bin/.: Permission denied
`.` as a bash shortcut for `source` will also be interpreted by `csh` and generate
this error. You can try to change your interactive shell by: `chsh`.
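A minimal sketch of the difference, assuming a (not necessarily executable)
script file named "hello" in the current directory:

% source hello    # csh syntax to read the file into the current shell
$ . ./hello       # bash/sh equivalent of `source`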
-- Reuti
> I think I n
)
Without more information it looks like an error in the application and not in
Open MPI. Worth noting is that Mac users' home directories are under /Users and
not /home.
-- Reuti
> Could some one please tell me what might be the problem ?
>
>
> Thanks,
> Bow
so/TEST/simplempihello.exe
Using --hostfile on your own would violate the slot allocation granted by PBS.
Just leave this option out. How do you submit your job?
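For illustration, a sketch that stays inside the granted allocation (node and
slot counts are placeholders; with Torque/PBS support built in, Open MPI reads
the granted nodes itself):

$ qsub -l nodes=3:ppn=8 job.sh
# inside job.sh:
mpiexec -np 24 ./simplempihello.exe    # no --hostfile needed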
-- Reuti
> And the hostfile /home/sasso/TEST/hosts.file contains 24 entries (the first
> 16 being host node0001
On 06.06.2014 at 21:04, Ralph Castain wrote:
> Supposed to, yes - but I don't know how much testing it has seen. I can try
> to take a look
Wasn't it mentioned on the list recently that 1.8.1 should do it even without
passphraseless SSH between the nodes?
-- Reuti
> On Jun 6, 201
how can I fix it?
How do you start the program - just with `mpiexec` and a proper hostfile and
number of slots?
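For reference, a minimal sketch (hostnames and slot counts are placeholders):

$ cat machines
node01 slots=4
node02 slots=4
$ mpiexec --hostfile machines -np 8 ./app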
-- Reuti
> Every help and guess is appreciated and will be tested...
> Thanks in advance,
>
> Kurt
anner in
hostnames, my experience is that not all calls map them in a consistent way. To
avoid any confusion because of this, it's best to have them all in lowercase. I
don't know whether this is related to your observation.
-- Reuti
> It is arranged that the sum over the n_i is equal t
nal ~/local/openmpi-1.8 or alike. Pointing your $PATH and $LD_LIBRARY_PATH
to your own version will supersede the installed system one.
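A minimal sketch, assuming a personal installation in ~/local/openmpi-1.8:

export PATH=$HOME/local/openmpi-1.8/bin:$PATH
export LD_LIBRARY_PATH=$HOME/local/openmpi-1.8/lib:$LD_LIBRARY_PATH
which mpicc mpiexec    # should now point to the personal installation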
-- Reuti
> Jeffrey A. Cummings
> Engineering Specialist
> Performance Modeling and Analysis Department
> Systems Analysis and Simulation Subdivision
n this up in a proper way when Control-C is pressed?
But maybe there is something left in /tmp like "openmpi-sessions-...@..." which
needs to be removed.
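To check for such leftovers (a sketch; the exact directory name pattern depends
on the Open MPI version, and only your own stale entries should be removed):

ls -d /tmp/openmpi-sessions-*
rm -rf /tmp/openmpi-sessions-$USER@*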
-- Reuti
> On 08/07/2014 11:16 AM, Jane Lewis wrote:
>> Hi all,
>>
>> This is a really simple problem (I hope) wh
cally. I assume
not, and it must be read out of the machine file (there ought to be an extra
column for it in their version) and fed to Open MPI by some means.
-- Reuti
> Thanks in advance
> Antonio
>
>
>
>
> On 12 Aug 2014, at 14:10, Jeff Squyres (jsquyres) wrote:
ther case the generated $PE_HOSTFILE needs to
be adjusted, as you have to request 14 times 8 cores in total for your
computation to avoid that SGE oversubscribes the machines.
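As a sketch (the PE name is a placeholder): request the full 14 x 8 = 112 slots
up front:

$ qsub -pe <your_pe> 112 job.sh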
-- Reuti
* This will also forward the environment variables to the slave machines.
Without the Tight Integration there
ted slots
per machine by OMP_NUM_THREADS, c) throw an error in case it's not divisible by
OMP_NUM_THREADS. Then start one process per quotient.
Would this work for you?
-- Reuti
PS: This would also mean having a couple of PEs in SGE with a fixed
"allocation_rule". While this works
equesting this PE with an overall slot count of 80
c) copy and alter the $PE_HOSTFILE to show only (granted core count per
machine) divided by (OMP_NUM_THREADS) per entry, change $PE_HOSTFILE so that it
points to the altered file
d) Open MPI with a Tight Integration will now start only N process pe
+ no change in the SGE installation
+ no change to the jobscript
+ OMP_NUM_THREADS can be altered for different steps of the jobscript while
staying inside the granted allocation automatically
o should MKL_NUM_THREADS be covered too (does it use OMP_NUM_THREADS already)?
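For illustration, the PE used for such a request could have a fixed allocation
rule (a sketch of `qconf -sp` output; name and values are placeholders):

pe_name            openmp8
slots              999
allocation_rule    8        # always grant 8 slots per machine
control_slaves     TRUE
job_is_first_task  FALSE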
-- Reuti
> echo
Hi,
On 20.08.2014 at 06:26, Tetsuya Mishima wrote:
> Reuti and Oscar,
>
> I'm a Torque user and I myself have never used SGE, so I hesitated to join
> the discussion.
>
> From my experience with the Torque, the openmpi 1.8 series has already
> resolved the issue yo
Hi,
On 20.08.2014 at 13:26, tmish...@jcity.maeda.co.jp wrote:
> Reuti,
>
> If you want to allocate 10 procs with N threads, the Torque
> script below should work for you:
>
> qsub -l nodes=10:ppn=N
> mpirun -map-by slot:pe=N -np 10 -x OMP_NUM_THREADS=N ./inverse.exe
On 20.08.2014 at 16:26, Ralph Castain wrote:
> On Aug 20, 2014, at 6:58 AM, Reuti wrote:
>
>> Hi,
>>
>> On 20.08.2014 at 13:26, tmish...@jcity.maeda.co.jp wrote:
>>
>>> Reuti,
>>>
>>> If you want to allocate 10 procs with N
etero-nodes to your command line
> (as the nodes appear to be heterogeneous to us).
b) Aha, so it's not only about different CPU types, but also the same CPU type
with different allocations between the nodes? It's not in the `mpiexec` man page
of 1.8.1 though. I'll have a look at it.
> So it is up to the RM to set the constraint - we just live within it.
Fine.
-- Reuti
On 20.08.2014 at 23:16, Ralph Castain wrote:
>
> On Aug 20, 2014, at 11:16 AM, Reuti wrote:
>
>> On 20.08.2014 at 19:05, Ralph Castain wrote:
>>
>>>>
>>>> Aha, this is quite interesting - how do you do this: scanning the
>>>> /pro
Hi,
On 21.08.2014 at 01:56, tmish...@jcity.maeda.co.jp wrote:
> Reuti,
>
> Sorry for confusing you. Under the managed condition, actually
> -np option is not necessary. So, this cmd line also works for me
> with Torque.
>
> $ qsub -l nodes=10:ppn=N
> $ mpirun -map-by
ine.
$ qsub -pe orte 80 ...
export OMP_NUM_THREADS=8
# divide the granted slot count (2nd column of $PE_HOSTFILE) by OMP_NUM_THREADS
awk -v omp_num_threads=$OMP_NUM_THREADS '{ $2/=omp_num_threads; print }' \
  $PE_HOSTFILE > $TMPDIR/machines
export PE_HOSTFILE=$TMPDIR/machines
mpirun -bind-to none ./yourapp.exe
===
I hope having all three versions in one email
On 21.08.2014 at 15:45, Ralph Castain wrote:
> On Aug 21, 2014, at 2:51 AM, Reuti wrote:
>
>> On 20.08.2014 at 23:16, Ralph Castain wrote:
>>
>>>
>>> On Aug 20, 2014, at 11:16 AM, Reuti wrote:
>>>
>>>> Am 20.08.2014 um 19:05 schri
On 21.08.2014 at 16:00, Ralph Castain wrote:
>
> On Aug 21, 2014, at 6:54 AM, Reuti wrote:
>
>> On 21.08.2014 at 15:45, Ralph Castain wrote:
>>
>>> On Aug 21, 2014, at 2:51 AM, Reuti wrote:
>>>
>>>> Am 20.08.2014 um 23:16 schrieb Ralph Ca
On 21.08.2014 at 16:50, Reuti wrote:
> On 21.08.2014 at 16:00, Ralph Castain wrote:
>
>>
>> On Aug 21, 2014, at 6:54 AM, Reuti wrote:
>>
>>> On 21.08.2014 at 15:45, Ralph Castain wrote:
>>>
>>>> On Aug 21, 2014, at 2:51 AM, Reuti w
le.out
> #$ -pe ompi* 6
Which PEs can be addressed here? What are their allocation rules (it looks like
you need "$pe_slots")?
What version of SGE?
What version of Open MPI?
Compiled with --with-sge?
For me it's working either way.
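To gather this information, something like the following can be used (a sketch):

qconf -spl             # list all parallel environments
qconf -sp ompi         # show slots and allocation_rule of a given PE
qconf -help | head -1  # first line shows the SGE version
mpiexec --version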
-- Reuti
> ./app
>
> error m
On 25.08.2014 at 13:23, Pengcheng Wang wrote:
> Hi Reuti,
>
> A simple hello_world program works without the h_vmem limit. Honestly, I am
> not familiar with Open MPI. The command qconf -spl and qconf -sp ompi give
> the information below.
Thx.
> But strangely, it begi
Hi,
On 27.08.2014 at 09:57, Tetsuya Mishima wrote:
> Hi Reuti and Ralph,
>
> How do you think if we accept bind-to none option even when the pe=N option
> is provided?
>
> just like:
> mpirun -map-by slot:pe=N -bind-to none ./inverse
Yes, this would be ok to cover
spread around in the cluster.
If there is much communication it may be better to use fewer machines, but if
each process has heavy I/O to the local scratch disk, spreading the processes
around may be the preferred choice. This doesn't make any difference to Open MPI, as the
generated $PE_HOSTFIL
CPU, while
the other CPU first has to send the data over to that CPU to reach the NIC
(besides that the integrated NICs may be connected to the chipset).
Did anyone ever run benchmarks on whether it makes a difference which CPU in
the system is used, i.e. the one to which the network adapt
> --
The above line comes from "stop_proc_args" defined in the "mpi" PE and can be
ignored. In fact: you don't need any "stop_proc_args" at all. Maybe you can
define a new PE solely
nefile $TMPDIR/machines ${CPMD_PATH}cpmd.x input ${PP_PATH}/PP/
In the PE "orte" is no "start_proc_args" defined which could generate the
machinefile. Please try to start the application with:
/home/SWcbbc/openmpi-1.6.5/bin/mpirun -mca btl openib ${CPMD_PATH}cpmd.x inp
UM_THREADS=1
>
> CPMD_PATH=/home/tanzi/myroot/X86_66intel-mpi/
> PP_PATH=/home/tanzi
> /home/SWcbbc/openmpi-1.6.5/bin/mpirun ${CPMD_PATH}cpmd.x input
> ${PP_PATH}/PP/ > out
Is this text below in out, file.out or file.err - any hint in the other files?
-- Reuti
>
> The p
to use 1.8.1 where it should work.
Some notes on it:
https://blogs.cisco.com/performance/java-bindings-for-open-mpi/
-- Reuti
How many cores are physically installed on this machine - two as mentioned
above?
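To verify what is physically installed, a quick check on the node itself (a sketch):

lscpu | grep -E "Socket|Core|Thread"
grep -c "^processor" /proc/cpuinfo    # counts hardware threads, not cores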
-- Reuti
> I ran with --oversubscribed and got the expected host list, which matched
> $PBS_NODEFILE and was 64 entries long:
>
> mpirun -overload-allowed -report-bindings -np 64 --oversubscribe h
_HOSTFILE suggests it distributes the slots sensibly so it
> seems there is an option for openmpi required to get 16 cores per node?
Was Open MPI configured with --with-sge?
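One way to check (a sketch):

ompi_info | grep -i "command line"    # shows the configure options used
ompi_info | grep gridengine           # the gridengine components should show up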
-- Reuti
> I tried both 1.8.2, 1.8.3 and also 1.6.5.
>
> Thanks for some clarification that anyone can give.
>
riggers this delay.
Did anyone else notice it?
-- Reuti
> HTH
> Ralph
>
>
>> On Nov 8, 2014, at 8:13 PM, Brock Palen wrote:
>>
>> Ok I figured, i'm going to have to read some more for my own curiosity. The
>> reason I mention the Resource Manager we use,
On 10.11.2014 at 12:24, Reuti wrote:
> Hi,
>
> On 09.11.2014 at 05:38, Ralph Castain wrote:
>
>> FWIW: during MPI_Init, each process “publishes” all of its interfaces. Each
>> process receives a complete map of that info for every process in the job.
>> So w
On 10.11.2014 at 12:50, Jeff Squyres (jsquyres) wrote:
> Wow, that's pretty terrible! :(
>
> Is the behavior BTL-specific, perchance? E.G., if you only use certain BTLs,
> does the delay disappear?
You mean something like:
reuti@annemarie:~> date; mpiexec -mca btl self
sing eth1 resp.
eth0 for the internal network of the cluster.
I tried --hetero-nodes with no change.
Then I turned to:
reuti@annemarie:~> date; mpiexec -mca btl self,tcp --mca oob_tcp_if_include
192.168.154.0/26 -n 4 --hetero-nodes --hostfile machines ./mpihello; date
and the applica
us the content of PE_HOSTFILE?
>
>
>> On Nov 11, 2014, at 4:51 AM, SLIM H.A. wrote:
>>
>> Dear Reuti and Ralph
>>
>> Below is the output of the run for openmpi 1.8.3 with this line
>>
>> mpirun -np $NSLOTS --display-map --display-allocati
On 11.11.2014 at 17:52, Ralph Castain wrote:
>
>> On Nov 11, 2014, at 7:57 AM, Reuti wrote:
>>
>> On 11.11.2014 at 16:13, Ralph Castain wrote:
>>
>>> This clearly displays the problem - if you look at the reported “allocated
>>> nodes”, you se
On 11.11.2014 at 19:29, Ralph Castain wrote:
>
>> On Nov 11, 2014, at 10:06 AM, Reuti wrote:
>>
>> On 11.11.2014 at 17:52, Ralph Castain wrote:
>>
>>>
>>>> On Nov 11, 2014, at 7:57 AM, Reuti wrote:
>>>>
>>>> Am 11.1
-mca hwloc_base_binding_policy none
So, the bash was removed. But I don't think that this causes any problem.
-- Reuti
> Cheers,
>
> Gilles
>
> On Mon, Nov 10, 2014 at 5:56 PM, Reuti wrote:
> Hi,
>
> On 10.11.2014 at 16:39, Ralph Castain wrote:
>
>
the internal or external name of the headnode
given in the machinefile - I hit ^C then. I attached the output of Open MPI
1.8.1 for this setup too.
-- Reuti
Wed Nov 12 16:43:12 CET 2014
[annemarie:01246] mca: base: components_register: registering oob components
[annemarie:0124
On 12.11.2014 at 17:27, Reuti wrote:
> On 11.11.2014 at 02:25, Ralph Castain wrote:
>
>> Another thing you can do is (a) ensure you built with --enable-debug, and
>> then (b) run it with -mca oob_base_verbose 100 (without the tcp_if_include
>> option) so we can watch
> no problem obfuscating the ip of the head node, i am only interested in
> netmasks and routes.
>
> Ralph Castain wrote:
>>
>>> On Nov 12, 2014, at 2:45 PM, Reuti wrote:
>>>
>>> On 12.11.2014 at 17:27, Reuti wrote:
>>>
>>>>
Gus,
On 13.11.2014 at 02:59, Gus Correa wrote:
> On 11/12/2014 05:45 PM, Reuti wrote:
>> On 12.11.2014 at 17:27, Reuti wrote:
>>
>>> On 11.11.2014 at 02:25, Ralph Castain wrote:
>>>
>>>> Another thing you can do is (a) ensure you built with --e
On 13.11.2014 at 00:34, Ralph Castain wrote:
>> On Nov 12, 2014, at 2:45 PM, Reuti wrote:
>>
>> On 12.11.2014 at 17:27, Reuti wrote:
>>
>>> On 11.11.2014 at 02:25, Ralph Castain wrote:
>>>
>>>> Another thing you can do is (a) ensure y
to both the oob and tcp/btl?
Yes.
> Obviously, this won’t make it for 1.8 as it is going to be fairly intrusive,
> but we can probably do something for 1.9
>
>> On Nov 13, 2014, at 4:23 AM, Reuti wrote:
>>
>> On 13.11.2014 at 00:34, Ralph Castain wrote:
>
ppreciate your replies and will read them thoroughly. I think it's best to
continue with the discussion after SC14. I don't want to put any burden on
anyone when time is tight.
-- Reuti
> These points are in no particular order...
>
> 0. Two fundamental points have been
Hi,
please have a look here:
http://www.open-mpi.org/faq/?category=building#installdirs
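In short, a sketch (the prefix is a placeholder):

./configure --prefix=$HOME/local/openmpi
make install
# if the installation is moved later on, OPAL_PREFIX can point to the new location:
export OPAL_PREFIX=/new/location/of/openmpi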
-- Reuti
On 09.12.2014 at 07:26, Manoj Vaghela wrote:
> Hi OpenMPI Users,
>
> I am trying to build OpenMPI libraries using standard configuration and
> compile procedure. It is just the on
them more easily (i.e.
terminate, suspend,...).
-- Reuti
http://www.drmaa.org/
https://arc.liv.ac.uk/SGE/howto/howto.html#DRMAA
> Alex
>
> 2014-12-12 22:35 GMT-02:00 Gilles Gouaillardet
> :
> Alex,
>
> You need MPI_Comm_disconnect at least.
> I am not sure if this is 1
different stdin/-out/-err in DRMAA by setting
drmaa_input_path/drmaa_output_path/drmaa_error_path for example?
-- Reuti
> mpi_comm_spawn("/bin/sh","-c","siesta < infile",..) definitely does not work.
>
> Patching siesta to start as "siesta
tional blank.
==
I also noticed that I have to supply "-ldl" to `mpicc` to allow the compilation
of an application to succeed in 2.0.0.
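For example (the source file name is a placeholder):

mpicc -o hello hello.c -ldl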
-- Reuti
> On 11.08.2016 at 13:28, Reuti wrote:
>
> Hi,
>
> In the file orte/mca/plm/rsh/plm_rsh_component I see an if-statement, which
> seems to prevent the tight integration with SGE to start:
>
>if (NULL == mca_plm_rsh_component.agent) {
>
> Why is it there (i
a
>>
>> mostly because you still get to set the path once and use it many times
>> without duplicating code.
>>
>>
>> For what it's worth, I've seen Ralph's suggestion generalized to something
>> like
>>
>> PREFIX=$PWD/arch
d, try again later.
Sure, the name of the machine is allowed only after the additional "-inherit"
to `qrsh`. Please see below for the complete command line in 1.10.3; hence the
assembly also seems not to be done in the correct way.
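For reference, the expected ordering is (a sketch; host and command are placeholders):

qrsh -inherit <hostname> <command>    # the hostname has to follow -inherit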
-- Reuti
> On Aug 11, 2016, at 4:28 AM,
macro: AC_PROG_LIBTOOL
I recall having seen this before - how do I get rid of it? For now I fixed the
single source file just by hand.
-- Reuti
> As for the blank in the cmd line - that is likely due to a space reserved for
> some entry that you aren’t using (e.g., when someone manually
> how/why it got deleted.
>
> https://github.com/open-mpi/ompi/pull/1960
Yep, it's working again - thx.
But surely there was a reason behind the removal, which may be worth discussing
within the Open MPI team to avoid any side effects from fixing this issue.
-- Reuti
PS: The other items
On 16.08.2016 at 13:26, Jeff Squyres (jsquyres) wrote:
> On Aug 12, 2016, at 2:15 PM, Reuti wrote:
>>
>> I updated my tools to:
>>
>> autoconf-2.69
>> automake-1.15
>> libtool-2.4.6
>>
>> but I face with Open MPI's ./autogen.pl:
ch-mp as this is a
different implementation of MPI, not Open MPI. Also the default location of
Open MPI isn't mpich-mp.
- what does:
$ mpicc -show
$ which mpicc
output?
- which MPI library was used to build the parallel FFTW?
-- Reuti
> Undefined symbols for archit
I didn't find the time to look further into
it. See my post from Aug 11, 2016. With older versions of Open MPI it wasn't
necessary to supply it in addition.
-- Reuti
>
> Cheers,
>
> Gilles
>
>
>
> On Wednesday, September 14, 2016, Mahmood Naderan
> wrot
> I build libverbs from source first? Am I in the right direction?
The "-l" already implies the "lib" prefix when the linker searches for the
library. Hence "-libverbs" might be misleading due to the "lib" in the word, as it
unction `load_driver':
> (.text+0x331): undefined reference to `dlerror'
> /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libibverbs.a(src_libibverbs_la-init.o):
> In function `ibverbs_init':
> (.text+0xd25): undefined reference to `dlopen'
> /usr/lib/gcc/
d and computes).
Would it work to compile with a shared target and copy it to /shared on the
frontend?
-- Reuti
> An important question is that, how can I find out what is the name of the
> illegal instruction. Then, I hope to find the document that points which
> instruction se
march=bdver1, which
Gilles mentioned) or to tell me what it thinks it should compile for?
For pgcc there is -show and I can spot the target it discovered in the
USETPVAL= line.
-- Reuti
>
> The solution was (as stated by guys) building Siesta on the compute node. I
> have to say that I teste
ved from all nodes.
>
> While I know there are better ways to test OpenMPI's functionality,
> like compiling and using the programs in examples/, this is the method
> a specific client chose.
There are small "Hello world" programs like here:
http://mpitutorial.com/tutor
to. When I type in the command mpiexec -f hosts -n 4 ./applic
>
> I get this error
> [mpiexec@localhost.localdomain] HYDU_parse_hostfile
> (./utils/args/args.c:323): unable to open host file: hosts
As you mentioned MPICH and their Hydra startup, you had better ask on their list:
http://www.mpi
her. For a first test you can
start both with "mpiexec --bind-to none ..." and check whether you see a
different behavior.
`man mpiexec` mentions some hints about threads in applications.
-- Reuti
>
>
> Regards,
> Mahmood
>
>
ured to use SSH? (I mean the entries in `qconf
-sconf` for rsh_command and rsh_daemon).
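To check this (a sketch; the SSH paths shown are only typical values):

qconf -sconf | grep -E "rsh_command|rsh_daemon"
# an SSH-based setup would show something like:
# rsh_command    /usr/bin/ssh -X
# rsh_daemon     /usr/sbin/sshd -i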
-- Reuti
> Can see the gridengine component via:
>
> $ ompi_info -a | grep gridengine
> MCA ras: gridengine (MCA v2.1.0, API v2.0.0, Component v2.0.2)
> MCA ras gridengin
er.
Under which user account will the DVM daemons run? Are all users using the same
account?
-- Reuti
using DVM often leads to a terminated DVM once a process
returns with a non-zero exit code. But once the DVM is gone, the queued jobs
might be lost too, I fear. I would wish that the DVM could be more forgiving
(or that this behavior were adjustable, i.e. what to do in case of a non-zero exit code).
-- Reuti
Hi,
Only by reading recent posts did I become aware of the DVM. This would be a
welcome feature for our setup*. But not all options seem to work as expected -
is it still a work in progress, or should everything work as advertised?
1)
$ soft@server:~> orte-submit -cf foo --hnp file:/home/reuti/dvmuri -
o the 1.10.6 (use SGE/qrsh)
> one? Are there mca params to set this?
>
> If you need more info, please let me know. (Job submitting machine and target
> cluster are the same with all tests. SW is residing in AFS directories
> visible on all machines. Parameter "plm_rsh_disable_qrsh&
> On 22.03.2017 at 15:31, Heinz-Ado Arnolds wrote:
>
> Dear Reuti,
>
> thanks a lot, you're right! But why did the default behavior change but not
> the value of this parameter:
>
> 2.1.0: MCA plm rsh: parameter "plm_rsh_agent" (current value: &
gone after following the hints in the discussion link you posted?
As far as I can see, it's still about "libevent".
-- Reuti
>
> *** C++ compiler and preprocessor
> checking whether we are using the GNU C++ compiler... yes
> checking whether pgc++ accepts -g... yes
> checking
or the same part of the CPU, essentially
becoming a bottleneck. But using each half of a CPU for two (or even more)
applications will allow a better interleaving in the demand for resources. To
allow this in the best way: no taskset or binding to cores, let the Linux
kernel and CPU do their best - Y
d by the configure tests, that's a bit of a problem. Just
> adding another -E before $@ should fix the problem.
It's often suggested to use printf instead of the non-portable echo.
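For example, a portable replacement (a sketch):

printf '%s\n' "$*"    # print all arguments without any escape processing
# instead of the non-portable: echo -E "$*"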
-- Reuti
>
> Prentice
>
> On 04/03/2017 03:54 PM, Prentice Bisbal wrote:
>>
On 03.04.2017 at 23:07, Prentice Bisbal wrote:
> FYI - the proposed 'here-doc' solution below didn't work for me, it produced
> an error. Neither did printf. When I used printf, only the first arg was
> passed along:
>
> #!/bin/bash
>
> realcmd=
mpilation in my home
directory by a plain `export`. I can spot:
$ ldd libmpi_cxx.so.20
…
libstdc++.so.6 =>
/home/reuti/local/gcc-6.2.0/lib64/../lib64/libstdc++.so.6 (0x7f184d2e2000)
So this looks fine (although /lib64/../lib64/ looks nasty). In the library, the
ht it might be because of:
- We define plm_rsh_agent=foo in $OMPI_ROOT/etc/openmpi-mca-params.conf
- We compiled with --with-sge
But even when started on the command line via `ssh` to the nodes, there seems to
be no automatic core binding taking place any longer.
--
this socket has
other jobs running (by accident).
So, this is solved - I wasn't aware of the binding by socket.
But I can't see a binding by core for a number of processes <= 2. Does it mean 2
per node or 2 overall for the `mpiexec`?
-- Reuti
>
>> On Apr 9, 2017, at 3:4
y that warning in addition that the memory
couldn't be bound.
BTW: I always had to use -ldl when using `mpicc`. Now that I compiled in
libnuma, this necessity is gone.
-- Reuti
h looks like being bound to socket.
-- Reuti
> You can always override these behaviors.
>
>> On Apr 9, 2017, at 3:45 PM, Reuti wrote:
>>
>>>> But I can't see a binding by core for number of processes <= 2. Does it
>>>> mean 2 per node or 2 ov
> On 10.04.2017 at 00:45, Reuti wrote:
> […]BTW: I always had to use -ldl when using `mpicc`. Now, that I compiled in
> libnuma, this necessity is gone.
Looks like I compiled too many versions in the last couple of days. The -ldl is
necessary in case --disable-shared --enable-s
> On 10.04.2017 at 17:27, r...@open-mpi.org wrote:
>
>
>> On Apr 10, 2017, at 1:37 AM, Reuti wrote:
>>
>>>
>>> On 10.04.2017 at 01:58, r...@open-mpi.org wrote:
>>>
>>> Let me try to clarify. If you launch a job that has only 1 o
MPI process or is the application issuing many `mpiexec` during
its runtime?
Is there any limit on how often `ssh` may access a node in a given timeframe? Do
you use any queuing system?
-- Reuti
Due to the last post in this thread, the copy I suggested seems not to be
possible, but I also want to test whether this post goes through to the list
now.
-- Reuti
===
Hi,
> On 19.04.2017 at 19:53, Jim Edwards wrote:
>
> Hi,
>
> I have openmpi-2.0.2 builds on two differe
ed
place, an appropriate message should go to stderr and the exit code should be set to 1.
-- Reuti
On 25.04.2017 at 17:27, Reuti wrote:
> Hi,
>
> In case Open MPI is moved to a different location than it was installed into
> initially, one has to export OPAL_PREFIX. While checking for the availability
> of the GridEngine
the intended task the only option is to use a single machine with as many
cores as possible AFAICS.
-- Reuti
ing to download the
community edition (even the evaluation link on the Spectrum MPI page does the
same).
-- Reuti
> based on OpenMPI, so I hope there are some MPI expert can help me to solve
> the problem.
>
> When I run a simple Hello World MPI program, I get the follow error message:
As I think it's not relevant to Open MPI itself, I answered in PM only.
-- Reuti
> On 18.05.2017 at 18:55, do...@mail.com wrote:
>
> On Tue, 9 May 2017 00:30:38 +0200
> Reuti wrote:
>> Hi,
>>
>> Am 08.05.2017 um 23:25 schrieb David Niklas:
>>