Re: [OMPI users] Setting RPATH for Open MPI libraries

2012-11-11 Thread Reuti
e for novice users or implement the RPATH option? Out of curiosity: on Mac OS X it finds the necessary library automatically - by which setting is this achieved? `otool -L ` lists the correct one, which was used during compilation. -- Reuti > > On Wed, Sep 12, 2012 at 1:52 PM, Jed Brown wrote

Re: [OMPI users] Setting RPATH for Open MPI libraries

2012-11-11 Thread Reuti
e for novice users or implement the RPATH option? I have another question here. It could be fixed by setting, before the compilation with `mpicc`: export OMPI_LDFLAGS="-Wl,-rpath,/home/reuti/local/openmpi-1.6.3_shared_gcc/lib" But this doesn't work, as the original included -L
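
A minimal sketch of an alternative (an assumption on my side, not the thread's outcome): instead of overriding OMPI_LDFLAGS at compile time - which replaces the wrapper's default flags, including the -L - the rpath can be baked into the wrapper defaults when Open MPI itself is configured:

$ ./configure --prefix=$HOME/local/openmpi-1.6.3_shared_gcc \
      --with-wrapper-ldflags="-Wl,-rpath,$HOME/local/openmpi-1.6.3_shared_gcc/lib"

Every binary built with this installation's `mpicc` then finds the libraries without any LD_LIBRARY_PATH setting.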

Re: [OMPI users] What is the default install library for PATH and LD_LIBRARY_PATH

2012-11-13 Thread Reuti
to spot the location. But I wonder about the version. The current one seems to be libopen-pal.so.4 -> libopen-pal.so.4.0.3 - which version are you using? -- Reuti

Re: [OMPI users] What is the default install library for PATH and LD_LIBRARY_PATH

2012-11-13 Thread Reuti
On 13.11.2012 at 19:26, huaibao zhang wrote: > Hi Reuti, > > Thanks for your answer. I really appreciate it. > I am using an old version 1.4.3. for my code. If I only type $./configure, > it will compile Well, it will "configure", but afterwards you need `make` an

Re: [OMPI users] Setting RPATH for Open MPI libraries

2012-11-14 Thread Reuti
On 12.11.2012 at 10:53, Iliev, Hristo wrote: > Hi, Reuti, > > Here is an informative article on dynamic libraries path in OS X: > > https://blogs.oracle.com/dipol/entry/dynamic_libraries_rpath_and_mac Thx. - Reuti > The first comment is very informative too. > >

Re: [OMPI users] cluster with iOS or Android devices?

2012-12-01 Thread Reuti
AFAICS this paper refers to the IOS from Cisco, not iOS from Apple. -- Reuti > I would never envision a system where a user has a device in his pocket that > is actually doing "something" behind his back...mine was a simple issue with > having devices sitting on my desk, which I use

Re: [OMPI users] EXTERNAL: Re: Problems with shared libraries while launching jobs

2012-12-18 Thread Reuti
g the command, and all the shared libraries were resolved: You checked the mpirun, but not the orted, which is missing a "libimf.so" from Intel. Is the Intel libimf.so from the redistributable archive present on all nodes? -- Reuti > > ldd /release/cfd/openmpi-intel/bin/mpirun

Re: [OMPI users] Invalid filename?

2013-01-21 Thread Reuti
" you will tell MPI that the file "foobar" is located on NFS. (page 392 in the MPI 2.2 standard) -- Reuti
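
A minimal C sketch of the naming convention described above ("foobar" as in the quote; assuming an MPI-IO implementation that honors this ROMIO-style prefix):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Init(&argc, &argv);
    /* the "nfs:" prefix tells the MPI-IO layer that "foobar" resides on NFS */
    MPI_File_open(MPI_COMM_WORLD, "nfs:foobar",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}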

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-05 Thread Reuti
n use them all. Can the users use a plain ssh between the nodes? If they are forced to use the TM of Torque instead, it should be impossible to start a job on a non-granted machine. -- Reuti > Thanks, > > D.

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-06 Thread Reuti
On 06.02.2013 at 12:23, Duke Nguyen wrote: > On 2/6/13 1:03 AM, Gus Correa wrote: > > > > > On 02/05/2013 08:52 AM, Jeff Squyres (jsquyres) wrote: > > > >> To add to what Reuti said, if you enable PBS

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-06 Thread Reuti
On 06.02.2013 at 16:45, Duke Nguyen wrote: > On 2/6/13 10:06 PM, Jeff Squyres (jsquyres) wrote: >> On Feb 6, 2013, at 5:11 AM, Reuti wrote: >> >>>> Thanks Reuti and Jeff, you are right, users should not be allowed to ss

Re: [OMPI users] newbie: Submitting Open MPI jobs to SGE ( `qsh -pe orte 4` fails)

2013-02-08 Thread Reuti
: executing task of job 84552 failed: execution daemon on host > "node02" didn't accept task This is a good sign, as it tries to use `qrsh -inherit ...` already. Can you confirm the following settings: $ qconf -sp orte ... control_slaves TRUE $ qconf -sq all.q ... shell_start

Re: [OMPI users] newbie: Submitting Open MPI jobs to SGE ( `qsh -pe orte 4` fails)

2013-02-11 Thread Reuti
t set environment variables in case you use `qrsh` without a command or `qlogin` and some default hostfile will be used instead (unless you provide one). Below with the supplied command it should be fine. -- Reuti > time /commun/data/packages/openmpi/bin/mpirun -np 7 /path/to/a.outarg1 &

[OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-27 Thread Reuti
> fails as expected So, it got the slot counts in the correct way. What am I missing? -- Reuti reuti@node006:~> mpiexec -cpus-per-proc 2 -report-bindings -hostfile machines -np 64 ./mpihello -- An invalid physical processor

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
count. What should it be in this case - "unknown", "infinity"? -- Reuti > > I'll have to take a look at this and get back to you on it. > > On Feb 27, 2013, at 3:15 PM, Reuti wrote: > >> Hi, >> >> I have an issue using the option -cp

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 08:58, Reuti wrote: > On 28.02.2013 at 06:55, Ralph Castain wrote: > >> I don't off-hand see a problem, though I do note that your "working" version >> incorrectly reports the universe size as 2! > > Yes, it was 2 in the case when it w

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 17:29, Ralph Castain wrote: > > On Feb 28, 2013, at 6:17 AM, Reuti wrote: > >> On 28.02.2013 at 08:58, Reuti wrote: >> >>> On 28.02.2013 at 06:55, Ralph Castain wrote: >>> >>>> I don't off-hand see a problem, though

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
cessary? It is of course for now a feasible workaround to get the intended behavior by supplying just an additional hostfile. But regarding my recent email I also wonder about the difference between running on the command line and inside SGE. In the latter case the overall universe is correc

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 19:21, Ralph Castain wrote: > > On Feb 28, 2013, at 9:53 AM, Reuti wrote: > >> On 28.02.2013 at 17:54, Ralph Castain wrote: >> >>> Hmmm... the problem is that we are mapping procs using the provided slots >>> instead of dividing th

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 19:50, Reuti wrote: > On 28.02.2013 at 19:21, Ralph Castain wrote: > >> >> On Feb 28, 2013, at 9:53 AM, Reuti wrote: >> >>> On 28.02.2013 at 17:54, Ralph Castain wrote: >>> >>>> Hmmm... the problem is that we are m

Re: [OMPI users] openmpi-1.6.4 mpicc fails to compile hello_c.c

2013-03-07 Thread Reuti
9.o: In function `main': > hello_c.c:(.text+0x1d): undefined reference to `ompi_mpi_comm_world' > hello_c.c:(.text+0x2b): undefined reference to `ompi_mpi_comm_world' What is the output if you compile in addition with -v (verbose)? -- Reuti > collect2: ld returned 1 exit

Re: [OMPI users] Anyone has OpenMPI Logo (Vector Graphics only)

2013-03-11 Thread Reuti
's homepage and the second hit will bring you there. -- Reuti > Thanks a lot for any help. > > Devendra

Re: [OMPI users] a problem about mpirun and SSH when using Open MPI 1.7rc8

2013-03-14 Thread Reuti
what I want. > Is there any way to control that all the SSH requests are sent from the node > where mpirun executed, to all the nodes? > I had checked all the orte parameters, and no answer found. Please give some > suggestions. Depending on the number of nodes and in case you don'

Re: [OMPI users] modified hostfile does not work with openmpi1.7rc8

2013-03-19 Thread Reuti
this is related to this issue, but at least worth mentioning in this context. -- Reuti > Regards, > Tetsuya Mishima > >> Oy; that's weird. >> >> I'm afraid we're going to have to wait for Ralph to answer why that is > happening -- sorry! >

Re: [OMPI users] mpirun error output

2013-03-20 Thread Reuti
> I suspect that your original LD_LIBRARY_PATH was empty, so now the path > starts with a ":" and makes bash unhappy. Try reversing the order as above > and it might work. AFAIK additional colons don't matter, but I nevertheless prefer, for cosmetic reasons: $
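
A minimal sketch of such a cosmetically clean assignment (the library path is a placeholder):

$ export LD_LIBRARY_PATH=/path/to/openmpi/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

The ${VAR:+...} expansion appends the colon and the old value only if LD_LIBRARY_PATH was already set and non-empty, so neither a leading nor a trailing colon can appear.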

Re: [OMPI users] Problem with mpiexec --cpus-per-proc in multiple nodes in OMPI 1.6.4

2013-03-21 Thread Reuti
; -- I got this when I ran it on the command line and specified a hostfile on my own. The weird thing was that it was working fine when the job was submitted by SGE. Then the allocation was correct like the hostfile being h

Re: [OMPI users] Running openmpi jobs on two system

2013-03-22 Thread Reuti
s to outside machines of the set up cluster: it's necessary to negotiate with the admin to allow jobs to be scheduled there. -- Reuti > Thanks > Ahsan

Re: [OMPI users] memory per core/process

2013-03-30 Thread Reuti
ler version they suggest and Open MPI was compiled with the same version? It's running fine in serial mode? The `make check` of abinit succeeded? -- Reuti > mpirun noticed that process rank 0 with PID 16099 on node biobos exited on > signal 11 (Segmentation fault). > > The system

Re: [OMPI users] memory per core/process

2013-03-30 Thread Reuti
limited; it may be limited by a > system limit or implementation. As we run up to 120 threads per rank and > many applications have threadprivate data regions, ability to run without > considering stack limit is the exception rather than the rule. Even if I were the only user on a cluster of machines, I would define this in any queuing system to set the limits for the job. -- Reuti > -- > Tim Prince

Re: [OMPI users] memory per core/process

2013-04-02 Thread Reuti
-- > mpirun was unable to launch the specified application as it could not find an > executable: `ulimit` is a shell builtin: $ type ulimit ulimit is a shell builtin It should work with: $ /usr/local/bin/mpirun -npernode 1 -tag-output sh -c
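
A minimal sketch of the complete command (assuming the stack size limit is the one of interest):

$ /usr/local/bin/mpirun -npernode 1 -tag-output sh -c 'ulimit -s'

Since `ulimit` is a builtin, mpirun has to start a shell and let that shell evaluate it; handing `ulimit` to mpirun directly makes it look for an executable of that name.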

Re: [OMPI users] memory per core/process

2013-04-02 Thread Reuti
's up to the queuing system then to avoid scheduling additional jobs to a machine unless the remaining memory is sufficient for their execution in such a situation. -- Reuti > Duke Nguyen wrote: >> On 3/30/13 3:13 PM, Patrick Bégou wrote: >>> I do not know about you

Re: [OMPI users] memory per core/process

2013-04-02 Thread Reuti
Hi, On 30.03.2013 at 15:35, Gustavo Correa wrote: > On Mar 30, 2013, at 10:02 AM, Duke Nguyen wrote: > >> On 3/30/13 8:20 PM, Reuti wrote: >>> On 30.03.2013 at 13:26, Tim Prince wrote: >>> >>>> On 03/30/2013 06:36 AM, Duke Nguyen wrote: >>>

Re: [OMPI users] Confused on simple MPI/OpenMP program

2013-04-04 Thread Reuti
> > If I comment out the MPI lines and run the program serially (but still > compiled with mpif90), then I get the expected output value 4. Nope, for me it's still 0 then. -- Reuti > I'm sure I must be overlooking something basic here. Please enlighten me. > Does this ha

Re: [OMPI users] configure problem

2013-04-05 Thread Reuti
your own with which version of the original gcc? -- Reuti

Re: [OMPI users] Copying installed runtimes to another machine and using it

2013-04-23 Thread Reuti
to the path of the Open MPI copy before executing `mpiexec`. http://www.open-mpi.org/faq/?category=building#installdirs -- Reuti > > Thanks > MM

Re: [OMPI users] job termination on grid

2013-04-30 Thread Reuti
with status 137 while attempting > to launch so we are aborting. I wonder why the failure arose only after running for hours. As 137 = 128 + 9 it was killed, maybe by the queuing system due to the set time limit? If you check the accounting, what is the output of: $ qacct -j -- Reuti

Re: [OMPI users] running openmpi with specified lib path

2013-05-07 Thread Reuti
ich mpirun show? Did you also recompile your application? -- Reuti > mca: base: component_find: unable to open > /usr/local/lib/openmpi/mca_ess_hnp: > /usr/local/lib/openmpi/mca_ess_hnp.so: undefined symbol: > orte_local_jobdata (ignored) > mca: base: component_find: unable to open

Re: [OMPI users] Force mpirun to only run under gridengine

2013-06-04 Thread Reuti
mes a copy of `mpiexec`... -- Reuti > Sent from my iPhone > > On Jun 4, 2013, at 11:18 AM, Orion Poplawski wrote: > >> I'd like to be able to force mpirun to require being run under a gridengine >> environment. Any ideas on how to achieve this, if possible?

Re: [OMPI users] openmpi 1.6.3 fails to identify local host if its IP is 127.0.1.1

2013-06-19 Thread Reuti
do you observe in case you use it? The `qrsh` startup has been working fine for a long time now. -- Reuti > here's the > relevant snippet from our startup script: > ># the OMPI/SGE integration does not seem to work with ># our SGE version; so use the `mpi` PE and direct OMPI >

Re: [OMPI users] openmpi 1.6.3 fails to identify local host if its IP is 127.0.1.1

2013-06-19 Thread Reuti
On 19.06.2013 at 22:14, Riccardo Murri wrote: > On 19 June 2013 20:42, Reuti wrote: >> On 19.06.2013 at 19:43, Riccardo Murri wrote: >> >>> On 19 June 2013 16:01, Ralph Castain wrote: >>>> How is OMPI picking up this hostfile? It isn't being specifie

Re: [OMPI users] requirement on ssh when run openmpi

2013-07-27 Thread Reuti
> parallely? As long as you execute `mpiexec` only on c1, it should work. But you can't start a job on c2. -- Reuti > Regards, > meng

Re: [OMPI users] requirement on ssh when run openmpi

2013-07-29 Thread Reuti
ractive login in your ~/.bashrc to include this location on the slave node. -- Reuti > On 29/07//2013 05:00, meng wrote: >> Dear Reuti, >> Thank you for the reply. >> In examples directory on the computer c1, command "mpiexec -np 4 hello_c" >> is correctly e
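
A minimal sketch of such a ~/.bashrc addition on the slave node (the install location is a placeholder):

export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH

These lines have to come before any early return for non-interactive shells that a distribution's default ~/.bashrc may contain, since `ssh node command` starts a non-interactive shell.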

Re: [OMPI users] errors testing openmpi1.6.5 ----

2013-07-29 Thread Reuti
ifort` --disable-shared and: http://www.open-mpi.org/faq/?category=building#static-build Looks like you disabled the shared build without enabling a static build. -- Reuti >> $ make -j8 all >> $ make install >> >> It does not work out. >> >> Thank you. >>

Re: [OMPI users] openmpi+infiniband

2013-07-30 Thread Reuti
" shows: > MCA btl: openib (MCA v2.0, API v2.0, Component v1.6.5) > > So I now compiled openmpi with the option "--with-openib" and tried to > run the intel MPI test. What's the "intel MPI test" - is this an application from Intel's

Re: [OMPI users] requirement on ssh when run openmpi

2013-08-01 Thread Reuti
On 01.08.2013 at 00:45, meng wrote: > Dear Dani and Reuti, > > >> either install openmpi on each node, and setup > >> /etc/profile.d/openmpi.{c,}sh and /etc/ld.so.conf.d/openmpi.conf files on > >> both (preferred) or install to a common file system (e.

Re: [OMPI users] Re-locate OpenMPI installation on OS X

2013-08-16 Thread Reuti
/eheien/B/ > [xxx] ~ > mpicc > dyld: Library not loaded: /Users/eheien/A/lib/libopen-pal.4.dylib > Referenced from: /Users/eheien/B/bin/mpicc Besides setting OPAL_PREFIX, DYLD_LIBRARY_PATH also needs to be adjusted to point to the new /Users/eheien/B/lib location. -- Reuti > R
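
As a minimal sketch, assuming the tree was copied from /Users/eheien/A to /Users/eheien/B (hello.c is a placeholder source file):

$ export OPAL_PREFIX=/Users/eheien/B
$ export DYLD_LIBRARY_PATH=/Users/eheien/B/lib:$DYLD_LIBRARY_PATH
$ /Users/eheien/B/bin/mpicc hello.c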

Re: [OMPI users] mpirun tries to ssh to local machine

2013-08-26 Thread Reuti
name`? Does it match the one in the machinefile? For systemd there is a new command `hostnamectl --static set-hostname [NAME]` to set it. -- Reuti

Re: [OMPI users] mpirun tries to ssh to local machine

2013-08-26 Thread Reuti
On 26.08.2013 at 12:53, Federico Carotenuto wrote: > Kind Reuti, > > Thanks for your quick reply! > > > I'm afraid I didn't set a machinefile...that may be the problem: I'm fairly > new to MPI and SSH and I'm still quite confused even after reading so

Re: [OMPI users] mpirun tries to ssh to local machine

2013-08-26 Thread Reuti
On 26.08.2013 at 14:33, Federico Carotenuto wrote: > Kind Reuti, > > I'm starting to think I've got some compilation issue with MPI: I'm afraid I've > got the MPICH 1 coming with the PGI compiler installation, because if I try > to run mpiexec the terminal answ

Re: [OMPI users] mpirun tries to ssh to local machine

2013-08-26 Thread Reuti
Hi, On 26.08.2013 at 18:10, Federico Carotenuto wrote: > Kind Reuti, > > as you suggested I proceeded to install Openmpi 1.6.5 Good. > and changed the environmental variable MPI_ROOT No, no such variable needs to be set (at least from Open MPI's point of view).

Re: [OMPI users] Unable to schedule an MPI tasks

2013-08-27 Thread Reuti
even though this is possible with the -a option), but requesting the intended number of cores will allow a job to run as soon as these resources are available. I.e. you can submit several jobs at once but they will be executed one after the other without oversubscribing the available resources. --

Re: [OMPI users] Unable to schedule an MPI tasks

2013-08-27 Thread Reuti
lp:

#!/bin/sh
. ~/.profile
which mpiexec   # remove this line when it's working
mpiexec ...

(Replace ~/.profile with ~/.bash_profile or ~/.bash_login in case you use these. In case `mpiexec` is available even without these: is there something like /etc/profile which needs to be sourced?) -- Reuti

Re: [OMPI users] Unable to schedule an MPI tasks

2013-08-27 Thread Reuti
end up there. -- Reuti > Thank you in advance. > > Best Regards, > Shi Wei > > > From: re...@staff.uni-marburg.de > > Date: Tue, 27 Aug 2013 09:03:54 +0200 > > To: us...@open-mpi.org > > Subject: Re: [OMPI users] Unable to schedule an MPI tasks >

Re: [OMPI users] mpirun tries to ssh to local machine

2013-08-29 Thread Reuti
On 29.08.2013 at 10:41, Federico Carotenuto wrote: > Kind Reuti, > > the output of which mpicc is that such program may be found in various > packages (which can be installed with apt-get), while which mpiexec outputs > nothing (goes back to the prompt). You can compile and i

Re: [OMPI users] Able to run mpirun as root, but not as a user.

2013-09-03 Thread Reuti
; /home/ian > > After doing some searching on the web (and coming across this thread), There is another one: http://www.open-mpi.org/community/lists/users/2010/03/12291.php -- Reuti > I suspected that the issue might be with some PATH setup or user permissions > that weren'

Re: [OMPI users] Changing directory from /tmp

2013-09-04 Thread Reuti
Hi, On 04.09.2013 at 19:21, Ralph Castain wrote: > you can specify it with OMPI_TMPDIR in your environment, or "-mca > orte_tmpdir_base " on your cmd line Wouldn't --tmpdir=... do the same with `mpirun` for the latter way you mentioned? -- Reuti > On Sep 4, 201

Re: [OMPI users] linker library file for both fortran and c compilers

2013-09-08 Thread Reuti
def" What about using: F77 = mpif77 resp. CC = mpicc which should supply the paths and names automatically. -- Reuti > i need to know what is the linker library file for both fortran and c > compilers and where i can find them in the build folder ( i think i should > find the

Re: [OMPI users] Query name of appfile

2013-09-17 Thread Reuti
onment variables therein: -x file=baz -x server=foobar in each line and use these in each of the applications. Well, yes - this would be hardcoded in some way. -- Reuti > > Thanks > claas

Re: [OMPI users] openmpi, stdin and qlogic infiniband

2013-09-19 Thread Reuti
rt (tried Version 14.0.0.080 Build 20130728 Using: openmpi-1.4.5_gcc_4.7.2_shared openmpi-1.6.5_intel_12.1.5_static openmpi-1.6.5_intel_13.1.3_static it's working as intended for me - no Infiniband involved though. -- Reuti > and Version 11.1 Build 20100806) > (c compiler is

Re: [OMPI users] intermittent node file error running with torque/maui integration

2013-09-20 Thread Reuti
is there, but mpirun still fails in the way my original message specified. > > Has anyone ever seen anything like this before? Is the location for the spool directory local or shared by NFS? Disk full? -- Reuti

Re: [OMPI users] (no subject)

2013-10-07 Thread Reuti
list of hosts it should use to the application? Maybe it's now just running on only one machine - and/or can only make use of local OpenMP inside MKL (yes, OpenMP here, which is bound to run on a single machine only). -- Reuti PS: Do you have 16 real cores or 8 plus Hyperthreading? >

Re: [OMPI users] MPI SGE and IB don't work together

2013-10-28 Thread Reuti
compiled for a different version of Open MPI? > (ignored) Is the same version of Open MPI available on all machines and the first one in $LD_LIBRARY_PATH and $PATH to be targeted? -- Reuti > c1bay2.31114 Driver initialization failure on /dev/ipath (err=23) > c1bay2.31116 Driver initializ

Re: [OMPI users] Problem running on multiple nodes with Java bindings

2013-11-11 Thread Reuti
ss ssh is also possible between the nodes? Using hostbased authentication it's also possible to enable it for all users without the need to prepare ssh keys. -- Reuti > /Christoffer > > > 2013/11/11 Ralph Castain > Add --enable-debug to your configure and run it wi

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Reuti
, > despite the fact that some nodes only have two processors and some have more. You submitted the job itself by requesting 24 cores for it too? -- Reuti > Is there a way to have OpenMPI "gracefully" oversubscribe nodes by allocating > instances based on the "np=

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Reuti
On 22.11.2013 at 18:56, Jason Gans wrote: > On 11/22/13 10:47 AM, Reuti wrote: >> Hi, >> >> On 22.11.2013 at 17:32, Gans, Jason D wrote: >> >>> I would like to run an instance of my application on every *core* of a >>> small cluster. I am using

Re: [OMPI users] Request for help/suggestion

2013-11-22 Thread Reuti
one in few of the nodes(total it will be >8 for 8 > nodes) Do you have more than one network interface in each machine with different names? -- Reuti > compute-0-6: tel 12279 1 0 18:54 ?00:00:00 orted > --daemonize -mca ess env -mca orte_ess_jobid 744292352 -mca

Re: [OMPI users] Oversubscription of nodes with Torque and OpenMPI

2013-11-22 Thread Reuti
des file to Torque (where all of the nodes were > listed > as having a large number of processors, i.e. np=8). Now I can submit jobs > using: > > qsub -I -l nodes=n:ppn=2+n0001:ppn=2+n0002:ppn=8+... This shouldn't be necessary when Torque knows the number of cores in each m

Re: [OMPI users] open-mpi on Mac OS 10.9 (Mavericks)

2013-11-25 Thread Reuti
On 25.11.2013 at 14:25, Meredith, Karl wrote: > I do have these two environment variables set: > > LD_LIBRARY_PATH=/Users/meredithk/tools/openmpi/lib On a Mac it should be DYLD_LIBRARY_PATH - and there are *.dylib files in your /Users/meredithk/tools/openmpi/lib? -- Reuti >

Re: [OMPI users] openmpi-1.7.4a1r29646 with -hostfile option under Torque manager

2013-11-26 Thread Reuti
waiting for job 8116.manage.cluster to start > qsub: job 8116.manage.cluster ready Are the environment variables of Torque also set in an interactive session? What is the output of: $ env | grep PBS inside such a session. -- Reuti > [mishima@node03 ~]$ cd ~/Ducom/testbed2 > [mishim

Re: [OMPI users] Error: Unable to create the sub-directory (/tmp/openmpi etc...)

2013-12-17 Thread Reuti
nerate any temporary directory therein, and any additional one created by a batch job should go to $TMPDIR which is created and removed by Torque for your particular job. It might be that Open MPI is not tightly integrated into your Torque installation. Did you ever have the chance to p

Re: [OMPI users] random error bugging me..

2014-01-19 Thread Reuti
e before ... on the first 8 nodes. The same version of Open MPI is also installed on the new nodes? -- Reuti > But after adding new nodes, everything is fucked up and i have no idea why:( > > #*** The MPI_Comm_f2c() function was called after MPI_FINALIZE was invoked. > *** This is dis

Re: [OMPI users] Open MPI and multiple Torque versions

2014-01-27 Thread Reuti
en major release of Open MPI is guaranteed to have the same ABI. -- Reuti > Grepping through the Open MPI installations for torque used during > installation gave me some hits in binaries like mpirun or the static libs. > I would appreciate any hints.

Re: [OMPI users] Running on two nodes slower than running on one node

2014-01-29 Thread Reuti
5-2400 has only 4 cores and no threads. It depends on the algorithm how much data has to be exchanged between the processes, and this can indeed be worse when used across a network. Also: is the algorithm scaling linearly when used on node1 only with 8 cores? When it's "35.7615" with 4 cores, what result do you get with 8 cores on this machine? -- Reuti

Re: [OMPI users] Running on two nodes slower than running on one node

2014-01-29 Thread Reuti
Quoting Victor: Thanks for the reply Reuti, There are two machines: Node1 with 12 physical cores (dual 6 core Xeon) and Do you have this CPU? http://ark.intel.com/de/products/37109/Intel-Xeon-Processor-X5560-8M-Cache-2_80-GHz-6_40-GTs-Intel-QPI -- Reuti Node2 with 4 physical cores (i5

Re: [OMPI users] Compiling OpenMPI with PGI pgc++

2014-01-29 Thread Reuti
http://www.pgroup.com/lit/articles/insider/v4n1a2.htm pgc++ uses the new ABI. -- Reuti > Do you have the latest version of your PGI compiler suite in that series? > > > On Jan 29, 2014, at 12:10 PM, Jiri Kraus wrote: > >> Hi Jeff, >> >> thanks for tak

Re: [OMPI users] writev error: Bad address

2014-01-31 Thread Reuti
Bad address(1) > > > Any ideas about what is going on or what I can do to fix it? > > I am using the openmpi-bin 1.4.2-4 Debian package on a cluster running Debian > squeeze. I suggest moving to a newer version, either the stable 1.6.5 or the feature release 1.7.3, before doing fu

Re: [OMPI users] Compiling OpenMPI with PGI pgc++

2014-01-31 Thread Reuti
e link compatible with `gcc`, while `pgcpp` links with `pgcc` objects. -- Reuti > Thanks > > Jiri > >> -Original Message- >> Date: Wed, 29 Jan 2014 18:12:46 + >> From: "Jeff Squyres (jsquyres)" >> To: Open MPI Users >>

Re: [OMPI users] Compiling OpenMPI with PGI pgc++

2014-02-01 Thread Reuti
with g++. On the > other hand pgcpp implements its own ABI and is compatible with itself. thx for this clarification. -- Reuti > Jiri > > Sent from my Nexus 7, I apologize for spelling errors and auto correction > typos. > > -Original Message- > Date:

Re: [OMPI users] opal_os_dirpath_create: Error: Unable to create the, sub-directory

2014-02-03 Thread Reuti
l of > /tmp/openmpi-sessions-${USER}* > Since we do that kind of testing since many years, I also agree it is not a > widespread issue... But it just occured 2 times in the last 3 days!!! :-/ What about using a queuing system? Open MPI will put the created files into a subdirectory dedi

Re: [OMPI users] OpenMpi installation

2014-02-07 Thread Reuti
The ~/.bashrc is by default only sourced for a non-interactive login. Do you have access to the command when you source it by hand? -- Reuti NB: To avoid getting any system wide `mpicc`... first, it's safer to specify your custom installation first in both assignments of the environment vari

Re: [OMPI users] OpenMpi installation

2014-02-07 Thread Reuti
ve logins or simply define it therein too. Please have a look at `man bash` section "INVOCATION". -- Reuti > On Fri, Feb 7, 2014 at 8:05 PM, Reuti wrote: > Hi, > > On 07.02.2014 at 17:42, Talla wrote: > > > I downloaded openmpi 1.7 and followed the i

Re: [OMPI users] one more finding in openmpi-1.7.5a1

2014-02-14 Thread Reuti
On 14.02.2014 at 11:23, tmish...@jcity.maeda.co.jp wrote: > You've found it in the dream, interesting! It happens sometimes that one gets insights while dreaming: https://skeptics.stackexchange.com/questions/5317/was-the-periodic-table-discovered-in-a-dream-by-dmitri-mendeleyev -- Reuti

Re: [OMPI users] hwloc error in topology.c in OMPI 1.6.5

2014-02-28 Thread Reuti
ro to report the bug and ask them to provide the old 3.0 BIOS > in the meantime. One thing to try: load default values in the BIOS. Sometimes they get screwed up. Then: some flash applications allow saving the original BIOS before any upgrade. Maybe you can save the BIOS of one of the old nod

Re: [OMPI users] Heterogeneous cluster problem - mixing AMD and Intel nodes

2014-03-02 Thread Reuti
starting the job with > /opt/openmpi-1.7.4/bin/mpirun. For a non-interactive login ~/.bashrc is used. -- Reuti > Regarding open-mx, yes I will look into that next to see if the job is indeed > using it. My mca flag is --mca mx self

Re: [OMPI users] Compiling OpenMPI with PGI pgc++

2014-03-10 Thread Reuti
Hi, On 02.02.2014 at 00:23, Reuti wrote: >> >>> Thanks for taking a look. I just learned from PGI that this is a known bug >>> that will be fixed in the 14.2 release (February 2014). Just out of curiosity: was there any update on this issue - looks like PGI 14.3 is

Re: [OMPI users] ssh error

2014-03-11 Thread Reuti
s=8 max-slots=8 > > command: > mpirun --host texthost -np 2 /home/client3/espresso-5.0.2/bin/pw.x -in > AdnAu.rx.in | tee AdnAu.rx.out --host denotes particular host(s). Please have a look at the -hostfile/-machinefile option in the man page to use a file. -- Reuti > error: > ssh
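
A minimal sketch of the hostfile variant ("hosts" is a placeholder file name, reusing the entry quoted above):

$ cat hosts
texthost slots=8 max-slots=8
$ mpirun -hostfile hosts -np 2 /home/client3/espresso-5.0.2/bin/pw.x -in AdnAu.rx.in | tee AdnAu.rx.out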

Re: [OMPI users] Cannot run a job with more than 3 nodes

2014-03-12 Thread Reuti
our Ubuntu Linux is installed on all machines? -- Reuti > But if I add a fourth node into the hostfile eg: > > Node1 slots=8 max-slots=8 > Node2 slots=8 max-slots=8 > Node3 slots=8 max-slots=8 > Node4 slots=8 max-slots=8 > > I get this error after attempting mpirun -np 32 --h

Re: [OMPI users] trying to use personal copy of 1.7.4

2014-03-12 Thread Reuti
I see "libtorque" in the output below - were these jobs running inside a queuing system? The set paths might be different therein, and need to be set in the job script in this case. -- Reuti > See http://www.open-mpi.org/faq/?category=running#run-prereqs, > http://www.open-m

Re: [OMPI users] sge qdel fails

2007-07-23 Thread Reuti
gnore SIGUSR1/2 in orted (and maybe in mpirun, otherwise it also must be trapped there). So it could be included in the action to the --no-daemonize option given to orted when running under SGE. For now you would also need this in the job script: #!/bin/sh trap '' usr2 expo
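
A minimal job script sketch of this workaround (./a.out is a placeholder, assuming SGE's notify mechanism, which sends SIGUSR2 ahead of the final SIGKILL):

#!/bin/sh
# ignore SIGUSR2 in the job script itself, so the notify signal sent by
# `qdel` does not abort the script before the real kill arrives
trap '' usr2
mpirun ./a.out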

Re: [OMPI users] mpi daemon

2007-08-02 Thread Reuti
aemon identified, or how should it be started? The output of: ps f -eo pid,ppid,pgrp,user,group,command might be more informative. -- Reuti Thanks francesco pietra

Re: [OMPI users] Two different compilation of openmpi

2007-09-14 Thread Reuti
this specific mpiexec therein later on by adjusting the $PATH accordingly. As we have only two different versions, we don't use the mentioned "modules" package for now, but hardcode the appropriate PATH in the jobscript for our queuing system. --- Reuti

Re: [OMPI users] another mpirun + xgrid question

2007-09-17 Thread Reuti
XGrid handling it, as you refer to command-line-tool? If you have a jobscript with three mpirun-commands for three different programs, XGrid will transfer all three programs to the nodes for this job, or is it limited such that one mpirun is just one XGrid job? -- Reuti Brian

Re: [OMPI users] another mpirun + xgrid question

2007-09-17 Thread Reuti
te machines before a job starts as well. Both options are useful when working in non-shared file systems. this is fantastic - but is this a hidden feature, compile-time option, or lack of documentation? When I issue "mpirun -help" I don't get these options listed. Hence I wasn't

Re: [OMPI users] another mpirun + xgrid question

2007-09-18 Thread Reuti
On 18.09.2007 at 01:17, Josh Hursey wrote: What version of Open MPI are you using? This feature is not in any release at the moment, but is scheduled for v1.3. It is available in the subversion development trunk which Aha - thx. I simply looked at the latest 1.2.3 only. - Reuti can be

Re: [OMPI users] ompi-1.2.4 fails to make on iMac (10.4.10)

2007-10-08 Thread Reuti
/usr/bin/gcc as for me or something more? -- Reuti OMPI_CONFIGURE_DATE = Sat Oct 6 16:05:59 PDT 2007 OMPI_CONFIGURE_HOST = michael-clovers-computer.local OMPI_CONFIGURE_USER = mrc OMPI_CXX_ABSOLUTE = DISPLAY known /usr/bin/g++ OMPI_F77_ABSOLUTE = none OMPI_F90_ABSOLUTE = none I am a

Re: [OMPI users] OpenMPI + SGE Problem

2007-10-17 Thread Reuti
. For this to work, control_slaves must be set to true, and no firewall must be running on the machines, as a random port will be used for communication. -- Reuti

Re: [OMPI users] orterun "by hand"

2007-10-24 Thread Reuti
: with PVM it's possible, but rarely used I think. -- Reuti george. On Oct 24, 2007, at 12:06 PM, Dean Dauger, Ph. D. wrote: Hello, I'd like to run Open MPI "by hand". I have a few ordinary workstations I'd like to run a code using Open MPI on. They're i

Re: [OMPI users] Run a process double

2007-11-29 Thread Reuti
), instead of sharing the calculations between the two processors, the system duplicates the executable and sends one to each processor. this seems to be fine. What were you expecting? With OpenMP you will see threads, and with Open MPI processes. -- Reuti i don't know what the h$%& is goin

Re: [OMPI users] Torque and OpenMPI 1.2

2007-12-18 Thread Reuti
ranted machines and slots. -- Reuti "mpirun". To allow the modification of the hostfile, I downloaded OpenMPI 1.2 and attempted to do a "configure" with the options shown below: ./configure --prefix /opt/openmpi-1.2 --with-openib=/usr/local/ofed --with-tm=/usr/local/pbs

Re: [OMPI users] No output from mpirun

2007-12-30 Thread Reuti
immediately to the prompt. Here's my program: is the mpirun the one from Open MPI? -- Reuti #include <mpi.h> #include <iostream> using namespace std; int main(int argc,char* argv[]) { int rank,nproc; cout<<"Before"<I also tried version 1.2.4 but still no
