e for novice users or implement the RPATH option?
Out of curiosity: on Mac OS X it finds the necessary library automatically. Which setting achieves this? `otool -L` lists the correct one, the one used during compilation.
-- Reuti
>
> On Wed, Sep 12, 2012 at 1:52 PM, Jed Brown wrote
e for novice users or implement the RPATH option?
I have another question here. It could be fixed by specifying before the
compilation with `mpicc`:
export OMPI_LDFLAGS="-Wl,-rpath,/home/reuti/local/openmpi-1.6.3_shared_gcc/lib"
But this doesn't work, as the original build only included -L to spot the location. I also wonder about the version: the current one seems to be libopen-pal.so.4 -> libopen-pal.so.4.0.3. Which version are you using?
-- Reuti
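As a side note, a quick way to verify whether an rpath really ended up in a binary (a minimal sketch; the binary and library names are only placeholders):
$ readelf -d ./a.out | grep -i rpath     # Linux: shows RPATH/RUNPATH entries
$ ldd ./a.out | grep libopen-pal         # which libopen-pal.so is actually resolved
$ otool -l ./a.out | grep -A2 LC_RPATH   # Mac OS X counterpart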
On 13.11.2012 at 19:26, huaibao zhang wrote:
> Hi Reuti,
>
> Thanks for your answer. I really appreciate it.
> I am using an old version 1.4.3. for my code. If I only type $./configure,
> it will compile
Well, it will "configure", but afterwards you need `make` an
On 12.11.2012 at 10:53, Iliev, Hristo wrote:
> Hi, Reuti,
>
> Here is an informative article on dynamic libraries path in OS X:
>
> https://blogs.oracle.com/dipol/entry/dynamic_libraries_rpath_and_mac
Thx. - Reuti
> The first comment is very informative too.
>
>
AFAICS this paper refers to the IOS from Cisco, not iOS from Apple.
-- Reuti
> I would never envision a system where a user has a device in his pocket that
> is actually doing "something" behind his back... mine was a simple issue with
> having devices sitting on my desk, which I use
g the command, and all the shared libraries were resolved:
You checked mpirun, but not orted, which is missing a "libimf.so" from Intel. Is the Intel libimf.so from the redistributable archive present on all nodes?
-- Reuti
>
> ldd /release/cfd/openmpi-intel/bin/mpirun
t; you will tell MPI that the file "foobar" is located
on NFS.
(page 392 in the MPI 2.2 standard)
-- Reuti
n use them all.
Can the users use a plain ssh between the nodes? If they are forced to use the
TM of Torque instead, it should be impossible to start a job on a non-granted
machine.
-- Reuti
> Thanks,
>
> D.
>
On 06.02.2013 at 12:23, Duke Nguyen wrote:
> On 2/6/13 1:03 AM, Gus Correa wrote:
> >
>
> > On 02/05/2013 08:52 AM, Jeff Squyres (jsquyres) wrote:
>
>
> >> To add to what Reuti said, if you enable PBS
On 06.02.2013 at 16:45, Duke Nguyen wrote:
> On 2/6/13 10:06 PM, Jeff Squyres (jsquyres) wrote:
>> On Feb 6, 2013, at 5:11 AM, Reuti wrote:
>>
>>>> Thanks Reuti and Jeff, you are right, users should not be allowed to ss
: executing task of job 84552 failed: execution daemon on host
> "node02" didn't accept task
This is a good sign, as it tries to use `qrsh -inherit ...` already. Can you
confirm the following settings:
$ qconf -sp orte
...
control_slaves TRUE
$ qconf -sq all.q
...
shell_start
t set environment variables in case you use `qrsh` without a command or `qlogin`, and some default hostfile will be used instead (unless you provide one). Below, with the supplied command, it should be fine.
-- Reuti
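For reference, a typical tight-integration parallel environment for the SGE setup discussed above looks roughly like this on my clusters (a sketch; apart from control_slaves TRUE and job_is_first_task FALSE the values are site-specific choices):
$ qconf -sp orte
pe_name            orte
slots              999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min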
> time /commun/data/packages/openmpi/bin/mpirun -np 7 /path/to/a.outarg1
> fails as expected
So it got the slot counts in the correct way.
What am I missing?
-- Reuti
reuti@node006:~> mpiexec -cpus-per-proc 2 -report-bindings -hostfile machines
-np 64 ./mpihello
--
An invalid physical processor
count. What should it be in this case - "unknown",
"infinity"?
-- Reuti
>
> I'll have to take a look at this and get back to you on it.
>
> On Feb 27, 2013, at 3:15 PM, Reuti wrote:
>
>> Hi,
>>
>> I have an issue using the option -cp
On 28.02.2013 at 08:58, Reuti wrote:
> On 28.02.2013 at 06:55, Ralph Castain wrote:
>
>> I don't off-hand see a problem, though I do note that your "working" version
>> incorrectly reports the universe size as 2!
>
> Yes, it was 2 in the case when it w
On 28.02.2013 at 17:29, Ralph Castain wrote:
>
> On Feb 28, 2013, at 6:17 AM, Reuti wrote:
>
>> On 28.02.2013 at 08:58, Reuti wrote:
>>
>>> On 28.02.2013 at 06:55, Ralph Castain wrote:
>>>
>>>> I don't off-hand see a problem, though
cessary?
It is of course a feasible workaround for now to get the intended behavior by supplying just an additional hostfile.
But regarding my recent email I also wonder about the difference between
running on the command line and inside SGE. In the latter case the overall
universe is correc
On 28.02.2013 at 19:21, Ralph Castain wrote:
>
> On Feb 28, 2013, at 9:53 AM, Reuti wrote:
>
>> On 28.02.2013 at 17:54, Ralph Castain wrote:
>>
>>> Hmmm... the problem is that we are mapping procs using the provided slots
>>> instead of dividing th
On 28.02.2013 at 19:50, Reuti wrote:
> On 28.02.2013 at 19:21, Ralph Castain wrote:
>
>>
>> On Feb 28, 2013, at 9:53 AM, Reuti wrote:
>>
>>> On 28.02.2013 at 17:54, Ralph Castain wrote:
>>>
>>>> Hmmm... the problem is that we are m
9.o: In function `main':
> hello_c.c:(.text+0x1d): undefined reference to `ompi_mpi_comm_world'
> hello_c.c:(.text+0x2b): undefined reference to `ompi_mpi_comm_world'
What is the output if you compile in addition with -v (verbose)?
-- Reuti
> collect2: ld returned 1 exit
's homepage and the second hit will bring you there.
-- Reuti
> Thanks a lot for any help.
>
> Devendra
>
what I want.
> Is there any way to control that all the SSH requests are sent from the node
> where mpirun executed, to all the nodes?
> I had checked all the orte parameters, and no answer found. Please give some
> suggestions.
Depending on the number of nodes and in case you don'
this is related to this issue, but at least worth mentioning in this context.
-- Reuti
> Regards,
> Tetsuya Mishima
>
>> Oy; that's weird.
>>
>> I'm afraid we're going to have to wait for Ralph to answer why that is
> happening -- sorry!
> I suspect that your original LD_LIBRARY_PATH was empty, so now the path
> starts with a ":" and makes bash unhappy. Try reversing the order as above
> and it might work.
AFAIK additional colons don't matter, but I nevertheless prefer, for cosmetic reasons:
$
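One possible form of such an assignment (a sketch, not necessarily the exact line; /opt/openmpi is a placeholder). The ${VAR:+...} expansion appends the old value only when it is set and non-empty, so no stray colon appears:
$ export LD_LIBRARY_PATH=/opt/openmpi/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
$ export PATH=/opt/openmpi/bin:$PATH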
; --
I got this when I ran it on the command line and specified a hostfile on my own. The weird thing was that it worked fine when the job was submitted by SGE. Then the allocation was correct, like the hostfile being h
s to outside machines of the set-up cluster: it's necessary to negotiate with the admin to allow jobs to be scheduled there.
-- Reuti
> Thanks
> Ahsan
ler version they suggest, and Open MPI was compiled with the same version? Is it running fine in serial mode? Did the `make check` of abinit succeed?
-- Reuti
> mpirun noticed that process rank 0 with PID 16099 on node biobos exited on
> signal 11 (Segmentation fault).
>
> The system
limited; it may be limited by a
> system limit or implementation. As we run up to 120 threads per rank and
> many applications have threadprivate data regions, ability to run without
> considering stack limit is the exception rather than the rule.
Even if I were the only user on a cluster of machines, I would define this in the queuing system to set the limits for the job.
-- Reuti
> --
> Tim Prince
>
--
> mpirun was unable to launch the specified application as it could not find an
> executable:
`ulimit` is a shell builtin:
$ type ulimit
ulimit is a shell builtin
It should work with:
$ /usr/local/bin/mpirun -npernode 1 -tag-output sh -c
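For illustration only, a complete call of that form might look like this (the ulimit option is just an example and was not part of the original command):
$ /usr/local/bin/mpirun -npernode 1 -tag-output sh -c "ulimit -a"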
's up to the queuing system then to avoid scheduling additional jobs to a machine unless the remaining memory is sufficient for their execution in such a situation.
-- Reuti
> Duke Nguyen wrote:
>> On 3/30/13 3:13 PM, Patrick Bégou wrote:
>>> I do not know about you
Hi,
On 30.03.2013 at 15:35, Gustavo Correa wrote:
> On Mar 30, 2013, at 10:02 AM, Duke Nguyen wrote:
>
>> On 3/30/13 8:20 PM, Reuti wrote:
>>> On 30.03.2013 at 13:26, Tim Prince wrote:
>>>
>>>> On 03/30/2013 06:36 AM, Duke Nguyen wrote:
>>>
>
> If I comment out the MPI lines and run the program serially (but still
> compiled with mpif90), then I get the expected output value 4.
Nope, for me it's still 0 then.
-- Reuti
> I'm sure I must be overlooking something basic here. Please enlighten me.
> Does this ha
your own with which version of the original gcc?
-- Reuti
to the path of the Open MPI copy before
executing `mpiexec`.
http://www.open-mpi.org/faq/?category=building#installdirs
-- Reuti
>
> Thanks
> MM
with status 137 while attempting
> to launch so we are aborting.
I wonder why it raised the failure only after running for hours. As 137 = 128 + 9, it was killed, maybe by the queuing system due to the set time limit? If you check the accounting, what is the output of:
$ qacct -j
-- Reuti
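E.g., assuming Grid Engine accounting is in place, something along these lines (the job id and the field selection are only examples):
$ qacct -j 12345 | grep -E 'exit_status|failed|ru_wallclock|maxvmem'   # 12345 is a placeholder job id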
ich mpirun
show? Did you also recompile your application?
-- Reuti
> mca: base: component_find: unable to open
> /usr/local/lib/openmpi/mca_ess_hnp:
> /usr/local/lib/openmpi/mca_ess_hnp.so: undefined symbol:
> orte_local_jobdata (ignored)
> mca: base: component_find: unable to open
mes a copy of `mpiexec`...
-- Reuti
> Sent from my iPhone
>
> On Jun 4, 2013, at 11:18 AM, Orion Poplawski wrote:
>
>> I'd like to be able to force mpirun to require being run under a gridengine
>> environment. Any ideas on how to achieve this, if possible?
do you observe in case you use it? The `qrsh` startup has been working fine for a long time now.
-- Reuti
> here's the
> relevant snippet from our startup script:
>
># the OMPI/SGE integration does not seem to work with
># our SGE version; so use the `mpi` PE and direct OMPI
>
On 19.06.2013 at 22:14, Riccardo Murri wrote:
> On 19 June 2013 20:42, Reuti wrote:
>> On 19.06.2013 at 19:43, Riccardo Murri wrote:
>>
>>> On 19 June 2013 16:01, Ralph Castain wrote:
>>>> How is OMPI picking up this hostfile? It isn't being specifie
> parallely?
as long as you execute `mpiexec` only on c1, it should work. But you can't
start a job on c2.
-- Reuti
> Regards,
> meng
>
>
ractive login in your ~/.bashrc to include this location on the slave
node.
-- Reuti
> On 29/07//2013 05:00, meng wrote:
>> Dear Reuti,
>> Thank you for the reply.
>> In examples directory on the computer c1, command "mpiexec -np 4 hello_c"
>> is correctly e
ifort` --disable-shared
and: http://www.open-mpi.org/faq/?category=building#static-build
Looks like you disabled the shared build without enabling a static build.
-- Reuti
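A sketch of a configure line that enables the static build explicitly (prefix and compiler names are only examples; adjust to your setup):
$ ./configure --prefix=$HOME/local/openmpi-static \
      CC=icc CXX=icpc F77=ifort FC=ifort \
      --enable-static --disable-shared
$ make -j8 all && make install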
>> $ make -j8 all
>> $ make install
>>
>> It does not work out.
>>
>> Thank you.
>>
" shows:
> MCA btl: openib (MCA v2.0, API v2.0, Component v1.6.5)
>
> So I now compiled openmpi with the option "--with-openib" and tried to
> run the intel MPI test.
What's the "intel MPI test" - is this an application from Intel's
On 01.08.2013 at 00:45, meng wrote:
> Dear Dani and Reuti,
>
> >> either install openmpi on each node, and setup
> >> /etc/profile.d/openmpi.{c,}sh and /etc/ld.so.conf.d/openmpi.conf files on
> >> both (preferred) or install to a common file system (e.
/eheien/B/
> [xxx] ~ > mpicc
> dyld: Library not loaded: /Users/eheien/A/lib/libopen-pal.4.dylib
> Referenced from: /Users/eheien/B/bin/mpicc
Besides setting OPAL_PREFIX, the DYLD_LIBRARY_PATH also needs to be adjusted to point to the new /Users/eheien/B/lib location.
-- Reuti
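I.e., roughly the following (a sketch using the paths from this thread; adjust as needed):
$ export OPAL_PREFIX=/Users/eheien/B
$ export PATH=$OPAL_PREFIX/bin:$PATH
$ export DYLD_LIBRARY_PATH=$OPAL_PREFIX/lib${DYLD_LIBRARY_PATH:+:$DYLD_LIBRARY_PATH}
$ mpicc --version    # should now resolve against the relocated copy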
> R
name`? Does it match the one in the machinefile?
For systemd there is a new command `hostnamectl --static set-hostname [NAME]`
to set it.
-- Reuti
On 26.08.2013 at 12:53, Federico Carotenuto wrote:
> Kind Reuti,
>
> Thanks for your quick reply!
>
>
> I'm afraid I didn't set a machinefile...that may be the problem: I'm fairly
> new to MPI and SSH and I'm still quite confused even after reading so
On 26.08.2013 at 14:33, Federico Carotenuto wrote:
> Kind Reuti,
>
> I'm starting to think I've got some compilation issue with MPI: I'm afraid I've
> got the MPICH 1 coming with the PGI compiler installation, because if I try
> to run mpiexec the terminal answ
Hi,
On 26.08.2013 at 18:10, Federico Carotenuto wrote:
> Kind Reuti,
>
> as you suggested I proceeded to install Openmpi 1.6.5
Good.
> and changed the environmental variable MPI_ROOT
No, there is no such variable that needs to be set (at least from Open MPI's point of view).
even
though this is possible with the -a option), but requesting the intended number
of cores will allow a job to run as soon as these resources are available. I.e.
you can submit several jobs at once but they will be executed one after the
other without oversubscribing the available resources.
--
lp:
#!/bin/sh
. ~/.profile
which mpiexec # remove this line when it's working
mpiexec ...
(Replace ~/.profile with ~/.bash_profile or ~/.bash_login in case you use
these. In case `mpiexec` is available even without these: is there something
like /etc/profile which needs to be sourced?)
-- Reuti
end up there.
-- Reuti
> Thank you in advance.
>
> Best Regards,
> Shi Wei
>
> > From: re...@staff.uni-marburg.de
> > Date: Tue, 27 Aug 2013 09:03:54 +0200
> > To: us...@open-mpi.org
> > Subject: Re: [OMPI users] Unable to schedule an MPI tasks
>
On 29.08.2013 at 10:41, Federico Carotenuto wrote:
> Kind Reuti,
>
> the output of which mpicc is that such program may be found in various
> packages (which can be installed with apt-get), while which mpiexec outputs
> nothing (goes back to the prompt).
You can compile and i
; /home/ian
>
> After doing some searching on the web (and coming across this thread),
There is another one:
http://www.open-mpi.org/community/lists/users/2010/03/12291.php
-- Reuti
> I suspected that the issue might be with some PATH setup or user permissions
> that weren'
Hi,
On 04.09.2013 at 19:21, Ralph Castain wrote:
> you can specify it with OMPI_TMPDIR in your environment, or "-mca
> orte_tmpdir_base " on your cmd line
Wouldn't --tmpdir=... do the same with `mpirun` for the latter way you mentioned?
-- Reuti
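For the command-line variant, a sketch (the scratch path is only a placeholder):
$ mpirun --mca orte_tmpdir_base /scratch/$USER -np 4 ./a.out
# or via the generic MCA environment mechanism:
$ export OMPI_MCA_orte_tmpdir_base=/scratch/$USER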
> On Sep 4, 201
def"
What about using:
F77 = mpif77
CC = mpicc
which should supply the paths and names automatically.
-- Reuti
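If you really need the underlying flags instead of the wrappers, the wrappers can print them (a sketch; the output differs per installation):
$ mpicc -showme:compile    # preprocessor/include flags
$ mpicc -showme:link       # linker flags and libraries
$ mpif77 -showme           # the full command line the wrapper would run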
> i need to know what is the linker library file for both fortran and c
> compilers and where i can find them in the build folder ( i think i should
> find the
onment
variables therein:
-x file=baz -x server=foobar
in each line and use these in each of the applications. Well, yes - this would
be hardcoded in some way.
-- Reuti
>
> Thanks
> claas
>
>
>
> The information contained in this m
rt (tried Version 14.0.0.080 Build 20130728
Using:
openmpi-1.4.5_gcc_4.7.2_shared
openmpi-1.6.5_intel_12.1.5_static
openmpi-1.6.5_intel_13.1.3_static
it's working as intended for me - no Infiniband in the game though.
-- Reuti
> and Version 11.1Build 20100806)
> (c compiler is
is there, but mpirun still fails in the way my original message specified.
>
> Has anyone ever seen anything like this before?
Is the location for the spool directory local or shared by NFS? Disk full?
-- Reuti
list of hosts it should use to the application? Maybe it's
now just running on only one machine - and/or can make use only of local OpenMP
inside MKL (yes, OpenMP here which is bound to run on a single machine only).
-- Reuti
PS: Do you have 16 real cores or 8 plus Hyperthreading?
>
compiled for a different version of Open MPI?
> (ignored)
Is the same version of Open MPI available on all machines, and is the intended one the first found in $LD_LIBRARY_PATH and $PATH?
-- Reuti
> c1bay2.31114Driver initialization failure on /dev/ipath (err=23)
> c1bay2.31116Driver initializ
ss ssh is also possible between the nodes? Using host-based authentication it's also possible to enable it for all users without the need to prepare ssh keys.
-- Reuti
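A rough sketch of the pieces usually involved in host-based authentication (file locations can differ per distribution; this is not a complete recipe):
# on every node, server side (/etc/ssh/sshd_config):
HostbasedAuthentication yes
# /etc/ssh/shosts.equiv lists one cluster node per line

# on every node, client side (/etc/ssh/ssh_config):
HostbasedAuthentication yes
EnableSSHKeysign yes
# and all nodes' host keys collected in /etc/ssh/ssh_known_hosts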
> /Christoffer
>
>
> 2013/11/11 Ralph Castain
> Add --enable-debug to your configure and run it wi
,
> despite the fact that some nodes only have two processors and some have more.
Did you also request 24 cores when submitting the job itself?
-- Reuti
> Is there a way to have OpenMPI "gracefully" oversubscribe nodes by allocating
> instances based on the "np=
On 22.11.2013 at 18:56, Jason Gans wrote:
> On 11/22/13 10:47 AM, Reuti wrote:
>> Hi,
>>
>> On 22.11.2013 at 17:32, Gans, Jason D wrote:
>>
>>> I would like to run an instance of my application on every *core* of a
>>> small cluster. I am using
one in few of the nodes(total it will be >8 for 8
> nodes)
Do you have more than one network interface in each machine with different
names?
-- Reuti
> compute-0-6: tel 12279 1 0 18:54 ?00:00:00 orted
> --daemonize -mca ess env -mca orte_ess_jobid 744292352 -mca
des file to Torque (where all of the nodes were
> listed
> as having a large number of processors, i.e. np=8). Now I can submit jobs
> using:
>
> qsub -I -l nodes=n:ppn=2+n0001:ppn=2+n0002:ppn=8+...
This shouldn't be necessary when Torque knows the number of cores in each
m
On 25.11.2013 at 14:25, Meredith, Karl wrote:
> I do have these two environment variables set:
>
> LD_LIBRARY_PATH=/Users/meredithk/tools/openmpi/lib
On a Mac it should be DYLD_LIBRARY_PATH - and are there *.dylib files in your /Users/meredithk/tools/openmpi/lib?
-- Reuti
>
waiting for job 8116.manage.cluster to start
> qsub: job 8116.manage.cluster ready
Are the environment variables of Torque also set in an interactive session?
What is the output of:
$ env | grep PBS
inside such a session.
-- Reuti
> [mishima@node03 ~]$ cd ~/Ducom/testbed2
> [mishim
nerate any temporary directory therein, and any additional one created by a
batch job should go to $TMPDIR which is created and removed by Torque for your
particular job. It might be that Open MPI is not tightly integrated into your
Torque installation. Did you ever have the chance to p
e before, on the first 8 nodes.
Is the same version of Open MPI also installed on the new nodes?
-- Reuti
> But after adding new nodes, everything is fucked up and i have no idea why:(
>
> #*** The MPI_Comm_f2c() function was called after MPI_FINALIZE was invoked.
> *** This is dis
en major release of Open MPI is guaranteed to have the same ABI.
-- Reuti
> Grepping through the Open MPI installations for torque used during
> installation gave me some hits in binaries like mpirun or the static libs.
> I would appreciate any hints.
5-2400 has only 4 cores and no threads.
It depends on the algorithm how much data has to be exchanged between the processes, and this can indeed be worse across a network.
Also: does the algorithm scale linearly when used on node1 alone with 8 cores? When it's "35.7615" with 4 cores, what result do you get with 8 cores on that machine?
-- Reuti
Quoting Victor :
Thanks for the reply Reuti,
There are two machines: Node1 with 12 physical cores (dual 6 core Xeon) and
Do you have this CPU?
http://ark.intel.com/de/products/37109/Intel-Xeon-Processor-X5560-8M-Cache-2_80-GHz-6_40-GTs-Intel-QPI
-- Reuti
Node2 with 4 physical cores (i5
http://www.pgroup.com/lit/articles/insider/v4n1a2.htm
pgc++ uses the new ABI.
-- Reuti
> Do you have the latest version of your PGI compiler suite in that series?
>
>
> On Jan 29, 2014, at 12:10 PM, Jiri Kraus wrote:
>
>> Hi Jeff,
>>
>> thanks for tak
Bad address(1)
>
>
> Any ideas about what is going on or what I can do to fix it?
>
> I am using the openmpi-bin 1.4.2-4 Debian package on a cluster running Debian
> squeeze.
I suggest moving to a newer version, either the stable 1.6.5 or the feature release 1.7.3,
before doing fu
e link compatible with `gcc`, while `pgcpp` links with
`pgcc` objects.
-- Reuti
> Thanks
>
> Jiri
>
>> -Original Message-
>> Date: Wed, 29 Jan 2014 18:12:46 +
>> From: "Jeff Squyres (jsquyres)"
>> To: Open MPI Users
>>
with g++. On the
> other hand pgcpp implements its own ABI and is compatible with itself.
thx for this clarification.
-- Reuti
> Jiri
>
> Sent from my Nexus 7, I apologize for spelling errors and auto correction
> typos.
>
> -Original Message-
> Date:
l of
> /tmp/openmpi-sessions-${USER}*
> Since we do that kind of testing since many years, I also agree it is not a
> widespread issue... But it just occured 2 times in the last 3 days!!! :-/
What about using a queuing system? Open MPI will put the created files into a
subdirectory dedi
The ~/.bashrc is by default only sourced for a non-interactive login. Is the command available after you source it by hand?
-- Reuti
NB: To avoid picking up any system-wide `mpicc` first, it's safer to put your custom installation first in both assignments of the environment vari
ve logins
or simply define it therein too.
Please have a look at `man bash` section "INVOCATION".
-- Reuti
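E.g., a sketch of what I mean for ~/.bashrc (the installation path is a placeholder); the exports have to appear above any early "return" for non-interactive shells that many distribution skeleton files contain:
# ~/.bashrc
export PATH=$HOME/local/openmpi-1.7/bin:$PATH
export LD_LIBRARY_PATH=$HOME/local/openmpi-1.7/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}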
> On Fri, Feb 7, 2014 at 8:05 PM, Reuti wrote:
> Hi,
>
> Am 07.02.2014 um 17:42 schrieb Talla:
>
> > I downloaded openmpi 1.7 and followed the i
On 14.02.2014 at 11:23, tmish...@jcity.maeda.co.jp wrote:
> You've found it in the dream, interesting!
Sometimes one gets insights while dreaming:
https://skeptics.stackexchange.com/questions/5317/was-the-periodic-table-discovered-in-a-dream-by-dmitri-mendeleyev
-- Reuti
ro to report the bug and ask them to provide the old 3.0 BIOS
> in the meantime.
One thing to try: load default values in the BIOS. Sometimes they get screwed
up.
Then: some flash applications allow saving the original BIOS before any
upgrade. Maybe you can save the BIOS of one of the old nod
starting the job with
> /opt/openmpi-1.7.4/bin/mpirun.
For a non-interactive login ~/.bashrc is used.
-- Reuti
> Regarding open-mx, yes I will look into that next to see if the job is indeed
> using it. My msa flag is --mca mx self
Hi,
On 02.02.2014 at 00:23, Reuti wrote:
>>
>>> Thanks for taking a look. I just learned from PGI that this is a known bug
>>> that will be fixed in the 14.2 release (Februrary 2014).
Just out of curiosity: was there any update on this issue - looks like PGI 14.3 is
s=8 max-slots=8
>
> command:
> mpirun --host texthost -np 2 /home/client3/espresso-5.0.2/bin/pw.x -in
> AdnAu.rx.in | tee AdnAu.rx.out
--host denotes particular host(s). Please have a look at the
-hostfile/-machinefile option in the man page to use a file.
-- Reuti
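E.g., a sketch of a hostfile and the matching invocation (node names and slot counts are only examples):
$ cat machines
node1 slots=8
node2 slots=8
$ mpirun -hostfile machines -np 16 /home/client3/espresso-5.0.2/bin/pw.x -in AdnAu.rx.in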
> error:
> ssh
our Ubuntu Linux is installed on all machines?
-- Reuti
> But if I add a fourth node into the hostfile eg:
>
> Node1 slots=8 max-slots=8
> Node2 slots=8 max-slots=8
> Node3 slots=8 max-slots=8
> Node4 slots=8 max-slots=8
>
> I get this error after attempting mpirun -np 32 --h
I see "libtorque" in the output below - were these jobs running inside a
queuing system? The set paths might be different therein, and need to be set in
the job script in this case.
-- Reuti
> See http://www.open-mpi.org/faq/?category=running#run-prereqs,
> http://www.open-m
gnore
SIGUSR1/2 in orted (and maybe in mpirun, otherwise it also must be
trapped there). So it could be included in the action to the --no-
daemonize option given to orted when running under SGE. For now you
would also need this in the job script:
#!/bin/sh
trap '' usr2
expo
aemon
identified, or how
should it be started?
The output of:
ps f -eo pid,ppid,pgrp,user,group,command
might be more informative.
-- Reuti
Thanks
francesco pietra
this specific mpiexec
therein later on by adjusting the $PATH accordingly.
As we have only two different versions, we don't use the mentioned
"modules" package for now, but hardcode the appropriate PATH in the
jobscript for our queuing system.
-- Reuti
XGrid handling it, as you refer to a command-line tool? If you have a job script with three mpirun commands for three different programs, will XGrid transfer all three programs to the nodes for this job, or is it limited to just one mpirun being one XGrid job?
-- Reuti
Brian
te machines before a job starts as
well. Both options are useful when working in non-shared file systems.
this is fantastic - but is this a hidden feature, compile-time option,
or lack of documentation? When I issue "mpirun -help" I don't get
these options listed. Hence I wasn
On 18.09.2007 at 01:17, Josh Hursey wrote:
What version of Open MPI are you using?
This feature is not in any release at the moment, but is scheduled
for v1.3. It is available in the subversion development trunk which
Aha - thx. I simply looked at the latest 1.2.3 only. - Reuti
can be
/usr/bin/gcc as for me or something more?
-- Reuti
OMPI_CONFIGURE_DATE = Sat Oct 6 16:05:59 PDT 2007
OMPI_CONFIGURE_HOST = michael-clovers-computer.local
OMPI_CONFIGURE_USER = mrc
OMPI_CXX_ABSOLUTE = /usr/bin/g++
OMPI_F77_ABSOLUTE = none
OMPI_F90_ABSOLUTE = none
I am a
. For this
to work, control_slaves must be set to true, and no firewall may be running on the machines, as a random port will be used for communication.
-- Reuti
: with PVM it's possible, but rarely used I think.
-- Reuti
george.
On Oct 24, 2007, at 12:06 PM, Dean Dauger, Ph. D. wrote:
Hello,
I'd like to run Open MPI "by hand". I have a few ordinary
workstations I'd like to run a code using Open MPI on. They're i
), instead of sharing the calculation between the two processors, the system duplicates the executable and sends one to each processor.
this seems to be fine. What were you expecting? With OpenMP you will
see threads, and with Open MPI processes.
-- Reuti
i don't know what the h$%& is goin
ranted machines and slots.
-- Reuti
"mpirun". To allow the modification of the hostfile, I downloaded
OpenMPI
1.2 and attempted to do a "configure" with the options shown below:
./configure --prefix /opt/openmpi-1.2 --with-openib=/usr/local/ofed
--with-tm=/usr/local/pbs
immediately to the prompt. Here's my program:
is the mpirun the one from Open MPI?
-- Reuti
#include <mpi.h>
#include <iostream>
using namespace std;
int main(int argc,char* argv[])
{
int rank,nproc;
cout<<"Before"<<endl;
I also tried version 1.2.4 but still no