On 20.09.2011 at 13:52, Tim Prince wrote:
> On 9/20/2011 7:25 AM, Reuti wrote:
>> Hi,
>>
>> On 20.09.2011 at 00:41, Blosch, Edwin L wrote:
>>
>>> I am observing differences in floating-point results from an application
>>> program that appear
ation?
You can compile with the option -S to get the assembler output.
-- Reuti
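For example (a sketch assuming a GCC-compatible compiler behind the mpicc wrapper; the file names are made up):
$ mpicc -S -O0 flux.c -o flux_O0.s   # emit assembler instead of an object file
$ mpicc -S -O3 flux.c -o flux_O3.s
$ diff flux_O0.s flux_O3.s           # compare the generated instructions of the two builds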
> Again, most numbers going into the routine were checked, and there were no
> differences in the numbers out to 18 digits (i.e. beyond the precision of the
> FP representation). Yet, coming out of the ro
ss the necessary source's configure).
AFAIK this is fully supported by the GNU build tools, and to me it looks like it is working.
BTW: are you using PGI 10.9 because it's the last version you have access to? They are at 11.8 by now.
-- Reuti
It seems to be a configure behavior, but I don't unde
disable the tm completely with --without-tm if I'm not mistaken. But then
you lose the tight integration (at least when you are running across
multiple nodes).
-- Reuti
Cheers,
Bert
here is also a FAQ if you need fully static binaries:
http://www.open-mpi.org/faq/?category=mpi-apps#static-mpi-apps
-- Reuti
Brice
ssh" to start a slave task might get
a different entry for DISPLAY otherwise, depending on the free display ports,
so forwarding localhost:10.0 will most likely not work.
-- Reuti
On 28.09.2011 at 22:40, Xin Tong wrote:
I am wondering what the proper way is to stop an mpirun process and the
child processes it created. I tried to send SIGTERM, but it does not
respond to it. What kind of signal should I be sending to it?
To whom did you send the signal?
-- Reuti
Thanks
be necessary to send SIGTERM to the complete process group.
-- Reuti
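A minimal sketch (the PID and process-group values are only placeholders):
$ ps -o pid,pgid,cmd -C mpirun    # find mpirun's PID and its process group ID
$ kill -TERM -- -12345            # a negative PID addresses the whole process group 12345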
Thanks
Xin
On Fri, Sep 30, 2011 at 10:10 PM, Ralph Castain
wrote:
Sigterm should work - what version are you using?
Ralph
Sent from my iPad
On Sep 28, 2011, at 1:40 PM, Xin Tong
wrote:
> I am wondering w
you a parallel version faster. http://openmp.org/wp/
Nowadays many compilers support it. Nevertheless you have to touch your
application by hand and modify the source.
-- Reuti
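A compile/run sketch (assuming GCC and a hypothetical source file):
$ gcc -fopenmp -O2 solver.c -o solver   # enable the OpenMP pragmas at compile and link time
$ OMP_NUM_THREADS=4 ./solver            # run with four threads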
the first place. They may even be operated headless.
Otherwise: is there X11 running on all the nodes, or would it help to write
something to the local virtual console like /dev/vcs7 or /dev/console in a
text-based session?
-- Reuti
> It may be an easy task, but I'm new to this and did
processes to anywhere other than the screen where mpirun is executing.
What about writing to a local file (maybe a pipe) which the user then has to
tail on this particular machine?
-- Reuti
>
>>
>> 2011/11/14 Ralph Castain :
>>>
>>> On Nov 14, 2011, at
the one specified in the hostfile, as Open MPI won't use
this lookup file:
Host remotehost.com
    User user
ssh should then use the entries therein to initiate the connection. For details
you can have a look at `man ssh_config`.
-- Reuti
Hi Ralph,
On 25.11.2011 at 03:47, Ralph Castain wrote:
>
> On Nov 24, 2011, at 2:00 AM, Reuti wrote:
>
>> Hi,
>>
>> On 24.11.2011 at 05:26, Jaison Paul wrote:
>>
>>> I am trying to access OpenMPI processes over Internet using ssh and not
>>
have more than one queue per machine, the admin
might already have set up some RQS (Resource Quota Set) or an absolute limit of
slots across all queues residing on a host in the exechost definition. In this
case this needs to be adjusted too.
-- Reuti
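An illustrative RQS (the name and the limit of 8 are made up; see `man sge_resource_quota` for the exact syntax):
$ qconf -srqs
{
   name         max_slots_per_host
   description  cap the slots a single host provides across all queues
   enabled      TRUE
   limit        hosts {*} to slots=8
}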
> We had bad results [program hanging or a
t the
slot count.
$ man sge_pe # Check the options for the PE.
$ qconf -spl # Shows what PEs are defined.
$ qconf -mp orte # Check the allocation rule, what's there?
Then change the 8 in the job script to the number you used above.
-- Reuti
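For reference, a PE suitable for a tight integration might look like this (the values are only an example, remaining lines omitted):
$ qconf -sp orte
pe_name            orte
slots              999
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE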
> what change we should do to allow for oversu
it sounds like the ~/.ssh/authorized_keys on the master doesn't contain its
own public key (on a plain server you don't need it). Hence if you mount it
on the slaves, it's missing there as well.
-- Reuti
>> Please help me on this matter.
>>
>> ___
suggested. You could even define a starter_method in SGE to
define it for all users by default and avoid using -V:
#!/bin/sh
module() { ...command...here... }
export -f module
exec "${@}"
-- Reuti
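To activate such a starter script for a queue (the path and queue name are made up; note that `export -f` needs bash as the interpreter):
$ qconf -mq all.q
starter_method  /usr/local/sge/bin/module_starter.sh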
> The modules environment is defined, and works - only jobs that run across
> mu
On 12.01.2012 at 12:17, Shaandar Nyamtulga wrote:
> Dear Reuti
>
> Then what should I do? I am a novice in ssh and OpenMPI. Can you direct me a little
> bit further? I am quite confused.
> Thank you
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
on the file server.
-- Re
ssary to have a
high number of slots on the headnode for this queue, and always request one
slot on this machine in addition to the necessary ones on the computing nodes.
-- Reuti
> Sent from my iPad
>
> On Jan 24, 2012, at 5:54 AM, Ilias Miroslav wrote:
>
>> Dear experts,
x24-x86/qrsh -inherit
-nostdin -V pc15370 orted -mca ess env -mca orte_ess_jobid 1491402752 -mca
orte_ess_vpid 1 -mca orte_ess_num_procs 2 --hnp-uri
"1491402752.0;tcp://192.168.151.101:41663"
22302 ?        S      0:00 \_ /home/reuti/mpitest/Mpitest --child
and on the other side:
s compile for a common AMD64 platform, independent of where the compiler
runs and without command line options; maybe you can adjust it for your case
along the lines of the brief description in the user's guide.
-- Reuti
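As a sketch with GCC-style switches (PGI has an analogous -tp option; see its user's guide for the exact target names):
$ gcc -march=x86-64 -mtune=generic -O2 app.c -o app   # target the baseline AMD64 instruction set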
nds more like a real-time
queuing system and application, where this can be ensured to happen in time.
-- Reuti
> For this reason, I don't recall seeing any users using spawn_multiple
> (and also, IIRC, the call was introduced in MPI-2)... and you might
> want to make sure that nor
sufficient here (thread support won't hurt though).
> My co-worker clarified today that he actually had this exact code working
> last year on a test cluster that we set up. Now we're trying to put
> together a production cluster with the latest version of Open MPI and SGE
count as a process. I get a 3+1 allocation instead of 2+2 (which is what was
granted by SGE). If started with "mpiexec -np 1 ./Mpitest" all is fine.
-- Reuti
> On Jan 31, 2012, at 1:08 PM, Dave Love wrote:
>
>> Reuti writes:
>>
>>> Maybe it's a side effect
\_ ./Mpitest
 9509 ?        Sl     0:00 \_ /usr/sge/bin/lx24-x86/qrsh -inherit
                               -nostdin -V pc15381 orted -mca
 9513 ?        S      0:00   \_ /home/reuti/mpitest/Mpitest --child
 2861 ?        Sl    10:47 /usr/sge/bin/lx24-x86/sge_execd
25434 ?        Sl     0:0
On 31.01.2012 at 21:25, Ralph Castain wrote:
>
> On Jan 31, 2012, at 12:58 PM, Reuti wrote:
>
>>
>> On 31.01.2012 at 20:38, Ralph Castain wrote:
>>
>>> Not sure I fully grok this thread, but will try to provide an answer.
>>>
>>> When
On 01.02.2012 at 15:38, Ralph Castain wrote:
> On Feb 1, 2012, at 3:49 AM, Reuti wrote:
>
>> On 31.01.2012 at 21:25, Ralph Castain wrote:
>>
>>> On Jan 31, 2012, at 12:58 PM, Reuti wrote:
>>
>> BTW: is there any default for a hostfile for Open MPI -
On 01.02.2012 at 17:16, Ralph Castain wrote:
> Could you add --display-allocation to your cmd line? This will tell us if it
> found/read the default hostfile, or if the problem is with the mapper.
Sure:
reuti@pc15370:~> mpiexec --display-allocation -np 4 .
re you have many cores per node and even use only one `qrsh
-inherit` per slave machine and then fork or use threads for the additional
processes, this setting is less meaningful and would need some new options in
the PE:
https://arc.liv.ac.uk/trac/SGE/ticket/197
-- Reuti
> 1. I'm still su
0:00 \_ /usr/sge/bin/lx24-x86/qrsh
>> -inherit -nostdin -V pc15381 orted -mca
>> 9513 ?        S      0:00 \_ /home/reuti/mpitest/Mpitest --child
>>
>> 2861 ?        Sl    10:47 /usr/sge/bin/lx24-x86/sge_execd
>> 25434 ?        Sl     0
the machinefile.
> 4. Based on "d", I thought that I could follow the approach in "a". That
> is, for experiment "e", I used mpiexec -np 1, but I also used -pe orte 5-5.
> I thought that this would make the multi-machine queue reserve only the 5
> slot
On 06.02.2012 at 22:28, Tom Bryan wrote:
> On 2/6/12 8:14 AM, "Reuti" wrote:
>
>>> If I need MPI_THREAD_MULTIPLE, and openmpi is compiled with thread support,
>>> it's not clear to me whether MPI::Init_Thread() and
>>> MPI::Inint_Thread(MPI::
* The MPI_Init_thread() function was called before MPI_INIT was invoked.
> *** This is disallowed by the MPI standard.
> *** Your MPI job will now abort.
Interesting error message, as that isn't actually disallowed.
-- Reuti
due to its age http://www.cs.usfca.edu/~peter/ppmpi/),
Parallel Programming in C with MPI and OpenMP by Michael Quinn.
-- Reuti
> best wishes to you & good luck
> yours alex
>
> alexalex43210
>
> From: Brad Benton
> Date: 2012-02-15 09:48
> To: announce
> Subject:
t in the new one (i.e. pointing to SSH
there)?
-- Reuti
> I cannot determine WHY the behavior is different from RHEL 5 to RHEL 6. In
> the former I'm using the openmpi 1.4.3 package, in the latter I'm using
> openmpi 1.5.3. Both are supposedly built to support the grideng
On 15.02.2012 at 22:59, Brian McNally wrote:
> Thanks for responding so quickly, Reuti!
>
> To be clear my RHEL 5 and RHEL 6 nodes are part of the same cluster. In the
> RHEL 5 case qrsh -inherit gets called via mpirun. In the RHEL 6 case
> /usr/bin/ssh gets called directly f
supposedly built to support the
>> gridengine ras.
>
> See the release notes for 1.5.4. The workaround I was given is
> plm = ^rshd
Aha, thx for reminding me of it - so it's still broken.
-- Reuti
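For reference, such a workaround can be put into the per-user MCA parameter file or given on the command line (a sketch, program name made up):
$ echo "plm = ^rshd" >> ~/.openmpi/mca-params.conf
$ mpirun --mca plm ^rshd -np 4 ./app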
> Does "the same parallel environment setup" mean mixing 1.4 and
essage as a singleton startup. This is not consistent, but I see the
intention.
Maybe it could be checked against MPI_ERR_LASTCODE: if it's lower or equal,
output the MPI error message as observed; otherwise treat it as an application
error unrelated to MPI and forward it without an error message.
CPUs which are in the
nodes. For example, code compiled on a machine with a Shanghai-64 CPU might
fail on older Opterons, unless you use compiler switches to target the least
common instruction set.
-- Reuti
> Many thanks
>
> Salvatore
he intended behavior of mpirun? It looks like -np is eating
-hostlist as a numeric argument? Shouldn't it complain that the argument for
-np is missing or not numeric?
-- Reuti
>
> On Feb 27, 2012, at 10:29 PM, Syed Ahsan Ali wrote:
>
>> The following co
than 1 core, it executes with the error:
>mpirun noticed that process rank 1 with PID 8260 on node
> tscco28017 exited on signal 4 (Illegal instruction).
Was the application and Open MPI compiled on one and the same machine, and is
the CPU type the same across the involved machines?
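A quick way to compare the CPUs on the involved machines (host names are placeholders):
$ for h in node01 node02; do ssh $h "grep -m1 'model name' /proc/cpuinfo"; done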
If you want to limit the number of available PEs in your setup for the user,
you could request a PE by a wildcard, and once a PE is selected SGE will stay
within this PE. Attaching each PE to only one queue this way avoids mixing
slots from different queues (orte1 PE =>
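A wildcard request could look like this (PE and script names are only examples):
$ qsub -pe "orte*" 8 job.sh    # SGE picks one matching PE and stays within it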
On 14.03.2012 at 17:44, Ralph Castain wrote:
> Hi Reuti
>
> I appreciate your help on this thread - I confess I'm puzzled by it. As you
> know, OMPI doesn't use SGE to launch the individual processes, nor does SGE
> even know they exist. All SGE is used for is t
On 14.03.2012 at 18:30, Joshua Baker-LePain wrote:
> On Wed, 14 Mar 2012 at 9:33am, Reuti wrote
>
>>> I can run as many threads as I like on a single system with no problems,
>>> even if those threads are running at different nice levels.
>>
>> How do
On 14.03.2012 at 23:48, Joshua Baker-LePain wrote:
> On Wed, 14 Mar 2012 at 6:31pm, Reuti wrote
>
>> I just tested with two different queues on two machines and a small mpihello
>> and it is working as expected.
>
> At this point the narrative is getting very conf
On 15.03.2012 at 05:22, Joshua Baker-LePain wrote:
> On Wed, 14 Mar 2012 at 5:50pm, Ralph Castain wrote
>
>> On Mar 14, 2012, at 5:44 PM, Reuti wrote:
>
>>> (I was just typing when Ralph's message came in: I can confirm this. To
>>> avoid it, it would
was a unique host, so it doesn't even check to see if there is
> duplication. Easy fix - can shoot it to you today.
But even with the fix the nice value will be the same for all processes forked
there. Either all get the nice value of the low-priority queue, or all get
that of the high-priority queue.
On 15.03.2012 at 15:50, Ralph Castain wrote:
>
> On Mar 15, 2012, at 8:46 AM, Reuti wrote:
>
>> On 15.03.2012 at 15:37, Ralph Castain wrote:
>>
>>> Just to be clear: I take it that the first entry is the host name, and the
>>> second is the
On 15.03.2012 at 18:14, Joshua Baker-LePain wrote:
> On Thu, 15 Mar 2012 at 1:53pm, Reuti wrote
>
>> PS: In your example you also had the case 2 slots in the low priority queue,
>> what is the actual setup in your cluster?
>
> Our actual setup is:
>
> o lab.q, s
Hi,
On 27.03.2012 at 23:46, Hameed Alzahrani wrote:
> When I run any parallel job I get the answer just from the submitting node
What do you mean by submitting node? You use a queuing system - which one?
-- Reuti
> even when I tried to benchmark the cluster using LINPACK but it loo
u have a shared home directory with the applications?
-- Reuti
> when I ran mpirun from a machine and checked the memory status for the three
> machines that I have, it appears that the memory usage increased just on that
> same machine.
>
> Regards,
>
> > From: re...@staff.
ust used). I'm not aware that it can be
specified on the command line with different values for each machine:
host1 slots=4
host2 slots=2
host3 slots=2
-- Reuti
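Usage sketch with such a hostfile (file and program names are made up):
$ cat myhosts
host1 slots=4
host2 slots=2
host3 slots=2
$ mpiexec --hostfile myhosts -np 8 ./app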
>
> Regards,
>
> > From: re...@staff.uni-marburg.de
> > Date: Wed, 28 Mar 2012 16:42:21 +0200
> > To:
hostfile foobar
-- Reuti
> Regards,
>
> > From: re...@staff.uni-marburg.de
> > Date: Wed, 28 Mar 2012 17:21:39 +0200
> > To: us...@open-mpi.org
> > Subject: Re: [OMPI users] Can not run a parallel job on all the nodes in
> > thecluster
> >
lso work to create a static version of Open MPI by --enable-static
--disable-shared and recompile the application.
-- Reuti
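A sketch of such a build (prefix and application name are placeholders):
$ ./configure --prefix=$HOME/openmpi-static --enable-static --disable-shared
$ make -j4 install
$ $HOME/openmpi-static/bin/mpicc -o app app.c    # now links libmpi statically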
>
>
> On Mon, Apr 2, 2012 at 2:52 PM, Rayson Ho wrote:
> On Sun, Apr 1, 2012 at 11:27 PM, Rohan Deshpande wrote:
> > error while loading shar
wise:
"All nodes which are allocated for this job are already filled."
What does 1.5 offer in detail in this area?
-- Reuti
> Sent from my iPad
>
> On Apr 2, 2012, at 6:53 AM, Rémi Palancher wrote:
>
>> Hi there,
>>
>> I'm encountering a problem wh
used.
You configured Open MPI to support SGE tight integration and used a PE for
submitting the job? Can you please post the definition of the PE.
What was the allocation you saw in SGE's `qstat -g t` for the job?
-- Reuti
> If you need further information, please let me know.
>
>
On 03.04.2012 at 16:59, Eloi Gaudry wrote:
> Hi Reuti,
>
> I configured OpenMPI to support SGE tight integration and used the defined
> below PE for submitting the job:
>
> [16:36][eg@moe:~]$ qconf -sp fill_up
> pe_name            fill_up
> slots              80
On 03.04.2012 at 17:24, Eloi Gaudry wrote:
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Reuti
> Sent: mardi 3 avril 2012 17:13
> To: Open MPI Users
> Subject: Re: [OMPI users] sge tight intregration leads
ined for `qstat` to reformat the output
> > (or a ~/.sge_qstat defined)?
> >
> > [eg: ] sorry, i forgot about sge_qstat being defined. As I don't have any
> > slot available right now, I cannot relaunch the job to get the output
> > updated.
> Reuti, here i
On 05.04.2012 at 18:58, Eloi Gaudry wrote:
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Reuti
> Sent: jeudi 5 avril 2012 18:41
> To: Open MPI Users
> Subject: Re: [OMPI users] sge tight intregration leads
I'll try to do this tomorrow, as soon as some slots become free.
> > Thanks for your feedback Reuti, I appreciate.
>
> hi reuti, here is the information related to another run that is failing in
> the same way:
>
> qstat -g t:
>
> ---
On 10.04.2012 at 16:55, Eloi Gaudry wrote:
> Hi Ralf,
>
> I haven't tried any of the 1.5 series yet (we have chosen not to use the
> features releases) but if this is mandatory for you to work on this topic, I
> will.
>
> This might be of interest to Reuti and you
On 11.04.2012 at 04:26, Ralph Castain wrote:
> Hi Reuti
>
> Can you replicate this problem on your machine? Can you try it with 1.5?
No. It's also working fine in 1.5.5 in some tests. I even forced an uneven
distribution by limiting the slots setting for some machine
On 20.04.2012 at 15:04, Eloi Gaudry wrote:
>
> Hi Ralph, Reuti,
>
> I've just observed the same issue without specifying -np.
> Please find attached the ps -elfax output from the computing nodes and some
> sge related information.
What about these error messages:
co
mstances". How often does one link an Intel compiled program with
a library form which you only use the *.so files which are already included in
the distribution without recompiling the whole stuff? E.g. when I compile a
graphical application I don't recompile the X11 libraries, which were
On 12.05.2012 at 12:18, Rohan Deshpande wrote:
> Hi,
>
> Can anyone point me to good resources and books to understand the detailed
> architecture and working of MPI.
>
> I would like to know all the minute details.
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3945
http://m
to
MPI_Comm_f2c in the source. Do you want to compile the ScaLAPACK or plain MPI
version?
-- Reuti
> the compiled program with mpirun, I got the following information at
> very beginning:
>
> *** The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
> *** This i
more than one of the N processes?
-- Reuti
tem and jobs can nicely be
removed.
Is there any reason to bypass this mechanism?
-- Reuti
> and OpenMPI and MPICH2, for one particular machine that I have been toying
> around with lately ...
>
> Dominik
>
> #!/bin/bash
>
> PBS
> #PBS -N f
unpc1.informatik.hs-fulda.de to: tyr
> Unable to connect to the peer 127.0.0.1 on port 516: Connection refused
Some distributions also give the loopback interface the name of the host. Is
there an additional line:
127.0.0.1 tyr.informatik.hs-fulda.de
in /etc/hosts besides the localhost and i
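As an illustration, the entries should rather look like this (the address 192.168.1.10 is made up):
127.0.0.1      localhost
192.168.1.10   tyr.informatik.hs-fulda.de   tyr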
On 18.07.2012 at 07:17, Hongsheng Zhao wrote:
> After compiling openmpi using intel parallel studio, I've seen the following
> bashrc settings by others:
>
>
> source /home/zhanqgp/intel/composerxe/bin/compilervars.sh intel64
> source /home/zhanggp/intel/composerxe/mkl/bin/mklv
On 23.07.2012 at 10:02, 陈松 wrote:
> How can I create ckpt files regularly? I mean, do a checkpoint every 100
> seconds. Is there any option to do this? Or do I have to write a script myself?
Yes, or use a queuing system which supports creating a checkpoint at fixed
time intervals.
--
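A crude sketch of such a script, assuming Open MPI was built with checkpoint/restart (BLCR) support and a made-up program name:
$ mpirun -np 8 ./solver &
$ MPIRUN_PID=$!
$ while kill -0 $MPIRUN_PID 2>/dev/null; do
>   sleep 100
>   ompi-checkpoint $MPIRUN_PID      # writes a new global snapshot each round
> done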
e "sge" environment
> as :
>
> qsub -pe mpich 101 jobfile.job
>
> where jobfile contains the command
>
> mpirun -np 101 -nolocal ./executable
I would leave -nolocal out here.
$ qsub -l
"h=compute-5-[1-9]|compute-5-1[0-9]|comput
On 26.07.2012 at 23:48, Reuti wrote:
> On 26.07.2012 at 23:33, Erik Nelson wrote:
>
>> I have a purely parallel job that runs ~100 processes. Each process has
>> ~identical
>> overhead so the speed of the program is dominated by the slowest processor.
>>
On 26.07.2012 at 23:58, Erik Nelson wrote:
> Reuti,
>
> Thank you. Our queue is backed up, so it will take a little while before I
> can try this.
>
> I assume that by specifying the nodes this way, I don't need (and it would
> confuse
> the system) to add -
ing system around to manage the
resources.
-- Reuti
> I believe SGE doesn't do that - and so the allocation won't include the
> submit host, in which case you don't need -nolocal.
>
>
> On Jul 26, 2012, at 5:58 PM, Erik Nelson wrote:
>
>> I was under the
based authentication if this is feasible
in your environment. For the admin it's a one-time setup, and users don't have
to think about it any longer.
-- Reuti
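A rough outline of a hostbased authentication setup (exact file locations depend on the distribution, so treat this only as a sketch):
# on every node, in ssh_config:
HostbasedAuthentication yes
EnableSSHKeysign yes
# on every node, in sshd_config:
HostbasedAuthentication yes
# plus listing the cluster hosts in shosts.equiv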
> Can we assume that your home directory is shared across all cluster nodes?
> That means when you log into a cluster node the
phrase, but not
between the nodes.
-- Reuti
> On Thu, Aug 2, 2012 at 1:09 PM, John Hearns wrote:
> On 02/08/2012, Syed Ahsan Ali wrote:
> > Yes the issue has been diagnosed. I can ssh them but they are asking for
> > passwords
>
> You need to configure 'passwordles
lume: 1
> Input archive name or "." to quit pax.
> Archive name >
what's in config.log for this section?
-- Reuti
On 24.08.2012 at 00:13, Andreas Schäfer wrote:
> On 00:05 Fri 24 Aug , Reuti wrote:
>> On 23.08.2012 at 23:28, Andreas Schäfer wrote:
>>
>>> ...
>>> checking for style of include used by make... GNU
There is only one file where "return { ... };" is used.
--disable-vt
seems to fix it.
-- Reuti
On 28.08.2012 at 14:56, Tim Prince wrote:
> On 8/28/2012 5:11 AM, 清风 wrote:
>>
>>
>>
>> -- Original Message --
>> *From:* "29
line or put the complete argument in
quotation marks.
As it's a commercial application, I would suggest asking the vendor and/or
using their forum/knowledge base, especially as they suggest using their own
wrapper `oempirun`.
-- Reuti
>
> -
THs accordingly (at least: I hope so).
-- Reuti
Hi Ralph,
On 03.09.2012 at 23:34, Ralph Castain wrote:
>
> On Sep 3, 2012, at 2:12 PM, Reuti wrote:
>
>> Hi all,
>>
>> I just compiled Open MPI 1.6.1 and before digging any deeper: does anyone
>> else notice, that the command:
>>
>> $ mp
TFILE:
/var/spool/sge/pc15370/active_jobs/4640.1/pe_hostfile
[pc15370:31052] ras:gridengine: pc15370: PE_HOSTFILE shows slots=2
[pc15370:31052] ras:gridengine: pc15381: PE_HOSTFILE shows slots=2
Total: 4
Universe: 4
Hello World from Rank 0.
Hello World from Rank 1.
Hello World from Rank
pc15370: PE_HOSTFILE shows slots=1
[pc15370:13630] ras:gridengine: pc15381: PE_HOSTFILE shows slots=2
[pc15370:13630] ras:gridengine: pc15370: PE_HOSTFILE increased to slots=2
And the allocation is correct. I'll continue to investigate what is different
today.
-- Reuti
> Thx!
> * soft memlock unlimited
> * hard memlock unlimited
>
> But why is OpenMPI still throwing a warning message wrt registered memory?
These are not honored when a job is started by SGE; instead, definitions inside
SGE are used: see `man sge_config`, paragraph H_MEMORYLOCKED.
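A sketch of the corresponding setting (global or per-host configuration):
$ qconf -mconf
execd_params   H_MEMORYLOCKED=unlimited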
execd
processes as a result. Therefore I use unthreaded versions of
>> MKL/ACML/ATLAS usually.
> Thanks for that hint, as a workaround I could check all scripts for
> MKL_NUM_THREADS and set it to 1 by JSV?
Yes.
But you could also try the opposite, i.e. OMP_NUM_THREADS=1 and
MKL_NUM_THREADS=$NSLOTS.
Which is better depends on the application.
-- Reuti
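In a job script that could look like this (the PE name and program are made up):
#!/bin/sh
#$ -pe smp 8
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=$NSLOTS    # NSLOTS is set by SGE to the granted slot count
./my_mkl_app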
> Udo
>>
>> -- Reuti
>>
>>
>
at do you want to achieve in detail - just shorten the `./configure` command
line? You could also add it after Open MPI's compilation in the text file:
${prefix}/share/openmpi/mpicc-wrapper-data.txt
-- Reuti
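For illustration, the relevant keys in that file look like this; flags added there are applied on every mpicc invocation (the -O2 is just an example):
$ grep _flags ${prefix}/share/openmpi/mpicc-wrapper-data.txt
preprocessor_flags=
compiler_flags=-O2
linker_flags=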
On 11.09.2012 at 20:22, Jed Brown wrote:
> I want to avoid the user
With "user" you mean someone compiling Open MPI?
-- Reuti
> having to figure that out. MPICH2 sets RPATH by default when installed to
> nonstandard locations and I think that is not a bad choice. Usually
$ PATH=PATH=/usr/lib/openmpi/bin/:$PATH
Is this a typo - double PATH?
> $ LD_LIBRARY_PATH=/usr/lib/openmpi/lib/
It needs to be exported, so that child processes can use it too.
-- Reuti
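For example (paths as in the original post):
$ export PATH=/usr/lib/openmpi/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/lib/openmpi/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
$ mpiexec -n 4 ./a.out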
> Then:
>
> $ mpif90 ex1.f95
> $ mpiexec -n 4 ./a.out
> ./a.out: error while loading share
d there is the command `aprun` instead of `mpiexec` in the
jobscript on a Cray. Maybe the allocation needs to be granted first.
-- Reuti
> I would be very happy if anybody has an idea what I could have missed during
> installation/runtime.
>
> Thanks in advance.
>
> Best reg
leave it for the time being.
What about adjusting it in /usr/share/openmpi/mpicc-wrapper-data.txt and the like?
-- Reuti
> Best regards, Jonas Juselius
ng on how many nodes at one time? This requirement
could be a reason to start using a queuing system, where you can remove jobs
individually and also serialize your workflow. In fact, we also use GridEngine
locally on workstations for this purpose.
-- Reuti
On 24.10.2012 at 11:33, Nicolas Deladerriere wrote:
> Reuti,
>
> Thanks for your comments,
>
> In our case, we are currently running different mpirun commands on
> clusters sharing the same frontend. Basically we use a wrapper to run
> the mpirun command and to run an
ocessors (the clock frequency is the same
> though,
> and tests for comparison were done on parallel runs using 8 cores), improves
> by nearly 70%
> when using the proprietary HP MPI libraries.
NB: They made their way to Platform Computing and then to IBM.
-- Reuti
> Kind regard
info
on the command line? As MPI is a standard which is not targeting any specific
platform, most of the tutorials should apply here too, as long as they don't
access any OS specific functions.
For an Objective-C application:
$ mpicc -ObjC -framework Foundation -framework CoreLocation
-- Reuti
be compiled by a plain gcc, but then you have to take
care of all the necessary libraries on your own.
-- Reuti
ifort: command line remark #10010: option '-pthread' is deprecated and
>> will be removed in a future release. See '-help deprecated'.
And here they list:
-pthread use -reentrancy threaded
-- Reuti
>> Is -pthread really needed? Is there a configure o
On 10.11.2012 at 00:55, shiny knight wrote:
> Thanks Reuti for the sample.
>
> I have the latest Xcode available on the Apple dev center; Xcode 4 probably?
As I don't intend to develop for 10.7 or above I stayed with 3.2.6. In the
beginning Xcode wasn't free and so I didn