Hi Ashley,
Yes, you can have this done automatically. Just use the
'--enable-mpirun-prefix-by-default' option to configure.
I'm actually a bit surprised this is not in the FAQ. I'll have to add it.
Hope this helps,
Tim
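(For illustration, a typical configure invocation would look something like
./configure --prefix=/opt/openmpi --enable-mpirun-prefix-by-default
where the install prefix is just a placeholder.)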
Ashley Pittman wrote:
Hello,
I work for medium si
Thanks for the report of the broken link. It is now fixed. I have also
added a paragraph about --enable-mpirun-prefix-by-default to
http://www.open-mpi.org/faq/?category=running#mpirun-prefix
Tim
Ashley Pittman wrote:
That looks like just what I need, thank you for the quick response.
The
-scheduling
I have copied Ralph on this mail to see if he has a better response.
Tim
Werner Augustin wrote:
Hi,
At our site here at the University of Karlsruhe we are running two
large clusters with SLURM and HP-MPI. For our new cluster we want to
keep SLURM and switch to OpenMPI. While testing I
Hi Joao,
Thanks for the bug report! You do not have to call free/disconnect
before MPI_Finalize. If you do not, they will be called automatically.
Unfortunately, there was a bug in the code that did the free/disconnect
automatically. This is fixed in r18079.
Thanks again,
Tim
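(For reference, a minimal sketch of the explicit cleanup being discussed; the
spawned command and process count are placeholders:)

  MPI_Comm children;
  /* spawn two copies of a child program ("./child" is a placeholder) */
  MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                 MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);
  /* ... exchange messages with the children over the intercommunicator ... */
  MPI_Comm_disconnect(&children);  /* optional: done automatically at MPI_Finalize */
  MPI_Finalize();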
Joao
Fixed F90 interface for MPI_CART_CREATE. See ticket #1208.
Thanks to Michal Charemza for reporting the problem.
- Fixed some C++ compiler warnings. See ticket #1203.
- Fixed formatting of the orterun man page. See ticket #1202.
Thanks to Peter Breitenlohner for the patch.
--
Tim Mattox
Open S
Hi Graham,
Have you tried running without the btl_tcp_if_include line in the .conf
file? Open MPI is usually smart enough to auto detect and choose the
correct interfaces.
Hope this helps,
Tim
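(For reference, the line being discussed in the MCA parameter file, typically
openmpi-mca-params.conf, would look something like
btl_tcp_if_include = eth0,eth1
where the interface names here are only placeholders.)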
Graham Jenkins wrote:
We're moving from using a single (eth0) interface on our execute nod
Open MPI ships with a full set of man pages for all the MPI functions;
you might want to start with those.
Tim
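(The dynamic-process calls those man pages cover include MPI_Open_port,
MPI_Publish_name, and MPI_Comm_accept on the publisher side, and
MPI_Lookup_name and MPI_Comm_connect on the subscriber side. A rough C sketch;
the service name is a placeholder and both sides are shown together for brevity:)

  char port[MPI_MAX_PORT_NAME];
  MPI_Comm peer;

  /* publisher: open a port, register it under a name, wait for a subscriber */
  MPI_Open_port(MPI_INFO_NULL, port);
  MPI_Publish_name("my_service", MPI_INFO_NULL, port);
  MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &peer);

  /* subscriber: look up the name and connect */
  MPI_Lookup_name("my_service", MPI_INFO_NULL, port);
  MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &peer);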
Alberto Giannetti wrote:
I am looking to use MPI in a publisher/subscriber context. Haven't
found much relevant information online.
Basically I would need to deal with dynami
this issue doesn't hurt me because I'm not running apps for
> folks yet, but I can see where it would be hard to deal with down the road.
>
> Any guidance out there?
>
> Albert
>
f svn checkin e-mails will be queued during that time or
if they will be lost. So, if you have something important you are checking in
to svn, you might avoid doing so during that hour today.
--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
I'
My questions are:
> Is it possible to send an object with Open MPI?
> If yes, could you send me a source code example or a reference?
>
> Thank you.
>
> Carlos
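(The archived reply is not shown in this excerpt. For reference, a plain,
pointer-free struct can simply be sent as raw bytes, while anything holding
pointers needs explicit serialization or an MPI derived datatype. A hedged C
sketch with made-up field names, assuming rank was obtained from MPI_Comm_rank:)

  typedef struct { int id; double values[4]; } item_t;  /* plain data, no pointers */
  item_t obj = { 42, { 1.0, 2.0, 3.0, 4.0 } };

  if (rank == 0)
      MPI_Send(&obj, sizeof(obj), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
  else if (rank == 1)
      MPI_Recv(&obj, sizeof(obj), MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

(MPI_BYTE assumes a homogeneous cluster; MPI_Type_create_struct is the portable alternative.)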
--
> GPG fingerprint: D54D 1AEE B11C CE9B A02B C5DD 526F 01E8 564E E4B6
>
> Engineering consulting with open source tools
> http://www.opennovation.com/
>
--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
I'm a bright... http://www.the-brights.net/
chmarks that don't actually
>>> > touch their receive buffers at all...
>>> >
>>> > -Ron
>>> >
4) == 0) print*,"loop ",ii,izproc
>>
>> call MPI_ALLTOALLW(phi,zsendcounts,zdispls,zsendtypes,
>> & phi2,zrecvcounts,zdispls,zrecvtypes,
>> & MPI_COMM_WORLD,mpierror)
>>
>>enddo
>>ret
his problem remained.
>
> Luckily, updating the source to SVN revision 19265 finally solved
> this checkpointing issue. Maybe the problem shows up again in later
> versions...
>
>
> Best,
> Matthias
on
some systems (e.g., OS X). See ticket #1274.
- Fix a deadlock in inter-communicator scatter/gather operations.
Thanks to Martin Audet for the bug report. See ticket #1268.
--
Tim Mattox, Ph.D.
Open Systems Lab
Indiana University
process rank order. See ticket #1529.
- Fix a regression introduced in 1.2.6 for the IBM eHCA. See ticket #1526.
--
Tim Mattox, Ph.D.
Open Systems Lab
Indiana University
oing their
work).
I'd appreciate any suggestions as to what I might be doing wrong with this
that is causing OpenMPI to hold the pipes open.
Thanks,
Tim M.
will
be migrated in?
Thanks again very much for your help.
Regards,
Tim
On Wed, Aug 26, 2009 at 6:07 PM, Ralph Castain wrote:
> This is a known issue - I'll test to see if it has been fixed for the
> upcoming 1.3.4. We know the problem does not exist in our devel trunk, but I
> don
amjad ali wrote:
Hi,
Suppose we run a parallel MPI code with 64 processes on a cluster, say
of 16 nodes. Each cluster node has a multicore CPU, say 4 cores per node.
Now all 64 cores on the cluster are running a process. The program is SPMD,
meaning all processes have the same workload.
Now if we
amjad ali wrote:
Hi,
Thanks, T. Prince. Your statement that
"I'll just mention that we are well into the era of 3 levels of
programming parallelization: vectorization, threaded parallel (e.g.
OpenMP), and process parallel (e.g. MPI)" is really valuable new
learning for me. Now I understand this better.
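(For concreteness, a minimal hedged sketch of those three levels in one C
program: MPI between processes, OpenMP threads inside each process, and an
inner loop the compiler can vectorize. The build command is assumed to be
something like mpicc -fopenmp.)

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided, rank, i;
      /* FUNNELED: only the master thread makes MPI calls */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      double sum = 0.0;
      /* OpenMP threads within the process; the loop body is vectorizable */
      #pragma omp parallel for reduction(+:sum)
      for (i = 0; i < 1000000; i++)
          sum += i * 1.0e-6;

      double total = 0.0;
      /* MPI combines the per-process partial sums across the cluster */
      MPI_Reduce(&sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("total = %f\n", total);
      MPI_Finalize();
      return 0;
  }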
Jeff Squyres wrote:
On Dec 3, 2009, at 3:31 AM, Katz, Jacob wrote:
I wonder if there is a BKM (efficient and portable) to mimic a timeout with a
call to MPI_Wait, i.e. to interrupt it once a given time period has passed if
it hasn't returned by then.
Pardon my ignorance, but what does B
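(The rest of that exchange is cut off here. For reference, one common portable
way to approximate such a timeout is to poll the request with MPI_Test against
a wall-clock limit instead of blocking in MPI_Wait. A rough C sketch; the
helper name is made up:)

  /* Returns 1 if the request completed, 0 if `seconds` elapsed first. */
  int wait_with_timeout(MPI_Request *req, double seconds)
  {
      double start = MPI_Wtime();
      int done = 0;
      while (!done) {
          MPI_Test(req, &done, MPI_STATUS_IGNORE);
          if (!done && MPI_Wtime() - start > seconds)
              return 0;  /* timed out; the request is still pending */
      }
      return 1;
  }

(This busy-polls; inserting a short sleep in the loop trades latency for CPU time.)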
Gus Correa wrote:
Hi Matthew
5) Are you setting processor affinity on mpiexec?
mpiexec -mca mpi_paffinity_alone 1 -np ... bla, bla ...
Good point. This option optimizes processor affinity on the assumption
that no other jobs are running. If you ran 2 MPI jobs with this option,
they wou
All Rights Reserved.
Copyright 2000-2009, STMicroelectronics, Inc. All Rights Reserved.
I'm not sure what's wrong here as other people have reported being able to
build OpenMPI with PGI 9. Does anyone have any ideas?
Thanks,
Tim Miller
for all of the suggestions!
Tim
On Thu, Jan 7, 2010 at 8:11 AM, Jeff Squyres wrote:
> Here's the comment I put in OMPI's configure script with regards to the
> offsetof problem:
>
> # This macro checks to ensure that the compiler properly supports
> # offsetof(). The PGI
be done in your user settings so it
doesn't affect anyone else.
--
Tim Prince
computation and
so on
Maybe I don't understand your question. Are you saying that none of the
references found by search terms such as "hybrid mpi openmp" are useful
for you? They cover so many topics, you would have to be much more
specific about which topics you want in more d
r SSE control
registers, 32- vs. 64-bit compilation. SSE2 is the default for 64-bit
compilation, but compilers vary on defaults for 32-bit. If your program
depends on x87 extra precision of doubles, or efficient mixing of double
and long double, 387 code may be a better choice, but limits your
efficiency.
--
Tim Prince
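(For reference, the choice alluded to above maps to compiler options along the
lines of gcc's -mfpmath=387 versus -mfpmath=sse -msse2; exact flag spellings
vary by compiler and version.)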
nitialized
variables or range errors. If you are lucky, turning on the associated
gcc diagnostics and run-time checks may help discover them.
--
Tim Prince
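(Concretely, the kind of options meant here are along the lines of gcc's
-Wall -Wuninitialized and gfortran's -fbounds-check, though exact names depend
on the compiler version.)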
ers a pre-built version compatible with Intel
Fortran. Intel also offers MPI, derived originally from Argonne
MPICH2, for both Windows and linux.
I can't imagine OpenMPI libraries being added to the Microsoft HPC
environment; maybe that's not exactly what the top poster meant.
--
Tim Prince
process or thread per core. With careful
affinity, performance when using 1 logical per core normally is
practically the same as with HT disabled.
--
Tim Prince
m
in RHEL5/CentOS-5 it's easy to switch it on and off on the fly.
___
That's the same as Jeff explained. It requires root privilege, and
affects all users.
--
Tim Prince
down. I'm not
counting on that magically making the bug go away. ifort is not
particularly aggressive about unrolling loops which contain MPI calls,
but I agree that must be considered.
--
Tim Prince
I don't recall Walt's cases taking all of 5 seconds to start. More annoying is
the hang after completion.
Sent via the ASUS PadFone X mini, an AT&T 4G LTE smartphone
Original Message
From:Ralph Castain
Sent:Fri, 29 May 2015 15:35:15 -0400
To:Open MPI Users
Subject:Re: [OMPI
membership of the two groups. If there's no such functionality,
would it be a difficult thing to hack in (I'd be glad to give it a try
myself, but I'm not that familiar with the codebase, so a couple of
pointers would be helpful, or a note saying I'm crazy for trying).
Thanks,
Tim
about type encoding compatibility. Lack of instructions for
openmpi probably means something.
--
Tim Prince
is still interested in testing this
and, if so, try it out.
Thanks,
Tim
On Tue, Jun 16, 2015 at 7:15 PM, Jeff Squyres (jsquyres) wrote:
> Do you have different IB subnet IDs? That would be the only way for Open
> MPI to tell the two IB subnets apart.
>
>
>
> > On Jun 16,
not start with the FAQ?
https://www.open-mpi.org/faq/?category=openfabrics
Don't go by what the advertisements of other MPI implementations said
based on past defaults.
--
Tim Prince
ent executables with different numbers of
>>> ranks/nodes, but they all seem to run into problems with PMI_KVS_Put.
>>>
>>> Any ideas what could be going wrong?
>>>
>>> Thanks for any help,
>>> Nick
--
Tim Mattox, Ph.D. - tmat...@gmail.com
omp.lang.fortran/coarray$20fortran/comp.lang.fortran/P5si9Fj1yIY/ptjM8DMUUzUJ
It's a little difficult to use if you have another MPI installed, as
Windows MPI (like the MPI which comes with Linux distros) doesn't observe
normal methods for keeping distinct paths.
I doubt there is a separate version of OpenMPI docs specific to Windows.
--
Tim Prince
LIB line could be
>
> LIB = -static -L/opt/intel/composer_xe_2013_sp1/mkl/lib/intel64
> -lmkl_blas95_lp64 -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_core
> -lmkl_sequential -dynamic
>
No, refer to the on-line advisor at
https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
--
Tim Prince
/ error messages
Do you have any suggestions to what might have gone wrong on this install?
I'm not sure if this thread is still alive, so if you need a refresh on the
situation/any more info, please let me know.
Kind regards,
Tim
On 24 May 2017 at 09:12, Tim Jim wrote:
> Thanks
Dear Gilles,
Thanks for the mail - where should I set export nvml_enable=no? Should I
reconfigure with default cuda support or keep the --without-cuda flag?
Kind regards,
Tim
On 21 September 2017 at 15:22, Gilles Gouaillardet
wrote:
> Tim,
>
>
> i am not familiar with CUDA, bu
Hi,
I tried as you suggested: export nvml_enable=no, then reconfigured and ran
make all install again, but mpicc is still producing the same error. What
should I try next?
Many thanks,
Tim
On 21 September 2017 at 16:12, Gilles Gouaillardet
wrote:
> Tim,
>
>
> do that in your
or in the copy/paste ?
>
> The mpicc command should be
> mpicc /opt/openmpi/openmpi-3.0.0_src/examples/hello_c.c
>
> Cheers,
>
> Gilles
>
> On Fri, Sep 22, 2017 at 3:33 PM, Tim Jim wrote:
>
>> Thanks for the thoughts and comments. Here is the setup informati
ng will be published under that license).
If someone is available for off-line discussion (to minimize unnecessary
traffic to the list), I'd be more than willing to summarize the
conversation and contribute it to the online documentation.
Thank you,
tim
--
All we need is one more Mickey
ints. The OS does not have
paging or segmentation, so fragmentation can be an issue. Performance
is very good.
Is it possible to use a shared memory approach and run an AMP setup,
with hwloc? Would there be any benefit to doing so instead of the
hardwiring approach you mention?
tim
Br
to a different email account and sent the post.
Gregory (tim) Kelly wrote:
Hello Everyone,
I'm inquiring to find someone that can answer some multi-part questions
about hwloc, OpenMPI and an alternative OS and toolchain. I have a
project as part of my PhD work, and it's not a simple, on
to an exokernel running an image of the OS on each CPU
node:
http://www.barrelfish.org/
This appears to have a robust shared memory approach. I'm still
digesting the details, but it looks to solve many of the problems I am
looking at.
Thanks again for the discussions!
tim
Jeff Sq
Hi Steffen,
I'm not sure if this will help you (I'm by far no expert), but the mailing
list pointed me to using:
mpirun --use-hwthread-cpus
to solve something similar.
Kind regards,
Tim
On Tue, 16 Apr 2019 at 19:01, Steffen Christgau
wrote:
> Hi everyone,
>
> on m
np_receiver: Status
> Update.
> [grid-demo-1.cit.tu-berlin.de:14252] Running - Global
> Snapshot Reference: (null)
> ---
>
> I want to underline that ompi-checkpoint is not hanging each
> time I execute it whi
---
>>>
>>> --
>>> [0,1,1]: uDAPL on host n02 was unable to find any NICs.
>>> Another transport will be used instead, although this may result in
>>> lower performance.
>>>
>>> -------
) were unloaded after the MPI_Comm_spawn.
>
> Does anyone know what this is?
>
> Heitor Florido
>
>
--
Tim Mattox
--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
I'm a bright... http://www.the-brights.net/
agio Lucini
> Department of Physics, Swansea University
> Singleton Park, SA2 8PP Swansea (UK)
> Tel. +44 (0)1792 602284
>
--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
I'm a bright... http://www.the-brights.net/
should give
thanks to George for his efforts in tracking down the problem
and finding a solution.
-- Tim Mattox, the v1.3 gatekeeper
On Mon, Jan 12, 2009 at 12:46 PM, Justin wrote:
> Hi, has this deadlock been fixed in the 1.3 source yet?
>
> Thanks,
>
> Justin
>
>
> Jeff Squ
The Open MPI Team, representing a consortium of research, academic,
and industry partners, is pleased to announce the release of Open MPI
version 1.3. This release contains many bug fixes, feature
enhancements, and performance improvements over the v1.2 series,
including (but not limited to):
*
ody help me please?
>
> Bernard
>
>
>
--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...
ticket #1580.
--
Tim Mattox, Ph.D.
Open Systems Lab
Indiana University
Error 1
make[1]: *** [install-recursive] Error 1
make: *** [install-recursive] Error 1
--
I did not invoke any options for configure. Any suggestions?
Thanks,
Tim
://www.open-mpi.org/community/help/
Thanks,
Tim
On Oct 19, 2006, at 2:48 PM, Audet, Martin wrote:
Hi,
When I tried to install OpenMPI on the front node of a cluster
using OpenPBS batch system (e.g. --with-tm=/usr/open-pbs argument
to configure), it didn't work and I got the error me
s, as these are used for
our internal administrative messaging and we currently require it to be
there.
Thanks,
Tim Prins
On Tuesday 21 November 2006 07:49 pm, Adam Moody wrote:
> Hello,
> We have some clusters which consist of a large pool of 8-way nodes
> connected via ethernet. On th
PI version 1.2b2. This problem occurs on both
x86_64 and Intel i386 and it occurs for both Portland Group compilers
and for GCC/G95.
Cheers,
Tim Campbell
Naval Research Laboratory
test_ompi.f.gz
Description: GNU Zip compressed data
Thanks. I'll try it out when the appropriate revision shows up in
the tar list.
~Tim
On Jan 11, 2007, at 2:48 AM, George Bosilca wrote:
Tim,
Thanks for the bug report. I just committed a patch in our development
version (revision 13079). It will go into the 1.2b2 soon, after some
soak
ing new (as
compared to 1.2b2) when running configure.
Thanks,
Tim Campbell
Naval Research Laboratory
.
--
Tim Campbell
Naval Research Laboratory
How does one choose between rsh or ssh for starting orted?
Where do I look in the "documentation" to find this information?
Thanks,
~Tim
Thanks!
~Tim
On Jan 24, 2007, at 9:42 AM, Jeff Squyres wrote:
On Jan 24, 2007, at 10:27 AM, Tim Campbell wrote:
How does one choose between rsh or ssh for starting orted?
Where do I look in the "documentation" to find this information?
The best documentation that we have is
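(For reference, and hedged since the parameter name has changed across
releases: in this era the rsh launcher's agent could be selected with an MCA
parameter along the lines of
mpirun -mca pls_rsh_agent ssh ...
with the component picking rsh or ssh based on that setting.)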
aries work with the IBM compilers.
Hope this helps,
Tim
On Feb 20, 2007, at 12:24 PM, Arif Ali wrote:
Hi list,
I have tried a few ways to compile OpenMPI using XLX/XLF compilers
but the I keep getting the same error (detailed below). I was
wandering if anyone has had any problems or su
ables have any impact. See the end of the output
from ./configure --help for all the environment variables.
My suggestion would be to remove the build tree, and start over
again. It is possible that there is something laying around that is
screwing up the build.
Tim
On Feb 20, 2007, at 2:
I have tried to reproduce this but cannot. I have been able to run your test
program to over 100 spawns. So I can track this further, please send the
output of ompi_info.
Thanks,
Tim
On Tuesday 27 February 2007 10:15 am, rozzen.vinc...@fr.thalesgroup.com wrote:
> Do you know if there i
Actually, I have also tried with the same version you are using and
cannot reproduce the behavior. Can you get a backtrace from the
segmentation fault?
Also, as Ralph suggested, you might want to upgrade and see if the
problem persists.
Tim
On Mar 1, 2007, at 8:52 AM, Ralph Castain
500 times). Have you been able to try a more
recent version of Open MPI? What kind of system is it? How many nodes
are you running on?
Tim
On Mar 5, 2007, at 1:21 PM, rozzen.vinc...@fr.thalesgroup.com wrote:
Maybe the problem comes from the configuration options.
The configuration options
Never mind, I was just able to replicate it. I'll look into it.
Tim
On Mar 5, 2007, at 4:26 PM, Tim Prins wrote:
That is possible. Threading support is VERY lightly tested, but I
doubt it is the problem since it always fails after 31 spawns.
Again, I have tried with these configure op
problem?
I am using OpenMPI version: Open MPI: 1.1 Open MPI SVN revision: r10477
Thank you in advance
Michael.
--
Tim Mattox - http://homepage.mac.com
suggest trying out 1.2 and seeing if it works for you.
Hope this helps,
Tim
On Mar 17, 2007, at 9:58 AM, Bala wrote:
Hi All,
we have installed a 16-node Intel x86_64
dual-CPU, dual-core cluster (blade servers)
with OFED-1.1, which installs Open MPI as well.
We are able to run some sample
David,
Have you tried something like
mpirun -np 1 --host talisker4 hostname
If that hangs, try adding '--debug-daemons' to the command line and
see if the output from that helps. If not, please send the output to
the list.
Thanks,
Tim
On Mar 19, 2007, at 1:59 AM, David B
Well that's not a good thing. I have filed a bug about this (https://
svn.open-mpi.org/trac/ompi/ticket/954) and will try to look into it
soon, but don't know when it will get fixed.
Thanks for bringing this to our attention!
Tim
On Mar 20, 2007, at 1:39 AM, Bill Saphir wrote:
mpi bug?
what is the best way to debug this?
any help would be appreciated!
--tim
#include <mpi.h>      /* the header names were stripped in the archive; */
#include <stdio.h>    /* these five are reconstructed from what the code uses */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int main(int c, char **v)
{
int ret;
char *host = NULL;
host = (char *) calloc(128, sizeof(char));
gethostname(host, 64);
/* init mpi */
ret
-selection
ok, using the internal interfaces only fixed the problem.
it is a little confusing that when this happens, one machine would make
it past the barrier, and the others would not.
thanks Jeff!
--tim
Geoff,
'cpu', 'slots', and 'count' all do exactly the same thing.
Tim
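(In other words, a hostfile line like "mybox cpu=4" is treated the same as the
more commonly documented "mybox slots=4".)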
On Thursday 22 March 2007 03:03 pm, Geoff Galitz wrote:
> Does the hostfile understand the syntax:
>
> mybox cpu=4
>
> I have some legacy code and scripts that I'd like to m
Steve,
This list is for supporting Open MPI, not MPICH2 (MPICH2 is an
entirely different software package). You should probably redirect
your question to their support lists.
Thanks,
Tim
On Mar 23, 2007, at 12:46 AM, Jeffrey Stephen wrote:
Hi,
I am trying to run an MPICH2 application
be
fixed in 1.2.1, and the fix is available now in the 1.2 nightly tarballs.
Hope this helps,
Tim
On Friday 30 March 2007 05:06 pm, de Almeida, Valmor F. wrote:
> Hello,
>
> I am getting this error any time the number of processes requested per
> machine is greater than the numbe
by Torque.
Alternatively, you could have wc read from stdin instead of from a file:
ncpus=`wc -l < $PBS_NODEFILE`
this will avoid the filename being printed.
Hope this helps,
Tim
On Apr 1, 2007, at 9:16 AM, Barry Evans wrote:
Hello,
Having a bit of trouble running Open MPI 1.2 under Torque
than me can help.
Thanks,
Tim
On Apr 2, 2007, at 6:12 AM, de Almeida, Valmor F. wrote:
Hi Tim,
I installed the openmpi-1.2.1a0r14178 tarball (took this
opportunity to
use the intel fortran compiler instead gfortran). With a simple
test it
seems to work but note the same messages
--
Tim Mattox - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
I'm a bright... http://www.the-brights.net/
xisting Open MPI installation.
- Always include -I for Fortran compiles, even if the prefix is
/usr/local.
- Support for "fork()" in MPI applications that use the
OpenFabrics stack (OFED v1.2 or later).
- Support for setting specific limits on registered memory.
--
Tim Mattox
Open Systems Lab
Indiana University
dLeveler on one of our machines and Open
MPI seems to work with it quite well. I would be interested in hearing how it
works for you.
Hope this helps, let me know if this works.
Thanks,
Tim
On Thursday 10 May 2007 02:57 am, Laurent Nguyen wrote:
> Hello,
>
> I tried to install Ope
On Thursday 10 May 2007 11:35 am, Laurent Nguyen wrote:
> Hi Tim,
>
> Ok, thank you for all these clarifications. I also added "static int
> pls_poe_cancel_operation(void)" similarly to you, and I can continue the
> compilation. But, I had another problem. In ompi/mpi/cxx/mpi
and I ran:
> gdb mpirun
>
> run --hostfile ../hostfile n 16 raytrace -finputs/car.env
>
> when I type
>
> backtrace
>
>
> after it crashes, it just said "no stack"
This is because you are debugging mpirun, and not your application. Mpirun
runs to completion s
gridengine.
- If pbs-config can be found, use it to look for TM support. Thanks
to Bas van der Vlies for the inspiration and preliminary work.
- Fixed a deadlock in orterun when the rsh PLS encounters some errors.
--
Tim Mattox
Open Systems Lab
Indiana University
Open MPI uses TCP, and does not use any fixed ports. We use whatever ports the
operating system gives us. At this time there is no way to specify what ports
to use.
Hope this helps,
Tim
On Friday 18 May 2007 05:19 am, Code Master wrote:
> I run my openmpi-based application in a multi-n
Hi Daniel,
I am able to replicate your problem on Mandriva 2007.1, however I'm not sure
what is going on.
I was able to build the tarball just fine though, so you may try that.
Tim
On Friday 01 June 2007 12:32:54 pm Daniel Pfenniger wrote:
> Hello,
>
> version 1.2.2 refuses
Note that since you are setting OMPI_MCA_pml to cm, OMPI_MCA_btl will have no
effect. You may try setting OMPI_MCA_pml=ob1, and trying your measurements
again, but we generally get better performance with the cm pml than the ob1
pml.
Tim
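(For example, either export OMPI_MCA_pml=ob1 in the environment before
launching, or pass it on the command line as mpirun --mca pml ob1 ... .)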
On Wednesday 06 June 2007 12:54:26 pm George Bosilca
Thanks Peter,
We'll look into this...
Tim
Peter Kjellström wrote:
Hello,
I'm playing with a copy of svn7132 that built and installed just fine. At
first everything seemed ok, unlike earlier it now runs on mvapi
automagically :-)
But then a small test program failed and th
Hello Daryl,
I believe there is a problem w/ the latest version of the bproc launcher...
Try running w/ the following to use an older version:
mpirun -mca pls_bproc_seed_priority 101
This could also be set in your system default or local MCA
parameter file.
Thanks,
Tim
Daryl W. Grunau
656.59 MB/s
0 pinged 1: 4194304 bytes 6917.91 uSec 606.30 MB/s
0 pinged 1: 8388608 bytes 14157.00 uSec 592.54 MB/s
0 pinged 1: 16777216 bytes 28329.72 uSec 592.21 MB/s
Thanks,
Tim
Daryl W. Grunau wrote:
Hi, I downloaded/installed version 1.0a1r7337 configured to run o
Daryl,
This should be fixed in svn.
Thanks,
Tim
Daryl W. Grunau wrote:
Hi, I downloaded/installed version 1.0a1r7337 configured to run on my BProcV4
IB cluster (mvapi, for now). Upon execution, I get the following warning
message, however the app appears to run to completion afterwards
Daryl,
Tim, the latest nightly fixes this - thanks! Can I report another? I
can't seem to specify -H|-host|--host ; mpirun seems to ignore the
argument:
% mpirun -np 2 -H 0,4 ./cpi
Process 0 on n0
Process 1 on n1
pi is approximately 3.1416009869231241, Err