--Original Message--
From: Gustavo Correa
Sender: users-boun...@open-mpi.org
To: Open MPI Users
ReplyTo: Open MPI Users
Sent: Dec 7, 2011 1:10 PM
Subject: Re: [OMPI users] orte_ess_base_select failed
Hi John Doe
I would keep it very simple, particularly if you are just starting with MPI.
Hi,
(1) How can I use MPI to speed up the time-consuming computation in the
loop of my code below?
int main(int argc, char ** argv)
{
    // some operations
    f(size);   // f() contains the time-consuming loop
    // some operations
    return 0;
}
void f(int size)   // definition truncated here; it calls complicated_computation() in a loop
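For reference, a minimal sketch of one way to do this (assumptions: complicated_computation() here is only a placeholder for the real work, and the per-iteration results can simply be summed). Each rank handles a disjoint subset of the iterations and the partial results are combined with MPI_Reduce.

#include <mpi.h>
#include <stdio.h>

/* Placeholder for the real per-iteration work. */
static double complicated_computation(int i) { return (double)i; }

int main(int argc, char **argv)
{
    int rank, nprocs, i, size = 1000000;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank takes iterations i = rank, rank+nprocs, rank+2*nprocs, ... */
    for (i = rank; i < size; i += nprocs)
        local += complicated_computation(i);

    /* Combine the per-rank partial sums on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with, say, mpirun -np 4 ./a.out, the loop work is spread over four processes.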
any other process, such
> data would have to be exchanged between the processes
> explicitly.
>
> Many codes have both OpenMP and MPI parallelization, but
> you should first familiarize yourself with the basics of MPI
> before dealing with "hybrid" codes.
>
>
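As a minimal illustration of that explicit exchange (ranks, tag, and payload are arbitrary here; run with at least two processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);      /* to rank 1 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                              /* from rank 0 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}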
Eugene Loh wrote:
> From: Eugene Loh
> Subject: Re: [OMPI users] speed up this problem by MPI
> To: "Open MPI Users"
> Date: Thursday, January 28, 2010, 8:31 PM
> Tim wrote:
>
> > Thanks, Eugene.
> >
> > I admit I am not smart enough to understand
}
--- On Thu, 1/28/10, Eugene Loh wrote:
> From: Eugene Loh
> Subject: Re: [OMPI users] speed up this problem by MPI
> To: "Open MPI Users"
> Date: Thursday, January 28, 2010, 11:40 PM
> Tim wrote:
>
> > Thanks Eugene!
> >
> > My case, after sim
> Subject: Re: [OMPI users] speed up this problem by MPI
> To: "Open MPI Users"
> Date: Friday, January 29, 2010, 12:50 AM
> Tim wrote:
>
> > Sorry, complicated_computation() and f() are
> simplified too much. They do take more inputs.
> > Among the inputs to
with serialization problems?
Are there some good references for these problems?
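One standard way to ship a mixed-type set of inputs in a single message is MPI_Pack/MPI_Unpack; the sketch below uses made-up fields n and tol purely for illustration. Derived datatypes (MPI_Type_create_struct) are the other usual option.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, n = 10;
    double tol = 1e-6;
    char buf[64];
    int pos = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Pack an int and a double into one contiguous buffer and send it. */
        MPI_Pack(&n,   1, MPI_INT,    buf, sizeof buf, &pos, MPI_COMM_WORLD);
        MPI_Pack(&tol, 1, MPI_DOUBLE, buf, sizeof buf, &pos, MPI_COMM_WORLD);
        MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receive the packed buffer and unpack the fields in the same order. */
        MPI_Recv(buf, sizeof buf, MPI_PACKED, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Unpack(buf, sizeof buf, &pos, &n,   1, MPI_INT,    MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof buf, &pos, &tol, 1, MPI_DOUBLE, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}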
--- On Fri, 1/29/10, Eugene Loh wrote:
> From: Eugene Loh
> Subject: Re: [OMPI users] speed up this problem by MPI
> To: "Open MPI Users"
> Date: Friday, January 29, 2010, 10:39 AM
> Tim wrote:
or class, right?
--- On Fri, 1/29/10, Eugene Loh wrote:
> From: Eugene Loh
> Subject: Re: [OMPI users] speed up this problem by MPI
> To: "Open MPI Users"
> Date: Friday, January 29, 2010, 11:06 AM
> Tim wrote:
>
> > Sorry, my typo. I meant to say OpenMPI
Hi,
I am learning MPI on a cluster. Here is one simple example. I expect the output
would show responses from different nodes, but they all respond from the same
node, node062. I just wonder why, and how I can actually get reports from
different nodes to show that MPI actually distributes processes to
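A short check along these lines prints where each rank actually runs; whether the ranks land on different nodes then depends on the hostfile or batch allocation given to mpirun (for example, something like mpirun -np 4 --hostfile myhosts ./a.out, where the hostfile name is assumed):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Report the host each rank is running on. */
    MPI_Get_processor_name(name, &len);
    printf("rank %d running on %s\n", rank, name);

    MPI_Finalize();
    return 0;
}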
Well said Jeff!
I look forward to seeing Open MPI's code when it's released.
Until then, I am happy to continue to use LAM/MPI for my clusters.
I wish you had called it OpenMPI though... better for googling ;-)
--
Tim Mattox - tmat...@gmail.com
http://homepage.mac.com/tmattox/
get Open MPI into Dag Wieers RPM/APT/YUM repositories... see:
http://dag.wieers.com/home-made/apt/
or the still-under-construction RPMforge site:
http://rpmforge.net/
That's more than my two cents...
--
Tim Mattox - tmat...@gmail.com
http://homepage.mac.com/tmattox/
I'm a bright... http://www.the-brights.net/
it still looks like the overhead will be
small.
Tim
Quoting Toon Knapen :
> Tim Prins wrote:
>
> > I am in the process of developing MorphMPI and have designed my
> > implementation a bit different than what you propose (my apologies
> if I
> > misunderstood what you have said). I am creating one main library,
> wh
; MKL libraries use
all available hardware threads for sufficiently large data sets).
--
Tim Prince
ox
drivers installed via the SL-provided RPMs.
Tim
I've checked the links repeatedly with "ibstatus" and they look OK. Both
nodes show a link layer of "InfiniBand".
As I stated, everything works well with MVAPICH2, so I don't suspect a
physical or link layer problem (but I could always be wrong on that).
Tim
n any
indication that anyone is supporting openmpi for ifort Windows 64-bit.
The closest openmpi thing seems to be the cygwin (gcc/gfortran) build.
Windows seems to be too crowded for so many MPI versions to succeed.
--
Tim Prince
On 05/29/2014 07:11 AM, Lorenzo Donà wrote:
I compiled openmpi 1.8.1 with intel compiler with this conf.
./configure FC=ifort CC=icc CXX=icpc
--prefix=/Users/lorenzodona/Documents/openmpi-1.8.1/
but when I run mpif90 -v I get:
Using built-in specs.
COLLECT_GCC=/opt/local/bin/gfortran-mp
same vendor, yet it thinks the transport types are
different (and one is unknown). I'm hoping someone with some experience
with how the OpenIB BTL works can shed some light on this problem...
Tim
On Fri, May 9, 2014 at 7:39 PM, Joshua Ladd wrote:
>
> Just wondering if you'
t pair. We will try your
suggestion and report back.
Thanks again!
Tim
On Thu, Jun 5, 2014 at 2:22 PM, Joshua Ladd wrote:
> Strange indeed. This info (remote adapter info) is passed around in the
> modex and the struct is locally populated during add procs.
>
> 1. How do you launch
Hi Josh,
I asked one of our more advanced users to add the "-mca btl_openib_if_include
mlx4_0:1" argument to his job script. Unfortunately, the same error
occurred as before.
We'll keep digging on our end; if you have any other suggestions, please
let us know.
Tim
On Thu, Jun
mpifort to link, you would use -lstdc++ in place of -lmpi
-lgfortran.
--
Tim Prince
ided InfiniBand
stack. I'll do some more poking, but at least now I've got something
semi-solid to poke at. Thanks for all of your help; I've attached the
results of "ibv_devinfo -v" for both systems, so if you see anything else
that jumps out at you, please let me know.
Tim
fort co-arrays
will not cooperate with the presence of OpenMPI.
--
Tim Prince
On 9/12/2014 6:14 AM, JR Cary wrote:
This must be a very old topic.
I would like to run mpi with one process per node, e.g., using
-cpus-per-rank=1. Then I want to use openmp inside of that.
But other times I will run with a rank on each physical core.
Inside my code I would like to detect wh
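A minimal sketch of that kind of run-time check (assuming the hybrid case uses MPI_THREAD_FUNNELED, i.e. only the main thread of each rank calls MPI): query the thread-support level MPI actually provides and the OpenMP thread count available to each rank. How many cores a rank owns still depends on the mpirun mapping/binding options used to launch the job.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, provided;

    /* FUNNELED: only the main thread of each rank makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        #pragma omp master
        printf("rank %d: provided=%d, omp threads=%d\n",
               rank, provided, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

Compiled with something like mpicc -fopenmp, comparing the reported OpenMP thread count against 1 is often enough to distinguish the two launch styles.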
On 9/12/2014 9:22 AM, JR Cary wrote:
On 9/12/14, 7:27 AM, Tim Prince wrote:
On 9/12/2014 6:14 AM, JR Cary wrote:
This must be a very old topic.
I would like to run mpi with one process per node, e.g., using
-cpus-per-rank=1. Then I want to use openmp inside of that.
But other times I
Check with ldd in case you didn't update the .so path.
Sent via the ASUS PadFone X mini, an AT&T 4G LTE smartphone
Original Message
From:John Bray
Sent:Mon, 17 Nov 2014 11:41:32 -0500
To:us...@open-mpi.org
Subject:[OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does
ave a CUDA installation, which I'd like to
leverage too, if possible). I'm still fairly new to the ins and outs of
this, so I may have missed something obvious. Please let me know if any
other info is required.
Many thanks and kind regards,
Tim
--
Timothy Jim, PhD Researcher i
2.1.0" - did it possibly
pick up the paths by accident?
Regarding the lib directory, I checked that the path physically exists.
Regarding the final part of the email, is it a problem that 'undefined
reference' is appearing?
Thanks and regards,
Tim
On 22 May 2017 at 06:54, Reuti wrote:
>
Dear Reuti,
Thanks for the reply. What options do I have to test whether it has
successfully built?
Thanks and kind regards.
Tim
On 22 May 2017 at 19:39, Reuti wrote:
> Hi,
>
> > Am 22.05.2017 um 07:22 schrieb Tim Jim :
> >
> > Hello,
> >
> > Thanks for you
Thanks for the thoughts, I'll give it a go. For reference, I have installed
it in the opt directory, as that is where I have kept my installs
currently. Will this be a problem when calling mpi from other packages?
Thanks,
Tim
On 24 May 2017 06:30, "Reuti" wrote:
> Hi,
>
he runtime system
figures out what is going on.
If not, do any users know of another MPI implementation that might
work for this use case? As far as I can tell, FT-MPI has been pretty
quiet the last couple of years?
Thanks in advance,
Tim
ompi-server, do you think this could be sufficient to isolate the
failures?
Cheers,
Tim
On 10 June 2017 at 00:56, r...@open-mpi.org wrote:
> It has been a while since I tested it, but I believe the --enable-recovery
> option might do what you want.
>
>> On Jun 8, 2017, at 6:17
Hi Ralph,
Thanks for the quick response.
Just tried again not under slurm, but the same result... (though I
just did kill -9 orted on the remote node this time)
Any ideas? Do you think my multiple-mpirun idea is worth trying?
Cheers,
Tim
```
[user@bud96 mpi_resilience]$
/d/home/user/2017
s extended grids too).
snipped tons of stuff rather than attempt to reconcile top postings
--
Tim Prince
e extent that a build made with 10.1 would work with 11.1
libraries.
The most recent Intel library compatibility break was between MKL 9 and 10.
--
Tim Prince
On 8/12/2010 6:04 PM, Michael E. Thomadakis wrote:
On 08/12/10 18:59, Tim Prince wrote:
On 8/12/2010 3:27 PM, Ralph Castain wrote:
Ick - talk about confusing! I suppose there must be -some- rational
reason why someone would want to do this, but I can't imagine what
it would be
I
diagnostic from Intel MPI run-time, due to multiple uses of the
same buffer. Moral: even if it works for you now with openmpi, you
could be setting up for unexpected failure in the future.
--
Tim Prince
seful introduction to affinity. It's available in a default
build, but not enabled by default.
If you mean something other than this, explanation is needed as part of
your question.
taskset() or numactl() might be relevant, if you require more detailed
control.
--
Tim Prince
On 9/27/2010 12:21 PM, Gabriele Fatigati wrote:
Hi Tim,
I have read that link, but I haven't understood whether enabling processor
affinity also enables memory affinity, because it is written that:
"Note that memory affinity support is enabled only when processor
affinity is enabled"
On 9/27/2010 2:50 PM, David Singleton wrote:
On 09/28/2010 06:52 AM, Tim Prince wrote:
On 9/27/2010 12:21 PM, Gabriele Fatigati wrote:
Hi Tim,
I have read that link, but I haven't understood whether enabling processor
affinity also enables memory affinity, because it is written that:
"
s'; no argument required
ld: libhdf5_fortran.so.6: No such file: No such file or directory
Do -Wl,-rpath and -Wl,-soname= work any better?
--
Tim Prince
normal linux conventions a directory named /lib/ as opposed to
/lib64/ would contain only 32-bit libraries. If gentoo doesn't conform
with those conventions, maybe you should do your learning on a distro
which does.
--
Tim Prince
er
version you mentioned should resolve this automatically.
--
Tim Prince
ch an option, I used to do:
sudo
source .. compilervars.sh
make install
--
Tim Prince
mpile line options for the application to change the default
integer and real to 64-bit. I wasn't aware of any reluctance to use
MPI_INTEGER8.
--
Tim Prince
PI_Recv() were a larger data type, you would
see this limit. Did you look at your ?
--
Tim Prince
anyone try mixing
auto-parallelization with MPI; that would require MPI_THREAD_MULTIPLE
but still appears unpredictable. MPI_THREAD_FUNNELED is used often with
OpenMP parallelization inside MPI.
--
Tim Prince
ones. It's the same as on linux (and, likely, Windows).
--
Tim Prince
ur -mca affinity settings. Even if
the defaults don't choose optimum mapping, it's way better than allowing
them to float as you would with multiple independent jobs running.
--
Tim Prince
ar
with marketing. I haven't seen an equivalent investigation for the
6-core CPUs, where various strange performance effects have been noted,
so, as Jeff said, the hyperthreading effect could be "in the noise."
--
Tim Prince
the same scope. If so, it seems good that the compiler catches it.
--
Tim Prince
On 2/23/2011 6:41 AM, Prentice Bisbal wrote:
Tim Prince wrote:
On 2/22/2011 1:41 PM, Prentice Bisbal wrote:
One of the researchers I support is writing some Fortran code that uses
Open MPI. The code is being compiled with the Intel Fortran compiler.
This one line of code:
integer ierr
On 2/23/2011 8:27 AM, Prentice Bisbal wrote:
Jeff Squyres wrote:
On Feb 23, 2011, at 9:48 AM, Tim Prince wrote:
I agree with your logic, but the problem is where the code containing
the error is coming from - it's coming from a header file that's a
part of Open MPI, which make
n compiler/library are reasonably up to date, you will
need to specify action='read', as opening with the default readwrite
will lock out other processes.
--
Tim Prince
evaluate to 'r' (readonly).
--
Tim Prince
as -xW.
It's likely that no one has verified OpenMPI with a compiler of that
vintage. We never used the 32-bit compiler for MPI, and we encountered
run-time library bugs for the ifort x86_64 which weren't fixed until
later versions.
--
Tim Prince
ther
this is due to excessive eviction of data from cache; not a simple
question, as most recent CPUs have 3 levels of cache, and your
application may require more or less data which was in use prior to the
message receipt, and may use immediately only a small piece of a large
message.
--
Tim Prince
e the guess that the performance difference under
discussion referred to a single node.
--
Tim Prince
On 3/28/2011 3:29 AM, Michele Marena wrote:
Each node has two processors (no dual-core).
which seems to imply that the 2 processors share memory space and a
single memory bus, and the question is not about what I originally guessed.
--
Tim Prince
torage issue about
cache eviction to arise.
--
Tim Prince
you consider ganglia et al?
I cannot use ssh to access each node.
How can MPI run?
The program takes 8 hours to finish.
--
Tim Prince
%29
where you should see that you must take care to set option -std=c++0x
when using current under icpc, as it is treated as a c++0x
feature. You might try adding the option to the CXXFLAGS or whatever
they are called in openmpi build (or to the icpc.cfg in your icpc
installation).
--
Tim Prince
./TDLSM() [0x404c19]
[panic:20659] *** End of error message ***
So my question is: Can I intermix the C and FORTRAN APIs within one
program? Oh and also I think the cluster I will eventually run this on
(cx1.hpc.ic.ac.uk, if anyone is from Imperial) doesn't use OpenMP, so
what about other MPI implementations?
Many thanks,
Tim
On 5/6/2011 7:58 AM, Tim Hutt wrote:
Hi,
I'm trying to use PARPACK in a C++ app I have written. This is a
Fortran MPI routine used to calculate SVDs. The simplest way I found
to do this is to use f2c to convert it to C, and then call the
resulting functions from my C++ code.
However PA
On 6 May 2011 16:27, Tim Prince wrote:
> If you want to use the MPI Fortran library, don't convert your Fortran to C.
> It's difficult to understand why you would consider f2c a "simplest way,"
> but at least it should allow you to use ordinary C MPI function calls.
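For the communicator in particular, the usual bridge when calling Fortran MPI code (such as a PARPACK driver) from C or C++ is MPI_Comm_c2f. A minimal sketch of the conversion follows; the printf is only there to demonstrate it, and in real use the resulting handle is passed as the Fortran routine's INTEGER comm argument.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* C and Fortran use different handle types; MPI_Comm_c2f bridges them. */
    MPI_Fint fcomm = MPI_Comm_c2f(MPI_COMM_WORLD);
    printf("Fortran handle for MPI_COMM_WORLD: %d\n", (int)fcomm);

    MPI_Finalize();
    return 0;
}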
On 6 May 2011 16:45, Tim Hutt wrote:
> On 6 May 2011 16:27, Tim Prince wrote:
>> If you want to use the MPI Fortran library, don't convert your Fortran to C.
>> It's difficult to understand why you would consider f2c a "simplest way,"
>> but at least
On 5/6/2011 10:22 AM, Tim Hutt wrote:
On 6 May 2011 16:45, Tim Hutt wrote:
On 6 May 2011 16:27, Tim Prince wrote:
If you want to use the MPI Fortran library, don't convert your Fortran to C.
It's difficult to understand why you would consider f2c a "simplest way,"
bu
if you had 2 versions of the
same named compiler?
--
Tim Prince
MPI has affinity with Westmere
awareness turned on by default. I suppose testing without affinity
settings, particularly when banging against all hyperthreads, is a more
severe test of your application. Don't you get better results at 1
rank per core?
--
Tim Prince
form very close, if using equivalent
settings, when working within the environments for which both are suited.
--
Tim Prince
On 7/12/2011 11:06 PM, Mohan, Ashwin wrote:
Tim,
Thanks for your message. I was, however, not clear about your suggestions. I would
appreciate it if you could clarify.
You say," So, if you want a sane comparison but aren't willing to study the compiler
manuals, you might use (if your s
02b6e7e7f4000)
libintlc.so.5 =>
/opt/intel/Compiler/11.1/072/lib/intel64/libintlc.so.5 (0x2b6e7ea0a000)
--
Tim Prince
irect instruction set translations which
shouldn't vary from -O1 on up nor with linkage options nor be affected
by MPI or insertion of WRITEs.
--
Tim Prince
ral linuxes and mac and it
works fine there.
Not all Windows compilers work well enough with all threading models
that you could expect satisfactory results; in particular, the compilers
and thread libraries you use on linux may not be adequate for Windows
thread support.
--
Tim Prince
iles should be built with
-fPIC or similar. Ideally, the configure and build tools would enforce this.
--
Tim Prince
On 9/21/2011 12:22 PM, Blosch, Edwin L wrote:
Thanks Tim.
I'm compiling source units and linking them into an executable. Or perhaps you
are talking about how OpenMPI itself is built? Excuse my ignorance...
The source code units are compiled like this:
/usr/mpi/intel/openmpi-1.4.
determine the root of this error for the past week,
but with no success.
Any help would be greatly appreciated.
Thank you,
Tim
Package: Open MPI root@intel16 Distribution
Open MPI: 1.4.4
Open MPI SVN revision: r25188
Open MPI release date: Sep 27, 2011
rank pinned to a single L3 cache.
All 3 MPI implementations which were tested have full shared memory
message passing and pinning to local cache within each node (OpenMPI and
2 commercial MPIs).
--
Tim Prince
e your PATH and LD_LIBRARY_PATH
correctly simply by specifying absolute path to mpif90.
--
Tim Prince
ntel cluster checker, you will see noncompliance if anyone's
MPI is on the default paths. You must set paths explicitly according to
the MPI you want. Admittedly, that tool didn't gain a high level of
adoption.
--
Tim Prince
rying to install
with ifort.
This is one of the reasons for recommending complete removal (rpm -e if
need be) of any MPI which is on a default path (and setting a clean
path) before building a new one, as well as choosing a unique install
path for the new one.
--
Tim Prince
ect.<http://www.csi.cuny.edu/tobaccofree>
Tobacco-Free Campus as of July 1, 2012.
--
---
Tim Carlson, PhD
Senior Research Scientist
Environmental Molecular Sciences Laboratory
to be implemented in parallelizable fashion (not SSOR style
where each line uses updates from the previous line), it should be
feasible to divide the outer loop into an appropriate number of blocks,
or decompose the physical domain and perform ADI on individual blocks,
then update and repeat.
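A sketch of the block split itself (assumption: N outer iterations divided as evenly as possible over the ranks; the halo exchange for block boundaries is omitted):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs, N = 1024;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Split N iterations as evenly as possible: the first rem ranks get
     * one extra iteration. */
    int base = N / nprocs, rem = N % nprocs;
    int start = rank * base + (rank < rem ? rank : rem);
    int count = base + (rank < rem ? 1 : 0);

    /* This rank sweeps rows [start, start + count); after each sweep the
     * block-boundary rows would be exchanged with neighbouring ranks. */

    MPI_Finalize();
    return 0;
}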
-
libraries would be linked
automatically.
There was a single release of the compiler several years ago (well out
of support now) where that sse2 library was omitted, although the sse3
version was present.
--
Tim Prince
> FILENAME and > log & these generate files
however they are empty. Any help would be appreciated.
If you run under screen your terminal output should be collected in
screenlog. Beats me why some sysadmins don't see fit to install screen.
--
Tim Prince
ing the SSH launcher on an RHEL 6 derivative, you
might give this a try. It's an SSH issue, not an OpenMPI one.
Regards,
Tim
On Thu, Apr 12, 2012 at 9:04 AM, Seyyed Mohtadin Hashemi
wrote:
> Hello,
>
> I have a very peculiar problem: I have a micro cluster with three nodes (18
> co
compilers, so as to configure it to keep
it off the default PATHs (e.g. --prefix=/opt/ompi1.4gf/), if you can't
move the Ubuntu ompi.
Surely most of this is implied in the OpenMPI instructions.
--
Tim Prince
ler you are using, it appears that either you didn't build the
netcdf Fortran 90 modules with that compiler, or you didn't set the
include path for the netcdf modules. This would work the same with
mpif90 as with the underlying Fortran compiler.
--
Tim Prince
rning)
cmsolver.cpp
Which indicates that the openmpi version "openmpi_v1.6-x64" is 64 bit.
And I'm sure that I installed the 64 bit version. I am compiling on a
64 bit version of Windows 7.
setting X64 compiler project options?
--
Tim Prince
int where the compiler
(gcc?) complains.
I suppose you must have built mpicc yourself; you would need to assure
that the mpicc on PATH is the one built with the C compiler on PATH.
--
Tim Prince
ask }; }
^
Looks like your icpc is too old to work with your g++. If you want to
build with C++ support, you'll need better matching versions of icpc and
g++. icpc support for g++4.7 is expected to release within the next
month; icpc 12.1 should be fine with g++ 4.5 and 4.6.
--
Tim Prince
h".
Do you have a reason for withholding information such as which Windows
version you want to support, and your configure commands?
--
Tim Prince
d you would want to find a way to avoid any
reference to that library, possibly by avoiding sourcing that part of
ifort's compilervars.
If you want a response on this subject from the Intel support team,
their HPC forum might be a place to bring it up:
http://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology
--
Tim Prince
than the rule.
--
Tim Prince
en't duplicated by the newer ones.
You also need the 64-bit g++ active.
--
Tim Prince
On 05/16/2013 10:13 PM, Tim Prince wrote:
On 5/16/2013 2:16 PM, Geraldine Hochman-Klarenberg wrote:
Maybe I should add that my Intel C++ and Fortran compilers are
different versions. C++ is 12.0.2 and Fortran is 13.0.2. Could that
be an issue? Also, when I check for the location of ifort, it
ld yourself, even if you use a
(preferably more up to date version of) gcc, which you can use along
with one of the commercial Fortran compilers for linux.
--
Tim Prince
d the compiler used to build the application. So there really
isn't a good incentive to retrogress away from the USE files simply to
avoid one aspect of mixing incompatible compilers.
--
Tim Prince
--disable-mpi-f77
checking if MCA component btl:openib can compile... no
Tim
Thanks Tom, that sounds good. I will give it a try as soon as our Phi host
here gets installed.
I assume that all the prerequisite libs and bins on the Phi side are
available when we download the Phi