There is a project called "MVAPICH2-GPU", which is developed by D. K.
Panda's research group at Ohio State University. You will find lots of
references on Google... and I have just briefly gone through the slides of
"MVAPICH2-GPU: Optimized GPU to GPU Communication for InfiniBand
Clusters":
http://no
On Dec 14, 2011, at 1:26 PM, Sabela Ramos Garea wrote:
> Hello,
>
> As far as I know, there is no support for some MPI-2 features, such as dynamic
> process creation or port connection, in the shared memory BTL. Are you planning
> to include this support?
It depends on what exactly you mean. Dynamic
On Dec 14, 2011, at 3:48 PM, Prentice Bisbal wrote:
> I realized this after I wrote that and clarified it in a subsequent e-mail.
> Which you probably just read. ;-)
After I sent the mail, I saw it. Oops. :-)
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://ww
On 12/14/2011 03:39 PM, Jeff Squyres wrote:
> On Dec 14, 2011, at 3:21 PM, Prentice Bisbal wrote:
>
>> For example, your configure command,
>>
>> ./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
>>
>> Doesn't tell Open MPI to use ifort for mpif90 and mpif77.
> Actually, t
On Dec 14, 2011, at 3:21 PM, Prentice Bisbal wrote:
> For example, your configure command,
>
> ./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
>
> Doesn't tell Open MPI to use ifort for mpif90 and mpif77.
Actually, that's not correct.
For Open MPI, our wrapper compil
On 12/14/2011 03:29 PM, Micah Sklut wrote:
> Okay thanks Prentice.
>
> I understand what you are saying about specifying the compilers during
> configure.
> Perhaps, that alone would have solved the problem, but removing the
> 1.4.2 ompi installation worked as well.
>
> Micah
>
Well, to clarify my
Okay thanks Prentice.
I understand what you are saying about specifying the compilers during
configure.
Perhaps, that alone would have solved the problem, but removing the 1.4.2
ompi installation worked as well.
Micah
On Wed, Dec 14, 2011 at 3:24 PM, Prentice Bisbal wrote:
>
> On 12/14/2011 01
Hello,
As far as I know, there is no support for some MPI-2 features, such as dynamic
process creation or port connection, in the shared memory BTL. Are you planning
to include this support?
Thank you.
Sabela Ramos.
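For concreteness, "dynamic process creation" here means the MPI-2
MPI_Comm_spawn interface. Below is a minimal sketch in C; the executable name
"worker" and the process count of 2 are placeholders for illustration, not
anything prescribed in the thread.

/* Minimal sketch of MPI-2 dynamic process creation (MPI_Comm_spawn).
   The child executable name "worker" is only an illustration. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    int errcodes[2];

    MPI_Init(&argc, &argv);

    /* Start two copies of "worker" and get an inter-communicator
       connecting this parent to the spawned children. */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}

Whether the spawned processes can then communicate over the shared memory BTL
is exactly the question being asked above.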
On 12/14/2011 01:20 PM, Fernanda Oliveira wrote:
> Hi Micah,
>
> I do not know if it is exactly what you need but I know that there are
> environment variables to use with intel mpi. They are: I_MPI_CC,
> I_MPI_CXX, I_MPI_F77, I_MPI_F90. So, you can set this using 'export'
> for bash, for instance
On 12/14/2011 12:21 PM, Micah Sklut wrote:
> Hi Gustav,
>
> I did read Prince's email:
>
> When I do "which mpif90", I get:
> /opt/openmpi/intel/bin/mpif90
> which is the desired directory/binary
>
> As I mentioned, the config log file indicated it was using ifort, and
> had no mention of gfortran.
I uninstalled 1.4.2 with rpm -e ompi, and now my existing mpi binaries are
working.
Thanks so much for everyone's help.
On Wed, Dec 14, 2011 at 3:12 PM, Tim Prince wrote:
> On 12/14/2011 12:52 PM, Micah Sklut wrote:
>
>> Hi Gustavo,
>>
>> Here is the output of :
>> barells@ip-10-17-153-123:~> /
On 12/14/2011 12:52 PM, Micah Sklut wrote:
Hi Gustavo,
Here is the output of :
barells@ip-10-17-153-123:~> /opt/openmpi/intel/bin/mpif90 -showme
gfortran -I/usr/lib64/mpi/gcc/openmpi/include -pthread
-I/usr/lib64/mpi/gcc/openmpi/lib64 -L/usr/lib64/mpi/gcc/openmpi/lib64
-lmpi_f90 -lmpi_f77 -lmpi
On 12/14/2011 1:20 PM, Fernanda Oliveira wrote:
Hi Micah,
I do not know if it is exactly what you need but I know that there are
environment variables to use with intel mpi. They are: I_MPI_CC,
I_MPI_CXX, I_MPI_F77, I_MPI_F90. So, you can set this using 'export'
for bash, for instance or directl
Open MPI InfiniBand gurus and/or Mellanox: could I please get some
assistance with this? Any suggestions on tunables or debugging
parameters to try?
Thank you very much.
On Mon, Dec 12, 2011, at 10:42 AM, V. Ram wrote:
> Hello,
>
> We are running a cluster that has a good number of older nodes
On Dec 14, 2011, at 12:52 PM, Micah Sklut wrote:
> I do see what you are saying about the 1.4.2 and 1.4.4 components.
> I'm not sure why that is, but there seems to be some conflict with the
> existing openmpi, before recently installed 1.4.4 and trying to install with
> ifort.
Did you instal
When it comes to intrinsic Fortran-90 functions, or to libraries provided by
the compiler vendor [e.g. MKL in the case of Intel], I do agree that they
*should* be able to parse the array-section notation and use the correct
memory layout.
However, for libraries that are not part of Fortran-90, s
Hi Micah,
I do not know if it is exactly what you need but I know that there are
environment variables to use with intel mpi. They are: I_MPI_CC,
I_MPI_CXX, I_MPI_F77, I_MPI_F90. So, you can set this using 'export'
for bash, for instance or directly when you run.
I use in my bashrc:
export I_MPI
Actually, subarray passing is part of the F90 standard (at least
according to every document I can find), and not an Intel extension. So
if it doesn't work you should complain to the compiler company. One of
the reasons for using it is that the compiler should be optimized for
whatever method
Hi Gustavo,
Here is the output of :
barells@ip-10-17-153-123:~> /opt/openmpi/intel/bin/mpif90 -showme
gfortran -I/usr/lib64/mpi/gcc/openmpi/include -pthread
-I/usr/lib64/mpi/gcc/openmpi/lib64 -L/usr/lib64/mpi/gcc/openmpi/lib64
-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynam
How about the output of this?
/opt/openmpi/intel/bin/mpif90 -showme
Anyway, something seems to be wrong with your OpenMPI installation.
Just read the output of your ompi_info in your email below.
You will see that the OpenMPI version is 1.4.4.
However, most components are version 1.4.2.
Do you ag
Hi Gustav,
I did read Prince's email:
When I do "which mpif90", I get:
/opt/openmpi/intel/bin/mpif90
which is the desired directory/binary
As I mentioned, the config log file indicated it was using ifort, and had
no mention of gfortran.
Below is the output from ompi_info. It shows reference to th
Hi Micah
Did you read Tim Prince's email to you? Check it out.
Best thing is to set your environment variables [PATH, LD_LIBRARY_PATH, intel
setup]
in your initialization file, .profile/.bashrc or .[t]cshrc.
What is the output of 'ompi_info'? [From your ifort-built OpenMPI.]
Does it show ifor
Hi Patrick
From my mere MPI and Fortran-90 user point of view,
I think that the solution offered by the MPI standard [at least up to MPI-2]
to address the problem of non-contiguous memory layouts is to use MPI
user-defined types,
as I pointed out in my previous email.
I like this solution becaus
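As an illustration of the user-defined-type approach described above, here is
a minimal sketch in C (the thread discusses Fortran, but the MPI calls are the
same; the 10x10 matrix and the column index are arbitrary choices for the
example).

/* Minimal C sketch of the MPI user-defined-type approach for
   non-contiguous data.  In real code, rank 0 would fill the array
   before the broadcast. */
#include <mpi.h>

int main(int argc, char **argv)
{
    double a[10][10];          /* 10x10 matrix, row-major in C       */
    MPI_Datatype column_type;  /* describes one strided column       */

    MPI_Init(&argc, &argv);

    /* 10 blocks of 1 double, separated by a stride of 10 doubles:
       one column of the matrix, with no temporary copy. */
    MPI_Type_vector(10, 1, 10, MPI_DOUBLE, &column_type);
    MPI_Type_commit(&column_type);

    /* Broadcast column 3 from rank 0; MPI walks the strided layout. */
    MPI_Bcast(&a[0][3], 1, column_type, 0, MPI_COMM_WORLD);

    MPI_Type_free(&column_type);
    MPI_Finalize();
    return 0;
}

The point is that MPI_Type_vector describes the strided layout once, so
MPI_Bcast can traverse the non-contiguous column directly instead of relying
on a contiguous temporary copy.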
Thanks for your thoughts,
It would certainly appear that it is a PATH issue, but I still haven't
figured it out.
When I type the ifort command, ifort does run.
The intel path is in my PATH and is the first directory listed.
Looking at the configure.log, there is nothing indicating use or mention
Hi Micah
Is ifort in your PATH?
If not, the OpenMPI configure script will use any fortran compiler it finds
first, which may be gfortran.
You need to run the Intel compiler startup script before you run the OpenMPI
configure.
The easy thing to do is to source the Intel script inside your .profi
Dear Matthieu, Rolf,
Thank you!
But normally CUDA device selection is based on the MPI process index, so the
CUDA context must exist before the MPI index is available. What is
the best practice of process<->GPU mapping in this case? Or can I
select any device prior to MPI_Init and later change to ano
Thanks all for your answers. Yes, I understand well that it is a non-contiguous
memory access problem, as MPI_BCAST expects a pointer to a valid
memory zone. But I'm surprised that with the MPI module usage Fortran does not
hide this discontinuity in a contiguous temporary copy of the
To add to this: yes, we recommend that the CUDA context exist prior to the call
to MPI_Init. That is because during MPI_Init the library attempts to register
some internal buffers with the CUDA library, which requires that a CUDA context
already exists. Note that this
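One common way to do the process<->GPU mapping asked about above is to select
the device from the node-local rank before MPI_Init. The sketch below assumes
Open MPI's mpirun exports OMPI_COMM_WORLD_LOCAL_RANK (check that your version
does), and the round-robin mapping is only one possible policy.

/* Sketch: pick a GPU from the node-local rank before MPI_Init, so that
   the CUDA context exists early.  OMPI_COMM_WORLD_LOCAL_RANK is assumed
   to be exported by mpirun; verify this for your Open MPI version. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int ndev = 0, local_rank = 0;
    const char *lr = getenv("OMPI_COMM_WORLD_LOCAL_RANK");

    if (lr != NULL)
        local_rank = atoi(lr);

    cudaGetDeviceCount(&ndev);
    if (ndev > 0) {
        cudaSetDevice(local_rank % ndev);  /* round-robin local ranks */
        cudaFree(0);                       /* force context creation  */
    }

    MPI_Init(&argc, &argv);  /* CUDA context already exists here */
    /* ... application ... */
    MPI_Finalize();
    return 0;
}

cudaFree(0) is just a cheap way to force context creation; any CUDA runtime
call that touches the device would do.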
On 12/14/2011 9:49 AM, Micah Sklut wrote:
I have installed openmpi for gfortran, but am now attempting to install
openmpi as ifort.
I have run the following configuration:
./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
The install works successfully, but when I run
/
Hi,
Processes are not spawned by MPI_Init. They are spawned beforehand, by other
programs, between your mpirun call and the moment your program starts. When it
does start, you already have all MPI processes (you can check by adding a sleep
or something like that), but they are not synchronized and do not know ea
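A tiny sketch of the "add a sleep" check suggested above (nothing here is
specific to a particular Open MPI version):

/* Print the PID and pause before MPI_Init, then inspect the nodes with
   ps/top to see that all processes were already launched by mpirun. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;

    printf("pid %d exists before MPI_Init\n", (int)getpid());
    fflush(stdout);
    sleep(30);               /* time to run 'ps' on the compute nodes */

    MPI_Init(&argc, &argv);  /* ranks only become usable from here on */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("pid %d is rank %d\n", (int)getpid(), rank);

    MPI_Finalize();
    return 0;
}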
Dear colleagues,
For the GPU Winter School powered by the Moscow State University cluster
"Lomonosov", OpenMPI 1.7 was built to test and popularize the CUDA
capabilities of MPI. There is one strange warning I cannot understand: the
OpenMPI runtime suggests initializing CUDA prior to MPI_Init. Sorry,
but how
Hi All,
I have installed openmpi for gfortran, but am now attempting to install
openmpi as ifort.
I have run the following configuration:
./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
The install works successfully, but when I run
/opt/openmpi/intel/bin/mpif90, it run
Hi all,
I am trying to get a working mpif90 on my laptop PC (Windows 7, 64-bit),
so that I can develop/test Fortran 90 MPI code before running it
on a cluster.
I have tried the 1.5.4 installer on Windows, Cygwin, installed Ubuntu,
tried Cygwin again, and now am back to the Open MPI 1.5.4 wi