Hi,
I'm a newbie, and I want to test the checkpoint/restart mechanism included in
OpenMPI v 1.3.3. I have tried to search for some documentation about how to
install the OpenMPI implementation in order to support checkpoint/restart, and
I found lots of links to
h
Hi Andreea,
I compiled an installation of OpenMPI with checkpoint/restart support,
it is working fine, and I'm now trying to integrate it with SGE.
Did you set the right options when compiling OpenMPI?
Did you install BLCR first?
I used these options:
./configure --prefix=/opt/cesga/openmpi-1.3.3
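For reference, a checkpoint/restart-enabled build would be configured roughly as sketched below, using the fault-tolerance flags that come up later in this thread. The prefixes are placeholders, and BLCR has to be built and installed before Open MPI so that --with-blcr can point at it:

```shell
# Build sketch for a checkpoint/restart-enabled Open MPI 1.3.x.
# BLCR must already be installed; both paths here are only examples.
./configure --prefix=/opt/openmpi-1.3.3 \
    --with-ft=cr \
    --enable-ft-thread \
    --enable-mpi-threads \
    --with-blcr=/usr/local/blcr
make all install
```

Without --with-ft=cr and a BLCR installation, the checkpoint/restart machinery is simply not compiled in, which is consistent with the symptom described above.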
Ok, thanks for your advice. Actually I did not install BLCR... As I
said, I was unable to find proper documentation and I just assumed
it was all included :)
I will try to see what each parameter is for and I will install OpenMPI once
again.
Thanks again,
Andreea
--- On Fri, 10/30/09, Ser
Hi!
I'm trying to compile a Fortran file. I did not code it myself and am not
familiar with its detailed workings; I'm interested in the program it will
produce after compiling.
Along with the file I also received a command line.
My OS is openSuse 11.1. As I need openMPI and the Intel co
A copy of the configure line for Open MPI would be helpful. Which Intel
compiler are you using (version and bitness)? Can you run file on
libmpi_f77.so? Also, are you sure that /usr/local/lib is where you
installed your Open MPI build and that it isn't something latent?
--td
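The file check suggested here can be scripted as below; the library path is only an example and should be replaced with the actual Open MPI install location:

```shell
# Report the architecture a shared library was built for.
# The path below is a placeholder install location.
LIB=/usr/local/lib/libmpi_f77.so
if [ -e "$LIB" ]; then
    # Prints e.g. "ELF 32-bit LSB shared object, Intel 80386, ..."
    # versus    "ELF 64-bit LSB shared object, x86-64, ..."
    file "$LIB"
else
    echo "not found: $LIB"
fi
```

A 32-bit ifort cannot link against a 64-bit libmpi_f77.so, which is exactly the mismatch being diagnosed in the messages that follow.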
Date: Fri, 30 Oct 2
This is the configure line I used:
./configure OMPI_F77=/opt/intel/Compiler/11.1/056/bin/ia32/ifort
--with-wrapper-fflags='-compiler -O3 -ip -pad -xW -w -O2'
OMPI_FFLAGS='-compiler -O3
-ip -pad -xW -w -O2'
The specifications were included on the basis that they are used in the
compile command I re
Georg,
I think your problem is that you are using an ia32 (32-bit) compiler with a
64-bit library. Either use the intel64 compiler or build
Open MPI as a 32-bit library.
--td
*Subject:* Re: [OMPI users] Fortran Library Problem using openMPI
*From:* Georg A. Reichstein (/rei
Let me try this one more time.
Your application is being built with the 32-bit (ia32) compiler. However,
the Open MPI libraries appear to have been built with the 64-bit (intel64)
compiler. One or the other needs to change.
--td
Also, is the configure line you gave below the application's configure
line? I was actually asking for the Open MPI configure line.
--td
Terry,
Thanks for your input so far. I'll try changing the compiler to the 64-bit
version. I may have been mistaken in assuming that my openSuse is a 32-bit
system when in fact it could well be 64-bit (which might explain why Open MPI
installed the 64-bit library after the system check).
The configure
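One quick way to settle the 32-bit vs 64-bit question, assuming a Linux system like openSuse:

```shell
# Print the kernel architecture:
# "x86_64" indicates a 64-bit system, "i586"/"i686" a 32-bit one.
uname -m
```

On a 64-bit system, Open MPI's configure will default to a 64-bit build, which would explain the library mismatch seen earlier in this thread.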
Hi All,
I compiled OpenMPI on Windows Server 2003 through Cygwin and also through CMake
and Visual Studio. Both methods compiled successfully.
In Cygwin I configured with the following command:
./configure
--enable-mca-no-build=timer-windows,memory_mallopt,maffinity,paffinity
without these f
Terry,
you were right! Thanks a lot - I never thought about double checking the
bitness :)
Now it compiled fine. All I have to do is check if the program works ... but
that is something between me and the developers.
Any remarks on the options I handed to the compiler? It worked with them but
I a
good part of the day,
I am trying to run a parallel program (that used to run on a cluster) on my
dual-core PC. Can OpenMPI simulate the distribution of the parallel jobs
across my 2 cores? That is, will qsub work even if it is not a real cluster?
thank you for reading my message and for an
On Friday 30 October 2009, Konstantinos Angelopoulos wrote:
> good part of the day,
>
> I am trying to run a parallel program (that used to run in a cluster) in my
> double core pc. Could openmpi simulate the distribution of the parallel
> jobs to my 2 processors
If your program is an MPI program
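Assuming the program is an MPI program, it can be launched on a dual-core PC directly with mpirun; no batch system or qsub is required for that. A minimal sketch (the program name is hypothetical):

```shell
# Run two MPI ranks on the local machine, one per core.
# ./my_mpi_prog is a placeholder for the actual compiled program.
if command -v mpirun >/dev/null 2>&1 && [ -x ./my_mpi_prog ]; then
    mpirun -np 2 ./my_mpi_prog
else
    echo "mpirun or ./my_mpi_prog not available"
fi
```

qsub only enters the picture once a resource manager such as Torque/PBS or SGE is installed, as the next reply explains.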
On Fri, Oct 30, 2009 at 4:51 AM, Andreea m. (Costea) wrote:
> Hi,
>
> I'm a newbie, and I want to test the checkpoint/restart mechanism included
> in OpenMPI v 1.3.3. I have tried to search for some documentation about how
> to install the OpenMPI implementation in order to support
> checkpoint/
Hi Konstantinos, list
If you want "qsub" you need to install the resource manager /
queue system on your PC.
Assuming your PC is a Linux box: if your resource manager
is Torque/PBS, on some Linux distributions it can be installed
from an rpm through yum (or an equivalent mechanism), for instance.
I a
Hi Basant,
I am not familiar with Windows builds of Open MPI. However, can you check
whether your Open MPI build actually created an mca_paffinity_windows dll? I
could imagine the issue might be that the dll is not finding a needed
dependency. Under Windows, is there a command similar to Unix's ldd
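On Unix the dependency check would look like the sketch below; the component path is only a guess at a typical install layout. Under Windows, tools such as Dependency Walker or "dumpbin /dependents" play a similar role:

```shell
# List the shared libraries an Open MPI component depends on; any line
# reading "not found" marks a missing dependency.
# The component path is a placeholder.
DLL=/usr/local/lib/openmpi/mca_paffinity_linux.so
if command -v ldd >/dev/null 2>&1 && [ -e "$DLL" ]; then
    ldd "$DLL"
else
    echo "ldd or $DLL not available"
fi
```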
Even with the original way to create the matrices, one can use
MPI_Type_create_struct to create an MPI datatype (http://web.mit.edu/course/13/13.715/OldFiles/build/mpich2-1.0.6p1/www/www3/MPI_Type_create_struct.html
) using MPI_BOTTOM as the origin for the displacements.
george.
On Oct 29, 2009, a
Hi All,
I got a problem when trying to checkpoint an MPI job.
I would really appreciate it if you could help me fix it.
The BLCR package was installed successfully on the cluster.
I configured OpenMPI with these flags:
./configure --with-ft=cr --enable-ft-thread --enable-mpi-threads
--with-blcr=/
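With a build like that in place, the checkpoint workflow is roughly the sketch below. The program name, rank count, and snapshot name are placeholders, and the snapshot directory naming may differ between versions:

```shell
# Checkpoint/restart workflow sketch for Open MPI 1.3.x with BLCR.
if command -v ompi-checkpoint >/dev/null 2>&1 && [ -x ./my_mpi_prog ]; then
    # 1. Launch with the fault-tolerance AMCA parameter set enabled.
    mpirun -np 4 -am ft-enable-cr ./my_mpi_prog &
    MPIRUN_PID=$!
    # 2. Checkpoint the job by the PID of mpirun (from another shell).
    ompi-checkpoint "$MPIRUN_PID"
    # 3. Restart later from the global snapshot that was written:
    # ompi-restart ompi_global_snapshot_${MPIRUN_PID}.ckpt
else
    echo "checkpoint tools or ./my_mpi_prog not available"
fi
```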
Wouldn't you need to create a different datatype for each matrix
instance? E.g., let's say you create twelve 5x5 matrices. Wouldn't you
need twelve different derived datatypes? I would think so because each
time you create a matrix, the footprint of that matrix in memory will
depend on the w
Thanks for the replies guys! Definitely two suggestions worth trying. I
hadn't considered a derived datatype. I wasn't really sure that the
MPI_Send call overhead was significant enough that increasing the buffer
size and decreasing the number of calls would cause any speedup. Will
change t