Dear Peter,
Your suggestions did work; there were no errors during make and
make install. But it did not get installed with mpif90 support. I tried to
compile my MPI code, but it gave the following error:
[vighnesh@test_node SIVA]$ /share/apps/mpi/openmpi/intel/bin/mpif90 code.f
-o code.exe
Dear Peter,
I got information from the net that Open MPI requires the F77 bindings for
F90 support. That's where I was making my mistake: I didn't configure the
F77 bindings during the Open MPI setup. I rectified my mistake, and after
that Open MPI was installed successfully for both the PGI and Intel
compilers.
It was
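For reference, in Open MPI 1.3.x the F90 bindings (mpif90) are layered on
top of the F77 bindings, so both Fortran compilers must be detected at
configure time. A minimal sketch of such an invocation, assuming the Intel
compilers and the install prefix shown above:

$ ./configure --prefix=/share/apps/mpi/openmpi/intel \
      CC=icc CXX=icpc F77=ifort FC=ifort
$ make all install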
Dear Sir,
I am sending the details as follows:
1. I am using openmpi-1.3.3 and BLCR 0.8.2.
2. I installed BLCR 0.8.2 first, under /root/MS.
3. Then I installed Open MPI 1.3.3 under /root/MS.
4. I configured and installed Open MPI as follows:
#./configure --with-ft=cr --enable-mpi-threads --
Hi,
from what you describe below, it seems as if you did not configure
Open MPI correctly.
You issued
./configure --with-ft=cr --enable-mpi-threads --with-blcr=/usr/local/bin
--with-blcr-libdir=/usr/local/lib
while, according to the installation paths you gave, it should have been
more like
./confi
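The full command is cut off above, but given the BLCR install prefix of
/root/MS mentioned earlier in the thread, a plausible corrected invocation
would look something like this (the exact paths are an assumption):

$ ./configure --with-ft=cr --enable-mpi-threads \
      --with-blcr=/root/MS --with-blcr-libdir=/root/MS/lib

The point is that --with-blcr takes the BLCR installation prefix, not a bin
directory.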
Hi Ankur,
try this command:
$ mpirun -np 2 -host firstHostIp,secondHostIp a.out
For details, read the manual page for "mpirun":
$ man mpirun
Regards,
On Wed, Sep 30, 2009 at 3:22 PM, ankur pachauri wrote:
> Dear all,
>
> I have been able to install open mpi on tw
Hi,
A Fortran application compiled with ifort 10.1 and Open MPI 1.3.1 on
CentOS 5.2 fails after running for 4 days with the following error
message:
[compute-0-7:25430] *** Process received signal ***
[compute-0-7:25433] *** Process received signal ***
[compute-0-7:25433] Signal: Bus error
Hi Vipin,
Thanks for the answer, but one thing more: does Open MPI have slightly
different library functions than MPI, or is its usage different (such that
I'll have to use ompi_** instead of mpi_** functions)?
Thanks in advance.
On Thu, Oct 1, 2009 at 2:53 PM, vipin kumar wrote:
> Hi Ankur,
>
> try this co
The following is the information regarding the error. I am running Open MPI
1.2.5 on Ubuntu 4.2.4, kernel version 2.6.24.
I ran the server program as "mpirun -np 1 server". This program gave me the
port name 0.1.0:2000 as output. I used this port name value as the
command-line argument for the client pr
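The message is cut off, but the pattern being described is the MPI-2
client/server connection model: the server opens a port, prints its name,
and the client passes that name to MPI_Comm_connect. A minimal sketch of
the server side (the client would call MPI_Comm_connect(argv[1],
MPI_INFO_NULL, 0, MPI_COMM_SELF, &server) with the printed port name as its
argument; all names here are illustrative):

/* server.c -- open a port, print its name, accept one client */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port);   /* yields a name like "0.1.0:2000" */
    printf("port name: %s\n", port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
    /* ... exchange messages over 'client' ... */
    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}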
Open MPI is a compliant MPI-2.1 implementation, meaning that your MPI
applications are source compatible with other MPI 2.1
implementations. In short: use MPI_Send and all the other MPI_*
functions that you're used to.
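In other words, code like this compiles and runs unchanged under Open MPI;
there are no ompi_-prefixed variants at the application level (a trivial
sketch):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, n = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        MPI_Send(&n, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&n, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}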
On Oct 1, 2009, at 6:36 AM, ankur pachauri wrote:
hi vipin,
thanks
Hello,
We have a 64-bit Linux box. For a number of reasons I need to build a 32-bit
Open MPI.
I have searched the FAQ and archived mail, but I couldn't find a good answer.
There are some references to this question in the developer mailing list,
with no clear response.
Am I looking for
You probably just want to pass the relevant compiler/linker flags to
Open MPI's configure script, such as:
./configure CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 ...
You need to pass the flags for all four languages (C, C++, F77, F90).
I used -m32 as an example here; use whatever f
I just upgraded to the devel snapshot of 1.4a1r22031.
When I run a simple hello world with a barrier, I get:
btl_tcp_endpoint.c:484:mca_btl_tcp_endpoint_recv_connect_ack] received
unexpected process identifier
If I pull the barrier out, the hello world runs fine.
Interestingly enough, I can run IMB
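For context, a reproducer of the kind described would be as small as this
(an assumption; the poster's actual code is not shown):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("hello from rank %d\n", rank);
    MPI_Barrier(MPI_COMM_WORLD);   /* the reported failure occurs here */
    MPI_Finalize();
    return 0;
}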
We had a question from a user who had turned on memory debugging in TotalView
and experienced a memory event error, "Invalid memory alignment request."
Having a 1.3.3 build of Open MPI handy, I tested it and, sure enough, saw the
error. I traced it down to, surprise, a call to memalign. I find ther
On Oct 1, 2009, at 1:24 PM, Michael Di Domenico wrote:
I just upgraded to the devel snapshot of 1.4a1r22031
when i run a simple hello world with a barrier i get
btl_tcp_endpoint.c:484:mca_btl_tcp_endpoint_recv_connect_ack] received
unexpected process identifier
I have seen this failure ove
Hi,
I think Jeff has already addressed this problem.
https://svn.open-mpi.org/trac/ompi/changeset/21744
--
Samuel K. Gutierrez
Los Alamos National Laboratory
On Oct 1, 2009, at 11:25 AM, Peter Thompson wrote:
We had a question from a user who had turned on memory debugging in
TotalView and
On Thu, Oct 1, 2009 at 1:37 PM, Jeff Squyres wrote:
> On Oct 1, 2009, at 1:24 PM, Michael Di Domenico wrote:
>
>> I just upgraded to the devel snapshot of 1.4a1r22031
>>
>> when i run a simple hello world with a barrier i get
>>
>> btl_tcp_endpoint.c:484:mca_btl_tcp_endpoint_recv_connect_ack] rece
FWIW, I saw this bug exhibit race-condition-like behavior. I could
run a few times and then it would work.
On Oct 1, 2009, at 1:42 PM, Michael Di Domenico wrote:
On Thu, Oct 1, 2009 at 1:37 PM, Jeff Squyres
wrote:
> On Oct 1, 2009, at 1:24 PM, Michael Di Domenico wrote:
>
>> I just upgrad
Did that make it over to the v1.3 branch?
On Oct 1, 2009, at 1:39 PM, Samuel K. Gutierrez wrote:
Hi,
I think Jeff has already addressed this problem.
https://svn.open-mpi.org/trac/ompi/changeset/21744
--
Samuel K. Gutierrez
Los Alamos National Laboratory
On Oct 1, 2009, at 11:25 AM, Peter
Hmm, I don't recall seeing that...
On Thu, Oct 1, 2009 at 1:51 PM, Jeff Squyres wrote:
> FWIW, I saw this bug to have race-condition-like behavior. I could run a
> few times and then it would work.
>
> On Oct 1, 2009, at 1:42 PM, Michael Di Domenico wrote:
>
>> On Thu, Oct 1, 2009 at 1:37 PM, Je
Ticket created (#2040). I hope it's okay ;-).
--
Samuel K. Gutierrez
Los Alamos National Laboratory
On Oct 1, 2009, at 11:58 AM, Jeff Squyres wrote:
Did that make it over to the v1.3 branch?
On Oct 1, 2009, at 1:39 PM, Samuel K. Gutierrez wrote:
Hi,
I think Jeff has already addressed thi
On Thu, 2009-10-01 at 13:58 -0400, Jeff Squyres wrote:
> Did that make it over to the v1.3 branch?
No, it didn't. And memalign is obsolete according to the manpage;
posix_memalign is the one to use.
> >
> > I think Jeff has already addressed this problem.
> >
> > https://svn.open-mpi.org/trac/ompi
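For reference, the posix_memalign usage being recommended looks like this;
the alignment and size values are purely illustrative:

#include <stdlib.h>

int main(void)
{
    void *p = NULL;
    /* alignment must be a power of two and a multiple of sizeof(void *) */
    int rc = posix_memalign(&p, 16, 1024);
    if (rc == 0)
        free(p);
    return rc;
}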
Took a look at the changes and that looks like it should work. It's certainly
not in 1.3.3, but as long as you guys are on top of it, that relieves my
concerns ;-)
Thanks,
PeterT
Samuel K. Gutierrez wrote:
Ticket created (#2040). I hope it's okay ;-).
--
Samuel K. Gutierrez
Los Alamos Nat
A simple malloc() returns pointers that are at least eight-byte aligned
anyway; I'm not sure what the reason for calling memalign() with a value
of four would be.
Ashley,
On Thu, 2009-10-01 at 20:19 +0200, Åke Sandgren wrote:
> No it didn't. And memalign is obsolete according to the manp
Are there tuning parameters that I can use to reduce the amount of memory
used by Open MPI? I would very much like to use Open MPI instead of MVAPICH,
but I'm on a cluster where memory usage is the most important consideration.
Here are three results which capture the problem:
With the "leav
On Thu, 2009-10-01 at 19:56 +0100, Ashley Pittman wrote:
> Simple malloc() returns pointers that are at least eight byte aligned
> anyway, I'm not sure what the reason for calling memalign() with a value
> of four would be anyway.
That is not necessarily true on all systems.
--
Ake Sandgren,
Good point. That particular call to memalign, however, is part of a
series of OMPI memory hook tests. The memory allocated by that
memalign call is promptly freed
(opal/mca/memory/ptmalloc2/opal_ptmalloc2_component.c, line 111). The
change is to silence TotalView's memory alignment error
The value of 4 might be invalid (though maybe on a 32-bit machine it would
be okay?), but it's enough to allow TotalView to continue on without raising
a memory event, so I'm okay with it ;-)
PeterT
Ashley Pittman wrote:
Simple malloc() returns pointers that are at least eight byte aligned
anywa
On Oct 1, 2009, at 2:19 PM, Åke Sandgren wrote:
No it didn't. And memalign is obsolete according to the manpage.
posix_memalign is the one to use.
This particular call is testing the memalign intercept in the ptmalloc
component during startup; we can't replace it with posix_memalign.
Hen
On Thu, 2009-10-01 at 15:19 -0400, Jeff Squyres wrote:
> On Oct 1, 2009, at 2:19 PM, Åke Sandgren wrote:
>
> > No it didn't. And memalign is obsolete according to the manpage.
> > posix_memalign is the one to use.
> >
>
>
> This particular call is testing the memalign intercept in the ptmalloc
Hi All,
I am trying to profile (get the call graph/call sequence of) Open MPI
communication routines using the GNU Profiler (gprof), since the
communication calls are implemented using macros and it is hard to
trace them statically. In order to do that, I compiled the Open MPI
source code with the following
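The compile flags are cut off above, but the usual gprof workflow for a
build like this looks something like the following (the prefix and program
names are assumptions):

$ ./configure CFLAGS="-g -pg" CXXFLAGS="-g -pg" --prefix=/opt/openmpi-gprof
$ make all install

Running the instrumented program then writes gmon.out, which gprof reads:

$ gprof ./a.out gmon.out > profile.txt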
On Oct 1, 2009, at 3:27 PM, Åke Sandgren wrote:
Yes, but perhaps you need to verify that posix_memalign is also
intercepted?
Er... right. Of course. :-)
https://svn.open-mpi.org/trac/ompi/changeset/22045
I commented on memalign being obsolete since there are a couple of uses of it
Aniruddha Marathe wrote:
I am trying to profile (get the call graph/call sequence of) Open MPI
communication routines using GNU Profiler (gprof) since the
communication calls are implemented using macros and it's harder to
trace them statically. In order to do that I compiled the OpenMPI
source
I am not sure if this is the right place to ask this question, but here it
goes.
Simplified, abstract version of the question:
I have 2 MPI processes, and I want one to send an occasional signal to the
other process. These signals will not happen at predictable times.
I want the other process sitti
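The question is cut off, but one common pattern for this kind of
unpredictable, occasional notification is to poll with MPI_Iprobe from the
receiving process's main loop (a sketch of one possible approach, not the
list's answer; TAG_SIGNAL and do_work() are illustrative):

#include <mpi.h>

#define TAG_SIGNAL 99

static void do_work(void) { /* regular computation goes here */ }

int main(int argc, char **argv)
{
    int flag, msg, done = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    while (!done) {
        /* non-blocking check: has the other rank signaled us? */
        MPI_Iprobe(MPI_ANY_SOURCE, TAG_SIGNAL, MPI_COMM_WORLD, &flag, &status);
        if (flag) {
            MPI_Recv(&msg, 1, MPI_INT, status.MPI_SOURCE, TAG_SIGNAL,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            done = 1;   /* handle the signal; here we just stop */
        }
        do_work();
    }
    MPI_Finalize();
    return 0;
}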