On 7/17/06 12:37 AM, "Mahesh Barve" wrote:
> Can anyone please enlighten us about what really
> happens in MPI_Init() in Open MPI?
This is quite a complicated question. :-)
> More specifically, I am interested in knowing:
> 1. The functions that need to be accomplished during
> MPI_Init()
> 2. What h
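(Aside, not from the original thread: a minimal sketch in C of the user-facing side of the call being asked about. Roughly speaking, all of Open MPI's startup work happens inside the single MPI_Init() call, before any communication is possible.)

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        /* Open MPI's startup work (opening and selecting components,
         * wiring the processes together) happens inside this call. */
        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }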
On 7/14/06 10:40 AM, "Michael Kluskens" wrote:
> I've looked through the documentation but I haven't found the
> discussion about what each BTL device is, for example, I have:
>
> MCA btl: self (MCA v1.0, API v1.0, Component v1.2)
This is the "loopback" Open MPI device. It is used exclusively
What version of Open MPI are you using?
Can you run your application through a memory-checking debugger such as
Valgrind to see if it gives any more information about where the original
problem occurs?
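(A sketch of running each MPI process under Valgrind; "your_app" and the process count are placeholders.)

    mpirun -np 2 valgrind --leak-check=full ./your_app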
On 7/17/06 10:14 PM, "Manal Helal" wrote:
> Hi
>
> after I finish execution, and all result
Hi George,
George Bosilca wrote:
> It is what I suspected. You can see that the envio array is smaller than
> it should be. It was created as an array of doubles of size t_max, when
> it should have been created as an array of doubles of size t_max *
> nprocs.
Ah, yes, I see (and even und
I think there are two questions here:
1. Running MPI applications on "slow" networks (e.g., 100 Mbps). This is
very much application-dependent. If your MPI app doesn't communicate with
other processes much, then it probably won't matter. If you have
latency/bandwidth-sensitive applications, the
It's doable, but the scaling will not be as good, because a network is a
network. If you are using just regular 100 Mbit, you will not scale
as far as really good 1 Gig Ethernet, but we are still talking about
TCP, which incurs a penalty over networks like InfiniBand and Myrinet.
TCP is the largest is
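(A minimal ping-pong sketch in C, not from the thread; the 1 MiB message size and repetition count are arbitrary. It gives a rough measure of the latency and bandwidth a given network actually delivers to an MPI application; run it with exactly two processes.)

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NBYTES (1 << 20)   /* 1 MiB per message */
    #define REPS   100

    int main(int argc, char **argv)
    {
        int rank, i;
        double t0, t1;
        char *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        buf = malloc(NBYTES);

        MPI_Barrier(MPI_COMM_WORLD);      /* synchronize before timing */
        t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            if (rank == 0) {              /* rank 0: send, wait for echo */
                MPI_Send(buf, NBYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, NBYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {       /* rank 1: echo everything back */
                MPI_Recv(buf, NBYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, NBYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("round trip: %g s, approx. bandwidth: %g MB/s\n",
                   (t1 - t0) / REPS,
                   2.0 * NBYTES * REPS / (t1 - t0) / 1.0e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }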
Hi,
Is the MPI paradigm applicable to a cluster of regular networked machines?
That is, does the cost of network I/O offset the benefits of parallelization?
My guess is that this really depends on the application itself; however,
I'm wondering if you guys know of any success stories which involve MPI
ru
On 7/20/06 2:06 AM, "esaifu" wrote:
> I have been using Open MPI for the last month, so I need some clarification
> regarding the following points.
> 1) What is the advantage of Open MPI over MPICH2 and LAM/MPI? I mean to say,
> is there any difference performance-wise?
Open MPI's TCP perfor
Could you re-send that? The attachment that I got was an excel spreadsheet
with the output from configure that did not show any errors -- it just
stopped in the middle of the check for "bool" in the C++ compiler.
Two notes:
1. One common mistake that people make is to use the "icc" compiler for
C++ code; Intel's C++ compiler is "icpc" ("icc" is the C compiler).
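(A sketch of the usual way to point configure at the Intel compilers; the install prefix is a placeholder.)

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        --prefix=/opt/openmpi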
On 7/20/06 12:04 AM, "Jeff Squyres" wrote:
>> Config #2: ./configure --disable-shared --enable-static --with-rsh=/usr/bin/ssh
>> Successful Build = NO
>> Error:
>> g++ -O3 -DNDEBUG -finline-functions -Wl,-u -Wl,_munmap -Wl,-multiply_defined -Wl,suppress -o ompi_info components.o ompi_info
It is what I suspected. You can see that the envio array is smaller than
it should be. It was created as an array of doubles of size t_max, when
it should have been created as an array of doubles of size t_max *
nprocs. If you look at how the recibe array is created, you can notice that
it'
Hi,
George Bosilca wrote:
> In the all-to-all collective, the send and receive buffers have to be able
> to contain all the information you try to send. In this particular case,
> as you initialize the envio variable to a double, I suppose it is defined
> as a double. If that's the case, then the error
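(A sketch of the sizing rule being described, keeping the envio, recibe, and t_max names from the thread; the surrounding function is hypothetical. Every rank contributes t_max doubles to every peer, so both buffers must hold t_max * nprocs elements.)

    #include <mpi.h>
    #include <stdlib.h>

    void exchange(int t_max)    /* hypothetical wrapper */
    {
        int i, nprocs;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* BOTH buffers are sized t_max * nprocs, not just t_max */
        double *envio  = malloc((size_t)t_max * nprocs * sizeof(double));
        double *recibe = malloc((size_t)t_max * nprocs * sizeof(double));

        /* the t_max entries destined for rank r start at envio[r * t_max] */
        for (i = 0; i < t_max * nprocs; i++)
            envio[i] = 0.0;

        MPI_Alltoall(envio,  t_max, MPI_DOUBLE,
                     recibe, t_max, MPI_DOUBLE, MPI_COMM_WORLD);

        free(envio);
        free(recibe);
    }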
Hi,
shen T.T. wrote:
> Do you have another compiler? Could you check the error and report it?
I don't use other Intel compilers at the moment, but I'm going to give
gfortran a try today.
Kind regards,
--
Frank Gruellich
HPC-Techniker
Tel.: +49 3722 528 42
Fax: +49 3722 528 15
E-Mail
Hi,
Graham E Fagg wrote:
> I am not sure which alltoall you're using in 1.1, so can you please run
> the ompi_info utility, which is normally built and put into the same
> directory as mpirun?
>
> i.e. host% ompi_info
>
> This provides lots of really useful info on everything before we dig
> deeper
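(Aside, not from the thread: ompi_info can also list per-component parameters, including the coll components that implement alltoall; the exact option spelling below is from memory and may differ by version.)

    ompi_info --param coll all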
Dear All,
I have been using Open MPI for the last month, so I need some clarification
regarding the following points.
1) What is the advantage of Open MPI over MPICH2 and LAM/MPI? I mean to say, is
there any difference performance-wise?
2) Is there any checkpointing mechanism in Open MPI li
Dear All,
I was able to compile Open MPI and create the wrapper compilers (like
mpicc, mpif77, etc.) on top of the GNU compilers. But when I tried it with the
Intel Fortran compiler (since I also need an f90 compiler), I met with a
configuration error (hence I didn't get the Makefile). I am herewith attaching t
I have the same error message: "forrtl: severe (174): SIGSEGV, segmentation
fault occurred". Whether I run my parallel code on a single node or on multiple
nodes, the error persists. I then tried three Intel compilers: 8.1.037, 9.0.032,
and 9.1.033, but the error still persists. But my code works correctly on Windows
On 7/18/06 7:33 PM, "Warner Yuen" wrote:
> USING GCC 4.0.1 (build 5341) with and without Intel Fortran (build
> 9.1.027):
What version of Open MPI were you working with? If it was a developer/SVN
checkout, what version of the GNU Auto tools were you using?
> Config #2: ./configure --disable-sh