High Performance Computing Center Stuttgart (HLRS)
> Nobelstrasse 19
> 70569 Stuttgart
>
> Tel: ++49(0)711-685-87203
> email: nietham...@hlrs.de
> http://www.hlrs.de/people/niethammer
>
>
>
> ----- Original Mail -----
> From: "Pradeep Jha"
> To: "Open MPI Users"
> Sent: F
> Regards
> >> Christoph
I am writing a parallel program in Fortran 77. I have the following problem:
1) I have N processors.
2) Each processor holds an array A of size S.
3) Using some function, on every processor (say rank X), I calculate
the values of two integers Y and Z, where Z
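(The snippet is cut off here. If the eventual goal is to bring the per-rank values of Y and Z together on one rank, a minimal sketch using mpi_gather could look like the one below; the placeholder computation, the bound of 64 ranks, and the buffer names are assumptions, not part of the original question.)

      program gatheryz
      implicit none
      include 'mpif.h'
      integer ierror, me, np, i
      integer y, z
      integer ally(64), allz(64)

      call mpi_init(ierror)
      call mpi_comm_rank(mpi_comm_world, me, ierror)
      call mpi_comm_size(mpi_comm_world, np, ierror)

c     placeholder computation; the real Y and Z would come from the
c     application's own function
      y = 2*me
      z = 3*me

c     collect one Y and one Z from every rank on rank 0
      call mpi_gather(y, 1, mpi_integer, ally, 1, mpi_integer,
     &     0, mpi_comm_world, ierror)
      call mpi_gather(z, 1, mpi_integer, allz, 1, mpi_integer,
     &     0, mpi_comm_world, ierror)

      if (me .eq. 0) then
         do i = 1, np
            write(*,*) 'rank', i-1, ': y =', ally(i), ', z =', allz(i)
         end do
      end if

      call mpi_finalize(ierror)
      end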
> The workaround is to source the following _after_ you source the Intel
> Compiler's compilervars.sh in your start-up scripts:
> . /var/mpi-selector/data/openmpi_...sh
>
> -Tom
>
> >
> > On Apr 1, 2013, at 5:12 AM, Pradeep Jha wrote:
> >
> > > /opt/intel/composer_xe_2013.1.1
Hello,
When I try to run a parallel code, which runs perfectly elsewhere, on a new
Linux machine using the following command:
--
mpirun -np 16 name_of_executable
--
I am getting the following error:
Oh! It works now. Thanks a lot, and sorry about my negligence.
2013/3/1 Ake Sandgren
> On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote:
> > Sorry for those mistakes. I addressed all three problems:
> > - I put "implicit none" at the top of the main program
            call mpi_recv(recv, 1, mpi_integer, sender, tag,
     &           mpi_comm_world, status, ierror)
         end do
      end if
      if ((me.ge.1).and.(me.lt.np)) then
         send(1) = me*12
         call mpi_send(send, 1, mpi_integer, 0, tag,
     &        mpi_comm_world, ierror)
      end if
      return
      end
2013/3/1 Jeff Squyres (jsquyres)
Is it possible to call MPI_send and MPI_recv inside a subroutine rather than
in the main program? I have written a minimal program for what I am trying
to do. It compiles fine, but it does not work: the program just hangs in
the "sendrecv" subroutine. Any ideas how I can do this?
main.f
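(For anyone reading along: yes, this is possible. A minimal self-contained sketch of such a program follows; the subroutine name sendrecv matches the post, and the payload me*12 and the tag are taken from the fragment further up this page, so treat the details as assumptions. The key points are that the subroutine includes mpif.h itself and declares the status array with size mpi_status_size.)

      program main
      implicit none
      include 'mpif.h'
      integer ierror, me, np
      call mpi_init(ierror)
      call mpi_comm_rank(mpi_comm_world, me, ierror)
      call mpi_comm_size(mpi_comm_world, np, ierror)
      call sendrecv(me, np)
      call mpi_finalize(ierror)
      end

      subroutine sendrecv(me, np)
      implicit none
      include 'mpif.h'
      integer me, np, i, tag, ierror
      integer status(mpi_status_size)
      integer send(1), recv(1)
      tag = 1
      if (me .eq. 0) then
c        rank 0 collects one message from every other rank
         do i = 1, np-1
            call mpi_recv(recv, 1, mpi_integer, i, tag,
     &           mpi_comm_world, status, ierror)
            write(*,*) 'data received from', recv(1)
         end do
      else
c        every other rank sends one integer to rank 0
         send(1) = me*12
         call mpi_send(send, 1, mpi_integer, 0, tag,
     &        mpi_comm_world, ierror)
      end if
      return
      end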
2013/2/21 Gus Correa
> two types are the same size,
> but I wonder if somehow the two type names are interchangeable
> in OpenMPI (I would guess they're not),
> although declared
>
Hello,
No, I didn't have to change that. They both work fine for me.
Pradeep
> >> ...implement it so, there is no guarantee the sends are
> >> performed in this order.
> >>
> >> It is better if you accept messages from all senders (MPI_ANY_SOURCE)
> >> instead of particular ranks and then check where the
> >> message came from by examining the status.
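(A sketch of the pattern described above, with assumed variable names: rank 0 posts its receives with mpi_any_source and reads the actual origin out of the status array.)

      subroutine recvany(np, tag)
      implicit none
      include 'mpif.h'
      integer np, tag, i, sender, ierror
      integer status(mpi_status_size)
      integer recv(1)
c     accept one message from each of the np-1 senders, in
c     whatever order they happen to arrive
      do i = 1, np-1
         call mpi_recv(recv, 1, mpi_integer, mpi_any_source,
     &        tag, mpi_comm_world, status, ierror)
c        the actual origin is recorded in the status array
         sender = status(mpi_source)
         write(*,*) 'data received from', sender
      end do
      return
      end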
I have attached a sample of the MPI program I am trying to write. When I
run this program using "mpirun -np 4 a.out", my output is:
Sender:1
Data received from1
Sender:2
Data received from1
Sender:2
And the run hangs there. I don't understand why.
does it just run on one core?
Generally, how is the workload divided among the cores of a computer? Does
every process that I start use a new core, or is the workload distributed
over all the available cores?
Thank you
2013/1/29 Jens Glaser
> Hi Pradeep,
>
> On Jan 28, 2013, at 11:16 PM
Hello,
I have a very basic question about MPI.
I have a computer with 8 processors (each with 8 cores). What is the
difference between running a program simply with "./program" and with
"mpirun -np 8 /path/to/program"? In the first case, does the program just
use one processor out of the 8? If I
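(One way to see the difference is a tiny program that reports its rank, a minimal sketch of which follows: started as "./program" it runs as a single process and prints "process 0 of 1"; started as "mpirun -np 8 ./program" it runs eight processes, which the scheduler will normally spread over separate cores.)

      program hello
      implicit none
      include 'mpif.h'
      integer ierror, me, np
      call mpi_init(ierror)
      call mpi_comm_rank(mpi_comm_world, me, ierror)
      call mpi_comm_size(mpi_comm_world, np, ierror)
      write(*,*) 'process', me, 'of', np
      call mpi_finalize(ierror)
      end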