The only MPI calls I am using are these (grepped from my code):
MPI_Abort(MPI_COMM_WORLD, 1);
MPI_Barrier(MPI_COMM_WORLD);
MPI_Bcast(&bufarray[0].hdr, sizeof(BD_CHDR), MPI_CHAR, 0, MPI_COMM_WORLD);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
MPI_Finalize();
MPI_I
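For reference, here is a minimal, hypothetical sketch (BD_CHDR and the header
contents are stand-ins, not the poster's actual code) of how that call pattern
normally fits together; note that every rank makes the MPI_Bcast call and all
of them name the same root:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

typedef struct { char tag[16]; int len; } BD_CHDR;   /* hypothetical stand-in */

int main(int argc, char **argv)
{
    int myid, numprocs;
    BD_CHDR hdr;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == 0) {                      /* root fills the header */
        strcpy(hdr.tag, "hello");
        hdr.len = numprocs;
    }

    /* every rank must make this call, and all must name the same root (0) */
    MPI_Bcast(&hdr, sizeof(BD_CHDR), MPI_CHAR, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    printf("rank %d of %d got tag '%s' len %d\n", myid, numprocs, hdr.tag, hdr.len);

    MPI_Finalize();
    return 0;
}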
Hi Damien,
I'll check this. Thanks for reporting it.
Shiqing
On 2010-8-8 10:16 PM, Damien wrote:
Hi all,
There's a code-generation bug in the CMake/Visual Studio build of rc 5
on VS 2008. A release build with static libs and F77 and F90 support
gives an error at line 91 in mpif-config.h.
Hi Damien,
It is the user's responsibility to make sure the CMake and VS build types
are consistent; you can't change this setting from CMake in a way that
updates it automatically in VS. That is simply the nature of using CMake.
Shiqing
On 2010-8-6 10:33 PM, Damien wrote:
Hi all,
There's a small
Thanks for reporting this Matthew. Fixed in r23576 (
https://svn.open-mpi.org/trac/ompi/changeset/23576)
Regards
--Nysal
On Fri, Aug 6, 2010 at 10:38 PM, Matthew Clark wrote:
> I was looking in my copy of openmpi-1.4.1 opal/asm/base/POWERPC32.asm
> and saw the following:
>
> START_FUNC(opal_sys_
On 8/8/2010 8:13 PM, Randolph Pullen wrote:
> Thanks, although “An intercommunicator cannot be used for collective
> communication.”, i.e., bcast calls.
Yes, it can. MPI-1 did not allow collective operations on
intercommunicators, but the MPI-2 specification introduced that notion.
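For anyone following along, here is a hedged sketch (not code from this
thread) of an MPI-2 intercommunicator broadcast. Run it with at least two
processes; in the sending group the actual root passes MPI_ROOT and the other
members pass MPI_PROC_NULL, while the receiving group passes the root's rank
in the remote group:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, color, value = 0;
    MPI_Comm local, inter;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    color = rank % 2;                              /* split into two groups */
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &local);
    /* local leader is local rank 0; the remote leader is world rank 1-color */
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, 1 - color, 99, &inter);

    if (color == 0) {
        int lrank;
        MPI_Comm_rank(local, &lrank);
        value = 42;
        /* sending group: the real root passes MPI_ROOT, the rest MPI_PROC_NULL */
        MPI_Bcast(&value, 1, MPI_INT, lrank == 0 ? MPI_ROOT : MPI_PROC_NULL, inter);
    } else {
        /* receiving group: pass the root's rank within the remote group */
        MPI_Bcast(&value, 1, MPI_INT, 0, inter);
        printf("world rank %d received %d over the intercommunicator\n", rank, value);
    }

    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}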
Thank
I did not take the time to fully understand your approach, so this
may sound like a dumb question: do you have an MPI_Bcast ROOT process in
every MPI_COMM_WORLD, and does every non-ROOT MPI_Bcast call correctly
identify the rank of ROOT in its MPI_COMM_WORLD?
An MPI_Bcast call when the
Sorry -
I missed the statement that it all works when you add sleeps. That probably
rules out any error in the way MPI_Bcast was used.
Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 F
In your first mail, you mentioned that you are testing the new knem support.
Can you try disabling knem and see if that fixes the problem? (i.e., run with
"--mca btl_sm_use_knem 0"). If it fixes the issue, that might mean we have a
knem-based bug.
On Aug 6, 2010, at 1:42 PM, John Hsu wrote:
Personally, I've been having trouble following the explanations of the
problem. Perhaps it'd be helpful if you gave us an example of how to
reproduce the problem, e.g., a short code sample and how you run it to
produce the problem. The shorter the example, the greater
the odds of reso
Hi,
Could someone have a look at these two different error messages? I'd like to
know why they were displayed and what they actually mean.
Thanks,
Eloi
On Monday 19 July 2010 16:38:57 Eloi Gaudry wrote:
> Hi,
>
> I've been working on a random segmentation fault that seems to occur
No idea what is going on here. No MPI call is implemented as a multicast - it
all flows over the MPI pt-2-pt system via one of the various algorithms.
Best guess I can offer is that there is a race condition in your program that
you are tripping when other procs that share the node change the ti
Hi,
I have to send some vectors from node to node, and the vectors are built
using a template. The datatypes used in the template will be long, int,
double, and char. How can I send those vectors, since I wouldn't know what
MPI datatype I have to specify in MPI_Send and MPI_Recv? Is there any way t
On Jul 28, 2010, at 12:21 PM, Åke Sandgren wrote:
> > Jeff: Is this correct?
>
> This is wrong, it should be 8, and alignment should be 8 even for Intel.
> And I also see exactly the same thing.
Good catch!
I just fixed this in https://svn.open-mpi.org/trac/ompi/changeset/23580 -- it
looks li
On Jul 28, 2010, at 5:07 PM, Gus Correa wrote:
> Still, the alignment under Intel may or may not be right.
> And this may or may not explain the errors that Hugo has got.
>
> FYI, the ompi_info from my OpenMPI 1.3.2 and 1.2.8
> report exactly the same as OpenMPI 1.4.2, namely
> Fort dbl prec size
Hi
I have integrated mpi4py with Open MPI 1.4.2 built with BLCR
0.8.2. When I run ompi-checkpoint on a program written using mpi4py, I
see that the program sometimes doesn't resume after a successful checkpoint
is created. This doesn't always happen, meaning the program resumes after a
successful
Hello Alexandru,
On Mon, Aug 9, 2010 at 6:05 PM, Alexandru Blidaru wrote:
> I have to send some vectors from node to node, and the vectors are built
> using a template. The datatypes used in the template will be long, int,
> double, and char. How may I send those vectors since I wouldn't know wha
Hello Riccardo,
I basically have to implement a 4D vector. An additional goal of my project
is to support char, int, float and double datatypes in the vector. I figured
that the only way to do this is through a template. Up to this point I was
only supporting doubles in my vector, and I was sendin
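One common way to handle this (a hedged sketch, not code from this thread;
the mpi_type helper and the send/recv wrappers are made-up names for
illustration) is to map each element type onto its MPI datatype with template
specializations, so one templated send/receive pair covers char, int, long,
float and double:

#include <mpi.h>
#include <vector>

// Hypothetical helper: one specialization per element type the
// templated vector may hold.
template <typename T> MPI_Datatype mpi_type();
template <> MPI_Datatype mpi_type<char>()   { return MPI_CHAR;   }
template <> MPI_Datatype mpi_type<int>()    { return MPI_INT;    }
template <> MPI_Datatype mpi_type<long>()   { return MPI_LONG;   }
template <> MPI_Datatype mpi_type<float>()  { return MPI_FLOAT;  }
template <> MPI_Datatype mpi_type<double>() { return MPI_DOUBLE; }

// Send the length first, then the payload, so the receiver can size its buffer.
template <typename T>
void send_vector(const std::vector<T>& v, int dest, int tag, MPI_Comm comm)
{
    int n = static_cast<int>(v.size());
    MPI_Send(&n, 1, MPI_INT, dest, tag, comm);
    MPI_Send(const_cast<T*>(&v[0]), n, mpi_type<T>(), dest, tag, comm);
}

template <typename T>
std::vector<T> recv_vector(int src, int tag, MPI_Comm comm)
{
    int n = 0;
    MPI_Status st;
    MPI_Recv(&n, 1, MPI_INT, src, tag, comm, &st);
    std::vector<T> v(n);
    MPI_Recv(&v[0], n, mpi_type<T>(), src, tag, comm, &st);
    return v;
}

A 4D vector of doubles would then go out with a plain
send_vector(v, dest, tag, MPI_COMM_WORLD) and come back with
recv_vector<double>(src, tag, MPI_COMM_WORLD).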
I have not tried to checkpoint an mpi4py application, so I cannot say for sure
if it works or not. You might be hitting something with the Python runtime
interacting in an odd way with either Open MPI or BLCR.
Can you attach a debugger and get a backtrace on a stuck checkpoint? That might
show
problem "fixed" by adding the --mca btl_sm_use_knem 0 option (with -npernode
11), so I proceeded to bump up -npernode to 12:
$ ../openmpi_devel/bin/mpirun -hostfile hostfiles/hostfile.wgsgX -npernode
12 --mca btl_sm_use_knem 0 ./bin/mpi_test
and the same error occurs,
(gdb) bt
#0 0x7fcca6a
Hi Alexandru,
you can read all about Boost.MPI at:
http://www.boost.org/doc/libs/1_43_0/doc/html/mpi.html
On Mon, Aug 9, 2010 at 10:27 PM, Alexandru Blidaru wrote:
> I basically have to implement a 4D vector. An additional goal of my project
> is to support char, int, float and double dataty
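For completeness, a hedged sketch of the Boost.MPI route (assuming Boost is
built with its MPI and Serialization libraries against Open MPI): a
std::vector<T> can be sent directly and the library handles the datatype or
serialization. Run with at least two ranks:

#include <boost/mpi.hpp>
#include <boost/serialization/vector.hpp>
#include <vector>
#include <iostream>
namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    std::vector<double> v(4);              // the 4D vector from the question
    if (world.rank() == 0) {
        v[0] = 1.0; v[1] = 2.0; v[2] = 3.0; v[3] = 4.0;
        world.send(1, 0, v);               // dest, tag, payload
    } else if (world.rank() == 1) {
        world.recv(0, 0, v);               // source, tag, payload
        std::cout << "got " << v.size() << " doubles" << std::endl;
    }
    return 0;
}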
I've opened a ticket about this -- if it's an actual problem, it's a 1.5
blocker:
https://svn.open-mpi.org/trac/ompi/ticket/2530
What version of knem and Linux are you using?
On Aug 9, 2010, at 4:50 PM, John Hsu wrote:
> problem "fixed" by adding the --mca btl_sm_use_knem 0 option (with
I've replied in the ticket.
https://svn.open-mpi.org/trac/ompi/ticket/2530#comment:2
thanks!
John
On Mon, Aug 9, 2010 at 2:42 PM, Jeff Squyres wrote:
> I've opened a ticket about this -- if it's an actual problem, it's a 1.5
> blocker:
>
>https://svn.open-mpi.org/trac/ompi/ticket/2530
>
> Wh
The install was completely vanilla - no extras, just a plain ./configure command
line (on FC10 x86_64 Linux).
Are you saying that all broadcast calls are actually implemented as serial
point-to-point calls?
--- On Tue, 10/8/10, Ralph Castain wrote:
From: Ralph Castain
Subject: Re: [OMPI users] MPI_B