Thanks, it turns out this was caused by an error earlier in the code;
resolved on Stack Overflow:
http://stackoverflow.com/questions/13290608/mpi-err-truncate-on-broadcast
On Fri, Nov 9, 2012 at 9:20 PM, Jeff Squyres wrote:
> Offhand, your code looks fine.
>
> Can you send a small, self-contained example?
Offhand, your code looks fine.
Can you send a small, self-contained example?
On Nov 8, 2012, at 9:42 AM, Lim Jiew Meng wrote:
> I have an int I intend to broadcast from root (rank==(FIELD=0)).
>
> int winner;
>
> if (rank == FIELD) {
>     winner = something;
> }
>
> MPI_Barrier(MPI_COMM_WORLD);
I have an int I intend to broadcast from root (rank==(FIELD=0)).
int winner;
if (rank == FIELD) {
    winner = something;
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Bcast(&winner, 1, MPI_INT, FIELD, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);
if (rank != FIELD) {
    cout << rank << " informed that winner
Sorry for the super-late reply. :-\
Yes, ERR_TRUNCATE means that the receiver didn't have a large enough buffer.
Have you tried upgrading to a newer version of Open MPI? 1.4.3 is the current
stable release (I have a very dim and not guaranteed to be correct recollection
that we fixed something in this area.)
Hi:
I'm running openmpi 1.2.8. I'm working on a project where one part
involves communicating an integer, representing the number of data
points I'm keeping track of, to all the processors. The line is simple:
MPI_Allreduce(&np,&geo_N,1,MPI_INT,MPI_MAX,MPI_COMM_WORLD);
where np and geo_N
Thanks for confirming. We'll try valgrind next :)
On Wed, Feb 24, 2010 at 6:35 PM, Jeff Squyres wrote:
> On Feb 24, 2010, at 8:17 PM, Brian Budge wrote:
>
>> We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after
>> enabling the RETURN error handler). I'm confused as to what might
>
On Feb 24, 2010, at 8:17 PM, Brian Budge wrote:
> We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after
> enabling the RETURN error handler). I'm confused as to what might
> cause this, as I was assuming that this generally resulted from a recv
> call being made requesting fewer bytes than were sent.
Hi all -
We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after
enabling the RETURN error handler). I'm confused as to what might
cause this, as I was assuming that this generally resulted from a recv
call being made requesting fewer bytes than were sent.
Can anyone shed some light o
On Oct 17, 2008, at 6:03 PM, Nick Collier wrote:
And under some conditions, I get the error:
[3] [belafonte.home:04938] *** An error occurred in MPI_Wait
[3] [belafonte.home:04938] *** on communicator MPI_COMM_WORLD
[3] [belafonte.home:04938] *** MPI_ERR_TRUNCATE: message truncated
[3] [belafon
Hi,
I'm getting an error I don't quite understand. The code:
MPI_Irecv(recv->data, recv->count, recv->datatype, recv->sender_id,
          recv->agent_type, MPI_COMM_WORLD, &recv->request);
...
recv = (AgentRequestRecv*) item->data;
MPI_Wait(&recv->request
On Sep 26, 2008, at 1:45 PM, Robert Kubrick wrote:
I'm not sure how I should interpret this message:
[local:17344] *** An error occurred in MPI_Testsome
[local:17344] *** on communicator MPI COMMUNICATOR 5 CREATE FROM 0
[local:17344] *** MPI_ERR_TRUNCATE: message truncated
[local:17344] *** MPI_ERRORS_ARE_FATAL (goodbye)
I'm not sure how I should interpret this message:
[local:17344] *** An error occurred in MPI_Testsome
[local:17344] *** on communicator MPI COMMUNICATOR 5 CREATE FROM 0
[local:17344] *** MPI_ERR_TRUNCATE: message truncated
[local:17344] *** MPI_ERRORS_ARE_FATAL (goodbye)
mpiexec noticed that job
limited number of hosts, it seems to
behave as expected. Thanks!! Tom
--- On Mon, 8/18/08, George Bosilca wrote:
From: George Bosilca
Subject: Re: [OMPI users] MPI_ERR_TRUNCATE with MPI_Recv without Infinipath
To: "Open MPI Users"
Cc: "Tom Riddle"
machines without . I guess I wonder what is the mechanism when in a wildcard mode.
--- On Sun, 8/17/08, George Bosilca wrote:
From: George Bosilca
Subject: Re: [OMPI users] MPI_ERR_TRUNCATE with MPI_Recv without Infinipath
To: rarebit...@yahoo.com, "Open MPI Users"
Date: Sunday
Things were working without issue until we went to the wildcard MPI_ANY_SOURCE
on our receives but only on machines without . I guess I wonder what is the
mechanism when in a wildcard mode.
--- On Sun, 8/17/08, George Bosilca wrote:
From: George Bosilca
Subject: Re: [OMPI users] MPI_ERR_TRUNCATE w
Tom,
I did the same modification as you on the osu_latency test, and the
resulting application runs to completion. I don't get any TRUNCATE
error messages. I'm using the latest version of Open MPI (1.4a1r19313).
There was a bug that might be related to your problem but our commit
log shows it wa
Hi,
A bit more info wrt the question below. I have run other releases of OpenMPI
and they seem to be fine. The reason I need to run the latest is because it
supports valgrind fully.
openmpi-1.2.4
openmpi-1.3ar18303
TIA, Tom
--- On Tue, 8/12/08, Tom Riddle wrote:
Hi,
I am getting a curious
Hi,
I am getting a curious error on a simple communications test. I have altered
the std mvapich osu_latency test to accept receives from any source and I get
the following error
[d013.sc.net:15455] *** An error occurred in MPI_Recv
[d013.sc.net:15455] *** on communicator MPI_COMM_WORLD
[d013.s