[OMPI users] Problem with the receive buffer size?

2010-09-04 Thread dbbmyx-franzxaver

Hi,

I have a problem with my MPI project.
I am trying to send some information from one process to another. To do this
I use the MPI_Issend, MPI_Iprobe, and MPI_Irecv operations.
For MPI_Issend and MPI_Irecv I use the MPI_CHAR datatype, because I send and
receive a serialized object as a std::string. This works in most cases, but when
I send a larger string it seems that only part of it arrives at the other process.
MPI_Iprobe reports the size of the message correctly, but the received string is
not as large as the sent one.
Is there a size limit on the buffers? Is it possible to change the buffer size?

Thank you for any information.
Greetings, Franz Xaver
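
For later readers of the archive, a minimal sketch of the usual pattern for receiving a message whose length is only known at runtime (the helper name receive_string and the fixed source rank/tag are assumptions, not taken from the mail): probe first, size the buffer from MPI_Get_count, then receive into it.

#include <mpi.h>
#include <string>

// Sketch: receive a variable-length MPI_CHAR message from rank 0, tag 0.
std::string receive_string(MPI_Comm comm)
{
    MPI_Status status;
    MPI_Probe(0, 0, comm, &status);            // block until a message is pending

    int length = 0;
    MPI_Get_count(&status, MPI_CHAR, &length); // actual number of chars sent

    std::string buffer(length, '\0');          // allocate exactly enough space
    MPI_Recv(&buffer[0], length, MPI_CHAR, 0, 0, comm, MPI_STATUS_IGNORE);
    return buffer;
}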




Re: [OMPI users] Problem with the receive buffer size?

2010-09-04 Thread dbbmyx-franzxaver



Thanks for your tip.
No, I did not use MPI_Wait, because when I use it, it waits forever. I wrote a
small example that shows this behaviour. Sorry for the ugly coding, but it
should show my problem.

When I set the countR variable to 800 it works, but when I use 1000 it waits
forever...






// The original mail's include lines lost their header names in the list
// archive; the headers below are the ones this program appears to need.
#include <mpi.h>

#include <iostream>
#include <sstream>
#include <string>
#include <deque>

#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/access.hpp>
#include <boost/serialization/deque.hpp>


#define countS  1
#define countR  1000

class expObj{
public:
friend class boost::serialization::access;

   char array[countS][countR];
   template<class Archive>
void serialize(Archive & ar, const unsigned int version){
ar & array;
}

expObj(){
for (int i = 0; i < countS; ++i) {
for (int j = 0; j < countR; ++j) {
array[i][j] = 'q';
}
}
}
};

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
  MPI_Request request_bilder_token_ro_multi;

std::deque<expObj> senden;
expObj bild1;
bild1.array[0][0] = 'a';
senden.push_back(bild1);
std::ostringstream archive_stream1;
boost::archive::text_oarchive archive(archive_stream1);

archive << senden;
std::string outbound_data_ = archive_stream1.str();
std::cout << "Send - Size of message: " <<
outbound_data_.size() << std::endl;

MPI_Issend(&outbound_data_[0],
static_cast<int>(outbound_data_.size()), MPI_CHAR, 1, 0,
MPI_COMM_WORLD, &request_bilder_token_ro_multi);
while(true){
1/1; // busy-wait; the Issend request is never completed with MPI_Wait/MPI_Test
}
  }
  else if (rank == 1) {
  MPI_Request req;
  MPI_Status stat;
  int flag = 0;
  int msglen = 1;

std::deque<expObj> receive;
expObj obj;

std::string serString;

do {
MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &stat);
} while (!flag);
MPI_Get_count(&stat, MPI_CHAR, &msglen);
// The following receive/wait lines appear to have been eaten by the list
// archive; reconstructed from the description in the mails:
serString.resize(msglen);
MPI_Irecv(&serString[0], msglen, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req);
MPI_Wait(&req, &stat); // with countR = 1000 this wait never returns
std::cout << "Received size: " << msglen << std::endl;
  }

  std::cout << "Rank 1 OK!" << std::endl;
  MPI_Finalize();
  return 0;
}
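
A note on the hang for later readers: MPI_Issend is a synchronous send, so its request only completes after the matching receive has started, and Open MPI typically only makes progress on it while the sender is inside an MPI call; the busy loop above never gives it that chance. Small messages can still appear to arrive because they fit the transport's eager limit, while larger ones fall back to a rendezvous handshake and block. A minimal corrected sketch under those assumptions, reusing the ranks and tag from the posted program (the buffer contents are placeholders):

#include <mpi.h>
#include <iostream>
#include <string>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        std::string outbound(100000, 'q');   // stands in for the serialized archive
        MPI_Request send_req;
        MPI_Issend(&outbound[0], static_cast<int>(outbound.size()), MPI_CHAR,
                   1, 0, MPI_COMM_WORLD, &send_req);
        // ... useful work could go here ...
        MPI_Wait(&send_req, MPI_STATUS_IGNORE);  // completes once rank 1 starts receiving
    } else if (rank == 1) {
        MPI_Status stat;
        int flag = 0;
        do {                                     // same polling loop as in the post
            MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &stat);
        } while (!flag);

        int msglen = 0;
        MPI_Get_count(&stat, MPI_CHAR, &msglen);
        std::string serString(msglen, '\0');     // buffer sized from the probe

        MPI_Request recv_req;
        MPI_Irecv(&serString[0], msglen, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &recv_req);
        MPI_Wait(&recv_req, MPI_STATUS_IGNORE);  // completes regardless of message size
        std::cout << "Received size: " << serString.size() << std::endl;
    }

    MPI_Finalize();
    return 0;
}

Run with mpirun -np 2; the key difference from the posted code is that the sender calls MPI_Wait on its request instead of spinning, which both completes the synchronous send and lets Open MPI make progress on the transfer.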




[OMPI users] MPI_Wait: waits forever when using Issend with larger data strings

2010-09-06 Thread dbbmyx-franzxaver
Hi,

first of all, sorry for starting a new thread. I already wrote a message about
"Problem with the receive buffer size?", but I had some problems with my
email provider and the mailing list. (I do not often use mailing lists.)

Here is the link to my old message:
http://www.open-mpi.org/community/lists/users/2010/09/14181.php


I wrote a short program that shows my problem. (The coding is very ugly,
sorry.) It sends a serialized object (a very simple one this time) as a string to
the other process.

The problem is that the wait operation never returns when I use more data.
If you set countR to 996 it waits forever; with 995 it works.
Can anyone help me?

Thanks!!!

(used library: boost_serialization)
// The original mail's include lines lost their header names in the list
// archive; the headers below are the ones this program appears to need.
#include <mpi.h>

#include <iostream>
#include <sstream>
#include <string>
#include <deque>

#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/access.hpp>
#include <boost/serialization/deque.hpp>



#define countS  1
#define countR  996

class expObj{
public:
friend class boost::serialization::access;

   char array[countS][countR];
   template<class Archive>
void serialize(Archive & ar, const unsigned int version){
ar & array;
}

expObj(){
for (int i = 0; i < countS; i++) {
for (int j = 0; j < countR; j++) {
array[i][j] = 'q';
}
}
}
};

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
  MPI_Request request_bilder_token_ro_multi;

std::deque<expObj> senden;
expObj bild1;
bild1.array[0][0] = 'a';
senden.push_back(bild1);
std::ostringstream archive_stream1;
boost::archive::text_oarchive archive(archive_stream1);

archive << senden;
std::string outbound_data_ = archive_stream1.str();
std::cout << "Send - Size of message: " << 
outbound_data_.size() << std::endl;

MPI_Issend(&outbound_data_[0],
static_cast<int>(outbound_data_.size()), MPI_CHAR, 1, 0,
MPI_COMM_WORLD, &request_bilder_token_ro_multi);
while(true){
1/1; // busy-wait; the Issend request is never completed with MPI_Wait/MPI_Test
}
  }



  else if (rank == 1) {
  MPI_Request req;
  MPI_Status stat;
  int flag = 0;
  int msglen = 1;

std::deque<expObj> receive;
expObj obj;


std::string serString;

do {
MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &stat);
} while (!flag);
MPI_Get_count(&stat, MPI_CHAR, &msglen);
std::cout <<"Received size: "<< msglen <"<< std::endl;
  }

  std::cout << "Rank 1 OK!" << std::endl;
  MPI_Finalize();
  return 0;
}
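
To close the loop on the serialization side described in the first mail, a minimal sketch of how the received text archive can be turned back into the std::deque of objects, assuming expObj is defined as in the program above and serString already holds the bytes produced by the sender's text_oarchive (the helper name deserialize is an assumption):

#include <deque>
#include <sstream>
#include <string>

#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/deque.hpp>

// Sketch: rebuild the deque from the received text archive; mirrors
// "archive << senden" on the sending side.
std::deque<expObj> deserialize(const std::string& serString)
{
    std::istringstream archive_stream(serString);
    boost::archive::text_iarchive archive(archive_stream);

    std::deque<expObj> receive;
    archive >> receive;
    return receive;
}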