I don't think it's really a fault -- it's just how we designed and implemented 
it.
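
For reference, here is a minimal, self-contained sketch of the accept/connect
pattern from the code quoted below, so it's clear where the spinning happens.
The "server"/"client" command-line handling and the manual port hand-off
(printing the port string and passing it to the client by hand) are my own
illustrative assumptions, not part of the original programs:

/* Sketch of the MPI_Comm_accept / MPI_Comm_connect pattern.
   Start the server first, then pass its printed port string
   to the client: ./a.out server   then   ./a.out client "<port>" */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm peer;

    MPI_Init(&argc, &argv);

    if (argc > 1 && 0 == strcmp(argv[1], "server")) {
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port: %s\n", port);
        /* Blocks here; the progress engine polls hard (one core
           at 100%) until a client connects */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &peer);
        MPI_Close_port(port);
    } else if (argc > 2 && 0 == strcmp(argv[1], "client")) {
        strncpy(port, argv[2], MPI_MAX_PORT_NAME - 1);
        port[MPI_MAX_PORT_NAME - 1] = '\0';
        /* Likewise spins at 100% until the connection is made */
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &peer);
    } else {
        MPI_Finalize();
        return 1;
    }

    MPI_Comm_disconnect(&peer);
    MPI_Finalize();
    return 0;
}

The 100% CPU you see is that progress loop: both calls sit in the progress
engine polling for the connection rather than sleeping in the kernel.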


On Sep 6, 2010, at 7:40 AM, lyb wrote:

> Thanks for your answer, but I tested with MPICH2 and it doesn't have this 
> fault. Why?
>> Date: Wed, 1 Sep 2010 20:14:44 -0600
>> From: Ralph Castain <r...@open-mpi.org>
>> Subject: Re: [OMPI users] MPI_Comm_accept and MPI_Comm_connect both use 100% one cpu core. Is it a bug?
>> To: Open MPI Users <us...@open-mpi.org>
>> 
>> It's not a bug - that is normal behavior. The processes are polling hard to 
>> establish the connections as quickly as possible.
>> 
>> 
>> On Sep 1, 2010, at 7:24 PM, lyb wrote:
>> 
>>> Hi, All,
>>> 
>>> I tested two sample applications on Windows 2003 Server, one using
>>> MPI_Comm_accept and the other using MPI_Comm_connect. When they reach
>>> MPI_Comm_accept or MPI_Comm_connect, the application uses 100% of one
>>> cpu core. Is it a bug or is something wrong?
>>> 
>>> I tested with three versions: Version 1.4 (stable), Version 1.5
>>> (prerelease), and trunk version 23706.
>>> 
>>> ...
>>> MPI_Open_port(MPI_INFO_NULL, port);
>>> MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
>>> ...
>>> 
>>> ...
>>> MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);
>>> ...
>>> 
>>> thanks a lot.
>>> 
>>> lyb
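
P.S. If the hard polling is a problem -- e.g., the waiting process shares a
node with other work -- you can try running in "degraded" progress mode,
something like:

    mpirun --mca mpi_yield_when_idle 1 ...

The process still polls, but it yields the processor between iterations so
other processes can run. (Parameter name from memory -- check
"ompi_info --param mpi all" on your install.)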


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

