I have a simple MPI program that sends data to processor rank 0. The
communication works well, but when I run the program on more than 2
processors (-np 4) the extra receivers waiting for data run at > 90%
CPU load. I understand MPI_Recv() is a blocking operation, but why
does it consume so much CPU?
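The archived snippet is truncated here. A common workaround for the
busy-wait (a minimal sketch, not code from this thread) is to post a
non-blocking MPI_Irecv() and sleep between completion tests, so a
waiting rank yields the CPU instead of spinning:

#include <mpi.h>
#include <stdio.h>
#include <time.h>

/* Illustrative only: rank 0 sends one float to every other rank;
   the receivers poll with MPI_Test() and nanosleep() between polls. */
int main(int argc, char* argv[])
{
  int rank, size, flag = 0;
  float data = 0.0f;
  MPI_Request req;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (rank == 0) {
    int i;
    for (i = 1; i < size; i++)
      MPI_Send(&data, 1, MPI_FLOAT, i, 0, MPI_COMM_WORLD);
  } else {
    MPI_Irecv(&data, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &req);
    while (!flag) {
      MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
      if (!flag) {
        struct timespec ts = {0, 1000000};  /* sleep 1 ms per poll */
        nanosleep(&ts, NULL);
      }
    }
    printf("Rank %d: message arrived\n", rank);
  }
  MPI_Finalize();
  return 0;
}

This trades a little message latency for a mostly idle wait.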
that made some sense :)
Best regards,
Torje
I would like to bind one of my MPI processes to a single core on my
iMac Intel Core Duo system. I'm using release 1.2.4 on Darwin 8.11.1.
It looks like processor affinity is not supported for this kind of
configuration:
$ ompi_info | grep affinity
MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.2.4)
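In the 1.2 series, processor affinity (paffinity) was only implemented
for operating systems that expose a binding API, such as Linux; there
is no paffinity component for Darwin, which is why only maffinity
shows up above. For comparison, on a supported platform binding was
enabled with an MCA parameter, e.g.:

$ mpirun --mca mpi_paffinity_alone 1 -np 2 ./myapp

(./myapp is a placeholder for your program.)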
Note: I'm running Tiger (Darwin 8.11.1). Things might have changed
with Leopard.
On Apr 23, 2008, at 5:30 PM, Jeff Squyres wrote:
...hasn't been a high priority.
On Apr 23, 2008, at 12:19 PM, Alberto Giannetti wrote:
Thanks Torje. I wonder what the benefit is of looping on the incoming
message-queue socket rather than using blocking system I/O calls like
read() or select().
On Apr 23, 2008, at 12:10 PM, Torje Henriksen wrote:
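For contrast with a polling loop, a blocking call such as select()
sleeps in the kernel until a descriptor becomes readable and consumes
essentially no CPU while waiting. A minimal sketch (generic POSIX, not
from this thread):

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Blocks until stdin is readable; the process stays off the CPU
   while it waits, unlike a user-space polling loop. */
int main(void)
{
  fd_set readfds;
  FD_ZERO(&readfds);
  FD_SET(STDIN_FILENO, &readfds);

  if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, NULL) > 0)
    printf("descriptor is readable\n");
  return 0;
}

MPI implementations often poll anyway, because blocking in the kernel
adds wakeup latency on the fast path.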
On Apr 24, 2008, at 6:56 AM, Ingo Josopait wrote:
I am using one of the nodes as a desktop computer. Therefore it is
most important for me that the MPI program is not so greedily
acquiring CPU time.
From a performance/usability standpoint, you could set interactive
applications to a higher priority.
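The 1.2 series also has a knob for exactly this situation: telling
idle ranks to yield the processor instead of spinning. A hedged
example (./myapp is a placeholder):

$ mpirun --mca mpi_yield_when_idle 1 -np 4 ./myapp

This degrades latency somewhat but keeps waiting ranks from pinning
the cores of a shared desktop node.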
I am looking to use MPI in a publisher/subscriber context. I haven't
found much relevant information online.
Basically I would need to deal with dynamic tag subscriptions from
independent components (connectors) and a number of other issues. I
can provide more details if there is an interest.
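One way to approximate topic-based publish/subscribe over plain
point-to-point MPI (a sketch under assumptions, not a design from this
thread) is to use the message tag as the topic and have each
subscriber receive only on the tags it subscribed to:

#include <mpi.h>
#include <stdio.h>

#define TOPIC_PRICES 100  /* hypothetical topic tag */

int main(int argc, char* argv[])
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (rank == 0) {                      /* publisher */
    double price = 42.0;
    int sub;
    for (sub = 1; sub < size; sub++)
      MPI_Send(&price, 1, MPI_DOUBLE, sub, TOPIC_PRICES, MPI_COMM_WORLD);
  } else {                              /* subscriber to TOPIC_PRICES */
    double price;
    MPI_Recv(&price, 1, MPI_DOUBLE, 0, TOPIC_PRICES, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    printf("Rank %d received price %.2f\n", rank, price);
  }
  MPI_Finalize();
  return 0;
}

Dynamic subscriptions would need a control channel (e.g. a reserved
tag) on which subscribers announce the topics they want.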
I want to connect two MPI programs through the MPI_Comm_connect/
MPI_Comm_accept API.
This is my server app:
int main(int argc, char* argv[])
{
  int rank, count;
  int i;
  float data[100];
  char myport[MPI_MAX_PORT_NAME];
  MPI_Status status;
  MPI_Comm intercomm;
  MPI_Init(&argc, &argv);
  /* The archived snippet ends here; a plausible continuation: */
  MPI_Open_port(MPI_INFO_NULL, myport);
  MPI_Comm_accept(myport, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &intercomm);
  MPI_Recv(data, 100, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG, intercomm, &status);
  MPI_Get_count(&status, MPI_FLOAT, &count);
  MPI_Comm_disconnect(&intercomm);
  MPI_Finalize();
  return 0;
}
You can launch both applications with the same mpirun, with MPMD
syntax. However this will have the adverse effect of having a larger
than expected MPI_COMM_WORLD.
Aurelien
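For the connecting side, which the archive does not show, a minimal
client sketch would pass the server's port name on the command line
(how the port string reaches the client, copied by hand or published
via MPI_Publish_name, is left open here):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
  float data[100] = {0};
  MPI_Comm intercomm;

  MPI_Init(&argc, &argv);
  if (argc < 2) {
    fprintf(stderr, "usage: client <port-name>\n");
    MPI_Abort(MPI_COMM_WORLD, 1);
  }
  /* Connect to the port the server opened with MPI_Open_port(). */
  MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &intercomm);
  MPI_Send(data, 100, MPI_FLOAT, 0, 0, intercomm);
  MPI_Comm_disconnect(&intercomm);
  MPI_Finalize();
  return 0;
}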
Check the FAQ section for processor affinity.
On Apr 25, 2008, at 2:27 PM, Roopesh Ojha wrote:
Hello,
As a newcomer to the world of Open MPI who has perused the FAQ and
searched the archives, I have a few questions about how to schedule
processes across a heterogeneous cluster where some process
I am getting an error using MPI_Lookup_name. The same program works
fine when using MPICH:
/usr/local/bin/mpiexec -np 2 ./client myfriend
Processor 0 (662, Sender) initialized
Processor 0 looking for service myfriend-0
Processor 1 (664, Sender) initialized
Processor 1 looking for service myfriend-1
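For reference, the name-service calls pair up as below. This is a
generic sketch, not the poster's program: rank 0 publishes a name and
accepts, rank 1 looks the name up and connects. Under Open MPI 1.2 the
lookup generally failed without a common name server (see the reply
further down about the 1.3 cleanup).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
  int rank;
  char port[MPI_MAX_PORT_NAME];
  MPI_Comm intercomm;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("myfriend", MPI_INFO_NULL, port);
  }
  MPI_Barrier(MPI_COMM_WORLD);  /* make sure the name is published */

  if (rank == 0) {
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    MPI_Unpublish_name("myfriend", MPI_INFO_NULL, port);
    MPI_Comm_disconnect(&intercomm);
  } else if (rank == 1) {
    MPI_Lookup_name("myfriend", MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    MPI_Comm_disconnect(&intercomm);
  }
  MPI_Finalize();
  return 0;
}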
Linwei, are you running the command as root?
Try using sudo:
$ sudo make install
It will ask you for an administrator password.
On Apr 29, 2008, at 3:54 PM, Linwei Wang wrote:
Dear all,
I'm new to MPI... I'm trying to install Open MPI on my Mac (Leopard).
But during the installation (with the
In message http://www.open-mpi.org/community/lists/users/2007/03/2889.php
I found this comment:
"The only way to get any benefit from the MPI_Bsend is to have a
progress thread which takes care of the pending communications in the
background. Such a thread is not enabled by default in Open MPI."
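For context, MPI_Bsend copies the outgoing message into a
user-attached buffer and returns immediately; when the copy actually
goes out then depends on the library making progress. A minimal usage
sketch (generic MPI, run with -np 2; not from this thread):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
  int rank;
  double payload = 3.14;
  int bufsize = MPI_BSEND_OVERHEAD + sizeof(double);
  void* buf = malloc(bufsize);

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Buffer_attach(buf, bufsize);

  if (rank == 0)
    /* Returns once the message is copied into the attached buffer;
       transmission happens whenever the library next progresses. */
    MPI_Bsend(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
  else if (rank == 1)
    MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

  MPI_Buffer_detach(&buf, &bufsize);
  free(buf);
  MPI_Finalize();
  return 0;
}

Without a progress thread, the buffered send is drained only when the
sending process next enters the MPI library.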
> This is cleaned up in the upcoming 1.3 release and should work much
> smoother.
>
> Ralph
>
> On 4/27/08 6:58 PM, "Alberto Giannetti" wrote:
Is MPE part of OMPI? I can't find any reference in the FAQ.
I need to log application-level messages on disk to trace my program
activity. For better performance, one solution is to dedicate one
processor to the actual I/O logging, while the other working
processors would trace their activity through non-blocking string
message sends:
/* LOGGER
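The code snippet is cut off in the archive. A minimal sketch of the
design as described (rank 0 as the dedicated logger; the tag values,
file name, and shutdown protocol are assumptions):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define TAG_LOG  1  /* hypothetical: a log line follows */
#define TAG_DONE 2  /* hypothetical: worker is finished */

int main(int argc, char* argv[])
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (rank == 0) {                /* dedicated logger rank */
    FILE* f = fopen("trace.log", "w");
    char line[256];
    int finished = 0;
    MPI_Status st;
    while (finished < size - 1) {
      MPI_Recv(line, sizeof(line), MPI_CHAR, MPI_ANY_SOURCE,
               MPI_ANY_TAG, MPI_COMM_WORLD, &st);
      if (st.MPI_TAG == TAG_DONE)
        finished++;
      else
        fprintf(f, "[rank %d] %s\n", st.MPI_SOURCE, line);
    }
    fclose(f);
  } else {                        /* worker: non-blocking log send */
    char msg[256];
    MPI_Request req;
    snprintf(msg, sizeof(msg), "step complete");
    MPI_Isend(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, TAG_LOG,
              MPI_COMM_WORLD, &req);
    /* ... overlap useful computation here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* msg reusable after this */
    MPI_Send(msg, 1, MPI_CHAR, 0, TAG_DONE, MPI_COMM_WORLD);
  }
  MPI_Finalize();
  return 0;
}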
On May 9, 2008, at 8:03 AM, Jeff Squyres wrote:
On May 9, 2008, at 12:58 AM, Mukesh K Srivastava wrote:
What are the tentative release dates for OpenMPI-v1.3.1?
Please don't CC both mailing lists on future replies to this thread;
one or the other would be fine; thanks!
Brad Benton and Geo