On Monday 08 December 2008 02:44:42 pm George Bosilca wrote:
> Barry,
>
> If you set the eager size large enough, the isend will not return
> until the data is pushed into the network layer.
That's exactly what I want it to do -- good. I've set the eager limit to
2MB, but for messages of 64k and up,
>
> Hello Tim,
>
> I'm using OpenMPI 1.2.8 on Linux Ubuntu 8.04 - kernel 2.4.26
> I hope this helps you
>
> Heitor Florido
>
Oops...
It's kernel 2.6.24, not 2.4.26...
sorry
Heitor
Barry,
If you set the eager size large enough, the isend will not return
until the data is pushed into the network layer. However, this doesn't
guarantee that the data is delivered to the peer, but only that it was
queued in the network (in the TCP case it is copied somewhere in the
kernel).
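[Editor's note: as a concrete example of raising that limit (the value and program name below are illustrative; the available parameters can be listed with "ompi_info --param btl tcp"), the TCP eager limit can be set on the command line:

mpirun -mca btl_tcp_eager_limit 2097152 -np 2 ./a.out

This asks Open MPI to send messages up to ~2MB eagerly rather than via rendezvous.]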
Hello Tim,
I'm using OpenMPI 1.2.8 on Linux Ubuntu 8.04 - kernel 2.4.26
I hope this helps you
Heitor Florido
douglas.gupt...@dal.ca wrote:
Proceeding from that, it seems that "mpi_recv" is implemented as
"poll forever until the message comes"
and NOT as
"sleep until the message comes"
I had assumed, until now, that mpi_recv would be implemented as the
second.
It isn't a binary situation
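[Editor's note: a common workaround, if one really wants the "sleep until the message comes" behavior, is to poll with MPI_Iprobe and nap between polls. A minimal sketch, assuming a 1 ms nap is acceptable (the function name and timing are illustrative, not from this thread):

#include <mpi.h>
#include <unistd.h>

/* Poll with MPI_Iprobe and nap between polls so the waiting
   process sleeps instead of spinning at 100% CPU. */
static void sleepy_recv(void *buf, int count, MPI_Datatype type,
                        int src, int tag, MPI_Comm comm)
{
    int flag = 0;
    MPI_Status st;
    while (!flag) {
        MPI_Iprobe(src, tag, comm, &flag, &st);
        if (!flag)
            usleep(1000);  /* 1 ms; trades latency for an idle CPU */
    }
    MPI_Recv(buf, count, type, src, tag, comm, &st);
}

The price is added receive latency, which is exactly the trade-off MPI implementations avoid by busy-polling.]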
Hi Eugen,
thanks for your answer ... I am beginning to understand - even though I
am not happy with it :)
Greetings
Jens
Eugene Loh schrieb:
> Jens wrote:
>
>> Hi Terry,
>>
>> I would like to run a paraview-server all the time on our cluster (even
>> though it is not in use 24h) - but this would simply result in some kind
>> of "heating-thread".
On Mon, Dec 08, 2008 at 08:56:59PM +1100, Terry Frankcombe wrote:
> As Eugene said: Why are you desperate for an idle CPU?
So I can run another job. :-)
Douglas.
--
Douglas Guptill
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department
Hello Eugene:
On Sun, Dec 07, 2008 at 11:15:21PM -0800, Eugene Loh wrote:
> Douglas Guptill wrote:
>
> >Hi:
> >
> >I am using openmpi-1.2.8 to run a 2 processor job on an Intel
> >Quad-core cpu. Opsys is Debian etch. I am reasonably sure that, most
> >of the time, one process is waiting for results from the other.
Hello Heitor,
We need more information to be able to answer your question,
such as what version of Open MPI you are using, what kind of
OS/machine you are running on, and what kind of network, etc.
Please follow the directions on this webpage for getting help:
http://www.open-mpi.org/community/
Jens wrote:
Hi Terry,
I would like to run a paraview-server all the time on our cluster (even
though it is not in use 24h) - but this would simply result in some kind
of "heating-thread".
Even though it has (in theory) no impact on the node performance (which
is part of a grid-engine), it would simply
Original message
>Date: Mon, 8 Dec 2008 11:47:19 -0500
>From: George Bosilca
>Subject: Re: [OMPI users] How to force eager behavior during Isend?
>To: Open MPI Users
>
>Barry,
>
>These values are used deep inside the Open MPI library, in order to
>define how we handle the messages internally.
Looks like the same source tree was used and then cleaned (distclean), so I
don't have config logs for gcc or pgi. Also, I can't find
opal_config.h in either the configured/built source or the installed
location, 1.2.8+pgi. This library was found to run an executable built
with 1.2.6+gcc.
Sorry I
Barry,
These values are used deep inside the Open MPI library, in order to
define how we handle the messages internally. From a user perspective
you will not see much difference. Moreover, checking the number of
completed requests returned by MPI_Wait* is definitely not the most
accurate
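[Editor's note: one crude user-level check, offered as a heuristic sketch only (buf, n, and peer are placeholders, and eager behavior is implementation-dependent), is to test the request immediately after MPI_Isend; a request that completes at once was almost certainly sent eagerly:

MPI_Request req;
int done = 0;
MPI_Isend(buf, n, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &req);
MPI_Test(&req, &done, MPI_STATUS_IGNORE);
/* done != 0 suggests the message left eagerly; a rendezvous
   send normally stays pending until the receiver shows up. */
if (!done)
    MPI_Wait(&req, MPI_STATUS_IGNORE);]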
Black magic happens all the time. To keep it simple, we do not expect
different compilers to be 100% compatible, so this is completely
unsupported by the Open MPI community. Moreover, we already know some
compilers that claim gcc compatibility, when there are always some
[obscure] things that
you forgot the "mpirun"
mpirun -mca btl_openib_warn_default_gid_prefix 0
jody
On Mon, Dec 8, 2008 at 4:00 PM, Yasmine Yacoub wrote:
> Thank you for your response, but still my problem remains, I have used this
> command:
>
> -mca btl_openib_warn_default_gid_prefix 0
>
> and I have got this message:
Hello,
My program keeps throwing this error after I created a child process with
MPI_Comm_spawn:
*./../../Desktop/computacaoDistribuida/src/server/server: symbol lookup
error: ./../../Desktop/computacaoDistribuida/src/server/server: undefined
symbol: MPI_Send*
I've already used MPI_Send on other
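[Editor's note: an undefined MPI_Send symbol at run time usually means the spawned binary was linked without the MPI library. A quick check, with illustrative file names, is to rebuild with the Open MPI wrapper compiler and inspect the dependencies:

mpicc -o server server.c
ldd ./server | grep -i mpi

If libmpi does not appear in the ldd output, the spawned executable will fail with exactly this symbol-lookup error.]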
I'm sorry for not communicating this accurately enough - you need to
add that line to your mpirun command. In other words, you need to
start your job with:
mpirun -mca btl_openib_warn_default_gid_prefix 0 .
Ralph
On Dec 8, 2008, at 8:00 AM, Yasmine Yacoub wrote:
Thank you for your response
Thank you for your response, but still my problem remains, I have used this
command:
-mca btl_openib_warn_default_gid_prefix 0
and I have got this message:
-bash: -mca: command not found
There are multiple ways to set MCA parameters - you can check out the
FAQ to see all of them:
http://www.open-mpi.org/faq/?category=tuning#setting-mca-params
In your immediate case, just add -mca
btl_openib_warn_default_gid_prefix 0 to your cmd line.
Ralph
On Dec 8, 2008, at 1:49 AM, Yasmine
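[Editor's note: for reference, the same parameter can also be set, per that FAQ, through the environment or a file:

export OMPI_MCA_btl_openib_warn_default_gid_prefix=0

or the line "btl_openib_warn_default_gid_prefix = 0" in $HOME/.openmpi/mca-params.conf.]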
I notice that bug ticket #954
https://svn.open-mpi.org/trac/ompi/ticket/954 has the very issue I'm
encountering: I want to know when mpirun fails because of a missing
executable during some automated tests.
At the moment, nearly two years after the bug was reported, orterun/mpirun
still returns 0
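[Editor's note: until that ticket is fixed, one workaround for automated tests (a sketch; the file name is illustrative) is to capture mpirun's stderr and treat any output there as failure:

mpirun -np 4 ./my_test 2> mpirun.err

then have the test harness fail whenever mpirun.err is non-empty.]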
Hi Terry,
I would like to run a paraview-server all the time on our cluster (even
though it is not in use 24h) - but this would simply result in some kind
of "heating-thread".
Even though it has (in theory) no impact on the node performance (which
is part of a grid-engine), it would simply result in s
As Eugene said: Why are you desperate for an idle CPU? Is it not
yielding to other processes?
On Mon, 2008-12-08 at 10:01 +0100, Jens wrote:
> Hi Douglas,
>
> this is an answer to my question on the paraview mailing list.
>
> I have the same problem with paraview, that it simply waits for more to
Hello,
I'm trying to find a set of mca parameters that will (among other things)
force all but the largest messages to be transmitted eagerly during an
MPI_Isend call rather than during the accompanying MPI_Wait. I thought
increasing the btl_tcp_eager_limit and other buffer sizes would accomplish
Hi Douglas,
this is an answer to my question on the paraview mailing list.
I have the same problem with paraview: it simply waits for more to
do in client-server (MPI) mode, but is running at 100% CPU.
Different MPI implementations seem to behave differently here. Using
MPICH2, for example, does not res
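[Editor's note: Open MPI has an MCA parameter that makes blocked processes yield the CPU while they poll; this is a suggestion worth trying, not a guarantee, and the server name below is illustrative:

mpirun -mca mpi_yield_when_idle 1 -np 2 ./pvserver

The process still polls, so a monitor may still show activity, but it yields to other runnable jobs instead of monopolizing the core.]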
Good morning,
I explained my problem last time and still haven't received any response.
OK, my problem is that after installing pwscf and running one scf example, I
got the output but with this warning message:
WARNING: There are more than one active ports on host 'stallo-2.local', but the
default
Douglas Guptill wrote:
Hi:
I am using openmpi-1.2.8 to run a 2 processor job on an Intel
Quad-core cpu. Opsys is Debian etch. I am reasonably sure that, most
of the time, one process is waiting for results from the other. The
code is fortran 90, and uses mpi_send and mpi_recv. Yet
"gnome-sys