Hi Jeff,
I think setting global limits will not help in this case as the limits
like stacksize need to be program specific.
So far I am using wrappers, however the solution is a bit nasty.
If there is another way it would be great.
However I doubt that there is a way, as the FAQ states:
Hello everybody:
I had the same problem described at thread
http://www.open-mpi.org/community/lists/users/2008/05/5601.php which I
solved by setting the btl_openib_free_list_max MCA parameter to 2048, but I
have some doubts and related problems that I would like to comment on:
1) Is this a problem which o
If you need per-job settings, then a wrapper is probably your best bet.
On Sep 10, 2008, at 5:08 AM, Samuel Sarholz wrote:
Hi Jeff,
I think setting global limits will not help in this case as the
limits like stacksize need to be program specific.
So far I am using wrappers, however the solution is a bit nasty.
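Not from the thread, but one possible shape of such a per-job wrapper, sketched in C under the assumption that the limit in question is the stack size: the wrapper raises the soft RLIMIT_STACK to the hard limit and then execs the real binary, so only that job sees the changed limit ("/path/to/real_app" is a placeholder).

  /* Hypothetical wrapper sketch: raise RLIMIT_STACK for this job only,
   * then replace the wrapper process with the real MPI program.
   * "/path/to/real_app" is a placeholder, not a name from the thread. */
  #include <stdio.h>
  #include <sys/resource.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      struct rlimit rl;
      (void)argc;

      if (getrlimit(RLIMIT_STACK, &rl) == 0) {
          rl.rlim_cur = rl.rlim_max;       /* raise soft limit to the hard limit */
          if (setrlimit(RLIMIT_STACK, &rl) != 0)
              perror("setrlimit");
      }

      /* The exec'd image inherits the adjusted limit; other jobs are unaffected. */
      execv("/path/to/real_app", argv);
      perror("execv");                     /* reached only if exec fails */
      return 1;
  }

mpirun would then be pointed at the wrapper instead of the application itself.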
Dear all,
First some background, the real question is at the end of this (longish)
mail.
I have a problem where I need to exchange data between all processes. The
data is unevenly distributed and I thought at first I could use
MPI_Alltoallv to transfer the data. However, in my case, the receivers do not know in advance how much data they will get from each sender.
I am trying to connect a client MPI app to a server with
MPI_Comm_connect. I get this error:
$ mpiexec -n 1 client 0.1.0:2000
Processor 0 (1193, Sender) initialized
Processor 0 connecting to '0.1.0:2000'
[local:01193] *** Process received signal ***
[local:01193] Signal: Bus error (10)
[local:0
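For reference, a minimal sketch of the client side of MPI_Comm_connect (an assumed shape of such a program, not the poster's actual code). The string passed on the command line normally has to be the port name the server obtained from MPI_Open_port, or a name looked up with MPI_Lookup_name, copied verbatim; an arbitrary host:port string will not work.

  /* Minimal MPI_Comm_connect client sketch; names and structure are
   * assumptions for illustration, not the code from the report. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Comm server;

      MPI_Init(&argc, &argv);
      if (argc < 2) {
          fprintf(stderr, "usage: client <port-name>\n");
          MPI_Abort(MPI_COMM_WORLD, 1);
      }

      /* argv[1] should be the string returned by MPI_Open_port on the
       * server (or published via MPI_Publish_name). */
      MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);
      printf("connected to server\n");

      MPI_Comm_disconnect(&server);
      MPI_Finalize();
      return 0;
  }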
Jeff and Samuel,
Thanks for your responses.
-Hamid
Jeff Squyres wrote:
If you need per-job settings, then a wrapper is probably your best bet.
On Sep 10, 2008, at 5:08 AM, Samuel Sarholz wrote:
Hi Jeff,
I think setting global limits will not help in this case as the
limits like stacksize need to be program specific.
Daniel,
Your understanding of the MPI standard requirement with regard to
MPI_Alltoallv is now 100% accurate. The send count and datatype should
match what the receiver expects. You can always use an MPI_Alltoall
before the MPI_Alltoallv to exchange the lengths that you expect.
george.
O
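To make George's suggestion concrete, here is a minimal sketch (the buffer names and the MPI_DOUBLE payload are assumptions, not from the thread): one MPI_Alltoall distributes the per-peer element counts, the receive displacements are built from those counts, and the MPI_Alltoallv that follows then has matching counts and types on both sides.

  /* Sketch of the two-step exchange: Alltoall for the lengths,
   * then Alltoallv for the data. */
  #include <mpi.h>
  #include <stdlib.h>

  void exchange(const double *sendbuf, int *sendcounts, int *sdispls,
                MPI_Comm comm)
  {
      int nprocs;
      MPI_Comm_size(comm, &nprocs);

      /* Step 1: tell every peer how many elements it will receive from us. */
      int *recvcounts = malloc(nprocs * sizeof(int));
      MPI_Alltoall(sendcounts, 1, MPI_INT, recvcounts, 1, MPI_INT, comm);

      /* Step 2: build receive displacements from the counts just learned. */
      int *rdispls = malloc(nprocs * sizeof(int));
      int total = 0;
      for (int i = 0; i < nprocs; i++) {
          rdispls[i] = total;
          total += recvcounts[i];
      }
      double *recvbuf = malloc((total > 0 ? total : 1) * sizeof(double));

      /* Step 3: sender and receiver now agree on counts and datatypes,
       * as the standard requires. */
      MPI_Alltoallv((void *)sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                    recvbuf, recvcounts, rdispls, MPI_DOUBLE, comm);

      /* ... consume recvbuf ... */
      free(recvbuf);
      free(rdispls);
      free(recvcounts);
  }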
George, thanks for the quick answer!
I thought about using an alltoall before the alltoallv, but it "feels" like
this might end up slow: with two all-to-all operations the latency is at
least doubled. It might still be faster than a large bunch of sendrecvs,
of course. I'll simply have to do some short tests,
When I use mpiexec from MPICH2 1.0.5p4, there is an option called -l.
Using this option with mpiexec prepends each line of the standard
output with the rank of the process. I am a big fan of this feature
and it helps in debugging. Is there any such simple trick to prepend
the standard output with the rank of the process in Open MPI?
This feature is tentatively scheduled on our to-do list, but it won't
be included in the upcoming v1.3 series.
FWIW, the v1.0 series is ancient -- the v1.2 series has had many many
fixes and improvements since then. If the v1.0 series is working for
you, ok, but just be aware that the v1.2
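Until mpirun grows such an option, one application-side workaround (a sketch, not an Open MPI feature) is to prefix the program's own output with its rank, in the spirit of mpiexec -l:

  /* Workaround sketch: prefix each message with the rank, similar to
   * what mpiexec -l does in MPICH2. Application code, not an mpirun flag. */
  #include <mpi.h>
  #include <stdarg.h>
  #include <stdio.h>

  static int my_rank = -1;

  static void rank_printf(const char *fmt, ...)
  {
      va_list ap;
      printf("[%d] ", my_rank);   /* the rank prefix */
      va_start(ap, fmt);
      vprintf(fmt, ap);
      va_end(ap);
  }

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

      rank_printf("hello from this process\n");

      MPI_Finalize();
      return 0;
  }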
Hi,
I have upgraded my openMPI to 1.2.6 (We have gentoo and emerge showed
1.2.6-r1 to be the latest stable version of openMPI).
I do still get the following error message when running my test helloWorld
program:
[10.12.77.21][0,1,95][btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect
Prasanna Ranganathan wrote:
Hi,
I have upgraded my openMPI to 1.2.6 (We have gentoo and emerge showed
1.2.6-r1 to be the latest stable version of openMPI).
Prasanna, do a sync (1.2.7 is in portage) and report back.
Eric
I do still get the following error message when running my test helloWorld program:
Prasanna, also make sure you try with USE=-threads ...as the ebuild
states, it's _experimental_ ;)
Keep your eye on:
https://svn.open-mpi.org/trac/ompi/wiki/ThreadSafetySupport
Eric
Prasanna Ranganathan wrote:
Hi,
I have upgraded my openMPI to 1.2.6 (We have gentoo and emerge showed
1.2.6-r1 to be the latest stable version of openMPI).
Hi Eric,
Thanks a lot for the reply.
I am currently working on upgrading to 1.2.7
I do not quite follow your directions; what do you refer to when you say
"try with USE=-threads..."?
Kindly excuse if it is a silly question and pardon my ignorance :D
Regards,
Prasanna.
I compiled the Open MPI source with openib support. However, the
Infiniband part is still not working right (I had to build it from
source since I'm using Ubuntu, and it's a mess).
If I execute 'mpirun', I assume it will automatically look to
communicate using Infiniband. However, since Inf
By default it will fall back to TCP.
You will get a bunch of messages about not finding any HCAs when you
are built with OpenIB and it's not working.
You can also always force it to not look for openib:
mpirun --mca btl ^openib app
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
Hi,
I have upgraded to 1.2.7 and am still noticing the issue.
Kindly help.
>
> Message: 1
> Date: Mon, 8 Sep 2008 16:43:33 -0400
> From: Jeff Squyres
> Subject: Re: [OMPI users] Need help resolving No route to host error
> with OpenMPI 1.1.2
> To: Open MPI Users
> Message-ID:
> Content-
Prasanna Ranganathan wrote:
Hi Eric,
Thanks a lot for the reply.
I am currently working on upgrading to 1.2.7
I do not quite follow your directions; what do you refer to when you say
"try with USE=-threads..."?
I am referring to the USE variable which is used to set global package
speci