On Mar 11, 2006, at 1:00 PM, Jayabrata Chakrabarty wrote:
Hi, I have been looking for information on how to use multiple
Gigabit Ethernet interfaces for MPI communication.
So far what I have found out is that I have to use mca_btl_tcp.
But what I wish to know is what IP address to assign to each
On Mar 13, 2006, at 8:38 AM, Michael Kluskens wrote:
On Mar 11, 2006, at 1:00 PM, Jayabrata Chakrabarty wrote:
Hi, I have been looking for information on how to use multiple
Gigabit Ethernet interfaces for MPI communication.
So far what I have found out is that I have to use mca_btl_tcp.
But what I wish to know is what IP address to assign to each
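As far as the addressing goes, one common setup for the tcp BTL is to give each NIC its own ordinary IP address (typically on separate subnets) and then tell Open MPI which interfaces to use. A rough sketch only; the interface names eth2/eth3 and the application name are placeholders, not from this thread:

    # use only the tcp and self BTLs, restricted to two specific NICs
    mpirun -np 4 --mca btl tcp,self \
           --mca btl_tcp_if_include eth2,eth3 ./my_mpi_app

Open MPI should then use all of the listed interfaces for MPI traffic.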
Hi Ravi -
With the help of another Open MPI user, I spent the weekend finding a
couple of issues with Open MPI on Solaris. I believe you are running
into the same problems. We're in the process of certifying the
changes for release as part of 1.0.2, but it's Monday morning and the
relea
It looks like we never added similar logic to the Open IB transport.
I'll pass your request on to the developer of our Open IB transport.
Given our timeframe for releasing Open MPI 1.0.2, it's doubtful any
change will make that release. But it should definitely be possible
to add such functionality.
On Mar 9, 2006, at 12:18 PM, Pierre Valiron wrote:
- Configure and compile are okay
Good to hear.
- However, compiling mpi.f90 takes over 35 *minutes* with -O1.
This seems a bit excessive... I tried removing any -O option and
things are just as slow. Is this behaviour related to open
On Mon, 13 Mar 2006 07:37:10 -0700, Galen Shipman wrote:
It looks like we never added similar logic to the Open IB transport.
I'll pass your request on to the developer of our Open IB transport.
Given our timeframe for releasing Open MPI 1.0.2, it's doubtful any
change will make that release.
This was my oversight, I am getting to it now, should have something
in just a bit.
- Galen
I can live with that, certainly. Fortunately, there's a couple months
until I have a real /need/ for this.
--
Hi Troy,
I have added max_btls to the openib component on the trunk, try:
mpirun --mca btl_openib_max_btls 1 ...etc
On Mon, 2006-03-13 at 10:57 -0700, Galen Shipman wrote:
> >> This was my oversight, I am getting to it now, should have something
> >> in just a bit.
> >>
> >> - Galen
> >
> > I can live with that, certainly. Fortunately, there's a couple months
> > until I have a real /need/ for this.
> > --
>
Brian Barrett wrote:
b) whether the code was compiled with mpif77 or mpif90, execution
fails:
valiron@icare ~/BENCHES > mpirun -np 2 all
Signal:11 info.si_errno:0(Error 0) si_code:1(SEGV_MAPERR)
Failing at addr:40
*** End of error message ***
Signal:11 info.si_errno:0(Error 0) si_code:1(SEGV_MAPERR)
I have added max_btls to the openib component on the trunk, try:
mpirun --mca btl_openib_max_btls 1 ...etc
I don't have a dual nic machine handy to test on, if this checks out we
can patch the release branch.
Thanks,
Galen
I'll get to it as soon as I can; but it may be a few days.
--
Troy Te
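(For anyone trying this on their own build: a quick way to confirm the new parameter is actually present is to ask ompi_info. A sketch only; the exact listing format varies between versions:

    ompi_info --param btl openib | grep max_btls

If nothing shows up, the build predates the change.)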
I have successfully built openmpi-1.1a1r9260 (from the subversion trunk)
in 64-bit mode on Solaris Opteron.
This r9260 tarball incorporates the latest patches for Solaris from Brian
Barrett.
In order to accelerate the build I disabled the f90 bindings. My build
script is as follows:
#! /bin/tc
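The script itself is cut off above; purely as a sketch (the compilers, 64-bit flags and install prefix are assumptions, not Pierre's actual settings), a build that skips the f90 bindings might look like:

    # placeholders throughout: adjust compilers, 64-bit flags and prefix
    ./configure --prefix=$HOME/openmpi-1.1 \
                --disable-mpi-f90 \
                CFLAGS="-m64" CXXFLAGS="-m64" FFLAGS="-m64" FCFLAGS="-m64"
    make all install

--disable-mpi-f90 is what avoids the long mpi.f90 compile; the rest is ordinary configure boilerplate.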
On Mar 13, 2006, at 4:36 PM, Pierre Valiron wrote:
I have successfully built openmpi-1.1a1r9260 (from the subversion trunk)
in 64-bit mode on Solaris Opteron.
This r9260 tarball incorporates the latest patches for Solaris from Brian Barrett.
Just a quick note - these changes were recently m