Daryl,
I've added support for this in bproc, committed to the trunk.
Tim
Jeff Squyres wrote:
No, you are not doing anything wrong.
Currently, this is not handled. I think I documented this in the
README file, but I can add a message to the orterun --help output, or
just remove it fo
Daryl,
Try setting:
-mca btl_base_include self,mvapi
To specify that only loopback (self) and mvapi BTLs should be used.
Can you forward me the config.log from your build?
Thanks,
Tim
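A full invocation using the BTL selection above might look like the following; the host file, process count, and program name are illustrative, not from the thread (the command is only displayed here, since an MPI installation is not assumed):

```shell
# Hypothetical mpirun line restricting Open MPI to the loopback (self)
# and mvapi BTLs, as suggested above.
echo 'mpirun -mca btl_base_include self,mvapi -np 2 -hostfile ./hosts ./mpi_hello'
```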
Daryl W. Grunau wrote:
Hi, I've got a dual-homed IB + GigE connected cluster for which I've bu
Hello Chris,
Please give the next release candidate a try. There was an issue
w/ the GM port that was likely causing this.
Thanks,
Tim
Parrott, Chris wrote:
Greetings,
I have been testing OpenMPI 1.0rc3 on a rack of 8 2-processor (single
core) Opteron systems connected via both Gigabit
ne:
-mca pml teg
I'm interested in seeing if there is any performance difference.
Thanks,
Tim
the IP address exported by the peer is not reachable.
You can use the tcp btl parameters:
-mca btl_tcp_if_include eth0,eth1
or
-mca btl_tcp_if_exclude eth1
To specify the set of interfaces to use/not use.
Tim
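The two interface-selection styles described above would be used like this; the parameters are spelled btl_tcp_if_include / btl_tcp_if_exclude in Open MPI's documentation, and the interface names and program here are illustrative (commands displayed only, not executed):

```shell
# Hypothetical: either name the interfaces the TCP BTL may use, or name
# the ones it must avoid.
echo 'mpirun -mca btl_tcp_if_include eth0 -np 2 ./mpi_hello'
echo 'mpirun -mca btl_tcp_if_exclude eth1 -np 2 ./mpi_hello'
```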
/btl_tcp_if_exclude.
Regards,
Tim
256
Also, can you forward me a copy of the test code or a reference to it?
Thanks,
Tim
Mike,
I believe this was probably corrected today and should be in the
next release candidate.
Thanks,
Tim
Mike Houston wrote:
Whoops, spoke too soon. The performance quoted was not actually going
between nodes. Actually using the network with the pinned option gives:
[0,1,0
Mike,
Let me confirm this was the issue and look at the TCP problem as well.
Will let you know.
Thanks,
Tim
Mike Houston wrote:
What's the ETA, or should I try grabbing from cvs?
-Mike
Tim S. Woodall wrote:
Mike,
I believe this was probably corrected today and should be in the
next re
Mike,
I believe this issue has been corrected on the trunk, and should
be in the next release candidate, probably by the end of the week.
Thanks,
Tim
Mike Houston wrote:
mpirun -mca btl_mvapi_rd_min 128 -mca btl_mvapi_rd_max 256 -np 2
-hostfile /u/mhouston/mpihosts mpi_bandwidth 21 131072
might try doing a bpsh ldd orted
and check that the libraries resolve, and/or rebuild with the indicated
configure option.
Regards,
Tim
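The check suggested above can be demonstrated locally: ldd flags any shared library that fails to resolve. On a bproc cluster the same idea would be run through bpsh against orted on a compute node (the node number and install path are site-specific):

```shell
# ldd reports "not found" for each unresolved shared library; on a bproc
# cluster this would be e.g. 'bpsh <node> ldd orted'. Demonstrated here
# on /bin/sh so the command actually runs anywhere.
ldd /bin/sh | grep 'not found' || echo 'all libraries resolved'
```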
John Ouellette wrote:
Hi,
I'm having problems with getting code (specifically ASC FLASH) to run on our
bproc-based cluster using Open-MPI.
Our clust
John,
Any progress on this?
John Ouellette wrote:
Hi Tim,
Hmm, nope. I recompiled OpenMPI to produce the static libs, and even
recompiled my app statically, and received the same error messages.
If orted isn't starting on the compute nodes, is there any way I can debug
this to fin
ke the limits were propagated to the
back end nodes.
Tim, this should fix your problem as well?
On Thu, 2005-12-01 at 17:26 -0800, Todd Wilde wrote:
> How about this one:
>
> For Redhat AS4.0 and Fedora Core 3 or a newer kernel, edit the
> file /etc/security/limits.conf and add the f
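The quoted advice is cut off at the limits.conf edit. Entries of the following shape are commonly used to raise the locked-memory limit for InfiniBand; the exact lines and values here are illustrative, not recovered from the original message (8388608 KB is an 8 GB cap, in line with the finite-limit advice elsewhere in this thread):

```
* soft memlock 8388608
* hard memlock 8388608
```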
d under that license).
If someone is available for off-line discussion (to minimize unnecessary
traffic to the list), I'd be more than willing to summarize the
conversation and contribute it to the online documentation.
Thank you,
tim
--
"Nuclear power is a hell of a way to bo
Hello Emanuel,
You might want to try an actual hard limit, say 8GB, rather than
unlimited. I've run into issues w/ unlimited in the past.
Thanks,
Tim
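Assuming the limit in question is the locked-memory (memlock) limit, it can be inspected and capped from the shell; the 8 GB value follows the advice above and is only illustrative:

```shell
# Report the current locked-memory limit (in KB on most shells).
ulimit -l
# Illustrative finite hard cap of 8 GB, per the advice above
# (requires privilege, so shown as a comment only):
# ulimit -H -l 8388608
```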
Emanuel Ziegler wrote:
Hi!
After solving my last problem with the help of this list (thanks
again :) I encountered another problem rega
s, if they are available
on the backend nodes. So, it's more convenient if they are linked statically,
but not a requirement.
Tim
Ralph
David Gunter wrote:
Unfortunately static-only will create binaries that will overwhelm
our machines. This is not a realistic option.
-david
On Apr 11
Error 1
>
Do you have the Intel compilervars.[c]sh sourced (and associated library
files visible) on each node where you expect to install?
--
Tim Prince
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
You might try inserting parentheses so as to specify your preferred order of
evaluation. If using ifort, you would need -assume protect_parens.
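An invocation of this kind might look as follows; the flag is spelled protect_parens in Intel's documentation, and the source file name is hypothetical (the compiler is not assumed to be installed, so the command is only displayed):

```shell
# Illustrative ifort line: protect_parens keeps the optimizer from
# reassociating parenthesized floating-point expressions.
echo 'ifort -O2 -assume protect_parens solver.f90 -o solver'
```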
Original Message
From: Oscar Mojica
Sent: Mon, 16 Jan 2017 08:28:05 -0500
scheduling (although Linux is more capable than
Windows). I agree that explicit use of taskset under MPI should have been
superseded by the options implemented by several MPI implementations, including Open MPI.
--
Tim Prince
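The contrast described above can be sketched as command lines; cluster tools are not assumed to be available here, so the commands are only displayed, and the program name is hypothetical:

```shell
# Manual per-core pinning (the approach the thread calls superseded)
# versus Open MPI's built-in process binding.
echo 'taskset -c 0 ./solver'
echo 'mpirun --bind-to core --report-bindings -np 4 ./solver'
```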