Hello guys, I had Open MPI v1.2 installed on my cluster.
A couple of days back, I thought to upgrade it to v1.2.4 (the latest release, I
suppose). Since I didn't want to take any risk, I first installed it in a
temporary location and ran the bandwidth and bidirectional bandwidth tests
provided by the OSU guys, a
Hi!
In common_mx.c the following looks wrong.
ompi_common_mx_finalize(void)
{
    mx_return_t mx_return;
    ompi_common_mx_initialize_ref_cnt--;
    if(ompi_common_mx_initialize == 0) {
That should be
if(ompi_common_mx_initialize_ref_cnt == 0)
right?
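For reference, a minimal sketch of how the corrected finalize could read. Only
the fixed comparison comes from the report above; the return type, the
mx_finalize() cleanup call, and the return values are assumptions about the
surrounding code in common_mx.c (which already has the needed headers), not the
actual Open MPI source.

/* Sketch only: fragment of common_mx.c with the ref-count check corrected. */
int
ompi_common_mx_finalize(void)
{
    mx_return_t mx_return;

    ompi_common_mx_initialize_ref_cnt--;
    if (ompi_common_mx_initialize_ref_cnt == 0) {  /* was: ompi_common_mx_initialize */
        /* assumed cleanup path; the real code may handle errors differently */
        mx_return = mx_finalize();
        if (mx_return != MX_SUCCESS)
            return OMPI_ERROR;
    }
    return OMPI_SUCCESS;
}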
--
Ake Sandgren, HPC2N, Umea University, S-
Hi Dirk,
On 10/24/07, Dirk Eddelbuettel wrote:
>
>
> On 24 October 2007 at 01:01, Amit Kumar Saha wrote:
> | Hello all!
> |
> | After some background research, I am soon going to start working on
> | "Parallel Genetic Algorithms". When I reach the point of practical
> | implementation, I am going
On Wed, 2007-10-24 at 09:00 +0200, Åke Sandgren wrote:
> Hi!
>
> In common_mx.c the following looks wrong.
> ompi_common_mx_finalize(void)
> {
> mx_return_t mx_return;
> ompi_common_mx_initialize_ref_cnt--;
> if(ompi_common_mx_initialize == 0) {
>
> That should be
> if(ompi_co
You're absolutely right. Thanks for the patch; I applied it to the
trunk (revision 16560).
Thanks,
george.
On Oct 24, 2007, at 8:17 AM, Åke Sandgren wrote:
On Wed, 2007-10-24 at 09:00 +0200, Åke Sandgren wrote:
Hi!
In common_mx.c the following looks wrong.
ompi_common_mx_finalize(voi
I've been scratching my head over this:
lnx01:/usr/lib> orterun -n 2 --mca btl ^openib ~/c++/tests/mpitest
[lnx01:14417] mca: base: component_find: unable to open btl openib: file not
found (ignored)
[lnx01:14418] mca: base: component_find: unable to open btl openib: file not
found (ignored)
Hello,
I'd like to run Open MPI "by hand". I have a few ordinary
workstations I'd like to run a code using Open MPI on. They're in
the same LAN, have unique IP addresses and hostnames, and I've
installed the default Open MPI package, and I've compiled an MPI app
against the Open MPI lib
On 10/24/07, Dean Dauger, Ph. D. wrote:
> Hello,
>
> I'd like to run Open MPI "by hand". I have a few ordinary
> workstations I'd like to run a code using Open MPI on. They're in
> the same LAN, have unique IP addresses and hostnames, and I've
> installed the default Open MPI package, and I've c
Dean,
There is no way to run Open MPI by hand, or at least no simple way.
How about Xgrid on your OS X cluster? Anyway, without a way to start
processes remotely it is really difficult to start up any kind of
parallel job.
george.
On Oct 24, 2007, at 12:06 PM, Dean Dauger, Ph. D. wro
Hi,
Am 24.10.2007 um 19:21 schrieb George Bosilca:
There is no way to run Open MPI by hand, or at least no simple
way. How about Xgrid on your OS X cluster? Anyway, without a way
to start processes remotely it is really difficult to start up any
kind of parallel job.
just to note: with
If they are OS X machines, adding password-less ssh is easy.
Then you can make a nodefile with all the unique IPs; if you can
do that, you can avoid putting a full resource manager on them.
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Oct 24, 2007, at 1:39 PM,
The changes in the 1.2 series are listed here:
http://svn.open-mpi.org/svn/ompi/branches/v1.2/NEWS
I'm surprised that your performance went down from v1.2 to v1.2.4.
What networks were you testing, and how exactly did you test?
On Oct 24, 2007, at 12:14 AM, Neeraj Chourasia wrote:
He
This is quite likely because of a "feature" in how the OMPI v1.2
series handles its plugins. In OMPI <=v1.2.x, Open MPI opens all
plugins that it can find and *then* applies the filter that you
provide (e.g., via the "btl" MCA param) to close / ignore certain
plugins.
In OMPI >=v1.3, we
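To make that ordering concrete, here is a tiny illustrative program. It is not
the real Open MPI source and the component names are just placeholders; it only
shows why opening every plugin before applying the "btl" filter still produces
the "unable to open" warning for a missing openib plugin, while filtering first
never touches it.

#include <stdio.h>
#include <string.h>

/* Pretend filter derived from "btl = ^openib". */
static int excluded(const char *name)
{
    return strcmp(name, "openib") == 0;
}

/* Pretend dlopen() of a component; the openib plugin file is missing. */
static int open_component(const char *name)
{
    if (strcmp(name, "openib") == 0) {
        printf("unable to open btl %s: file not found (ignored)\n", name);
        return -1;
    }
    return 0;
}

int main(void)
{
    const char *found[] = { "tcp", "sm", "self", "openib" };
    int i;

    /* v1.2.x style: open everything first, filter afterwards -> warning. */
    for (i = 0; i < 4; i++)
        open_component(found[i]);

    /* v1.3 style: filter first, so the missing plugin is never touched. */
    for (i = 0; i < 4; i++)
        if (!excluded(found[i]))
            open_component(found[i]);

    return 0;
}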
Hi Jeff,
On 24 October 2007 at 15:43, Jeff Squyres wrote:
| This is quite likely because of a "feature" in how the OMPI v1.2
| series handles its plugins. In OMPI <=v1.2.x, Open MPI opens all
| plugins that it can find and *then* applies the filter that you
| provide (e.g., via the "btl" M
On Oct 24, 2007, at 4:16 PM, Dirk Eddelbuettel wrote:
I buy that explanation any day, but what is funny is that the
btl = ^openib
does suppress the warning on some of my systems (all running 1.2.4)
but not
others (also running 1.2.4).
If I had to guess, the systems where you don't s
On Oct 24, 2007, at 1:21 PM, George Bosilca wrote:
There is no way to run Open MPI by hand, or at least no simple
way. How about Xgrid on your OS X cluster? Anyway, without a way
to start processes remotely it is really difficult to start up any
kind of parallel job.
More specifically,
Hi Tim,
Thank you for your reply.
You are right, my Open MPI version is rather old. However, I am stuck with
it: while I can compile v1.2.4, I have had some problems with it (I already
opened a case on Oct 15th).
You were also right about my hostname. uname -n reports (none) and the
"hostname"
Wow -- that has survived since LAM/MPI -- you're the first person to
have ever noticed it. :-)
I *think* it's just a wrong type, but I'd prefer to file a ticket so
that someone gives it a bit more than a cursory examination before
making the change.
Thanks for pointing it out!
On Oct 1
Sorry I missed this message before... it got lost in the deluge that
is my inbox.
Are you using the mpi_leave_pinned MCA parameter? That will make a
big difference on the typical ping-pong benchmarks:
mpirun --mca mpi_leave_pinned 1
On Oct 11, 2007, at 11:44 AM, Matteo Cicuttin
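For context, the kind of benchmark loop that benefits from mpi_leave_pinned is
one that sends the same buffer over and over, so caching the memory
registration avoids re-pinning it on every iteration. A minimal sketch follows
(the message size, iteration count, and program name are just examples); it
could be run as, e.g., mpirun --mca mpi_leave_pinned 1 -np 2 ./pingpong.

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const int len = 1 << 20;                 /* 1 MiB message               */
    char *buf = calloc(len, 1);              /* one buffer, reused below    */
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 1000; i++) {             /* same buffer every iteration */
        if (rank == 0) {
            MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    free(buf);
    return 0;
}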
Well that's fun; I'm not sure why that would happen. Can you send
all the information listed here:
http://www.open-mpi.org/community/help/
On Oct 15, 2007, at 5:36 PM, Jorge Parra wrote:
Hi,
I am trying to cross-compile Open MPI 1.2.4 for an embedded system.
The development system i
Glad you found the problem.
Don't worry about the '--num_proc 3'. This does not refer to the number
of application processes, but rather the number of 'daemon' processes
plus 1 for mpirun. However, this is an internal interface which changes
on different versions of Open MPI, so this explanati
I believe that the second scenario that Sriram described is
incorrect: you cannot merge independent intercommunicators into a
single communicator (either intra or inter).
On Oct 18, 2007, at 4:36 PM, Murat Knecht wrote:
Hi,
I have a question regarding merging intracommunicators.
Using MPI_
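For reference, a small sketch of what MPI_Intercomm_merge does do: it takes a
single intercommunicator and joins its two groups into one intracommunicator.
The setup below (splitting MPI_COMM_WORLD by even/odd rank) is only an
illustration and needs at least two processes; there is no corresponding call
that fuses two independent intercommunicators, which is the point above.

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm split, inter, merged;
    int rank, size, color;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {                          /* the example needs two groups */
        MPI_Finalize();
        return 0;
    }

    /* Build two disjoint groups, then connect them with ONE intercomm. */
    color = rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &split);
    MPI_Intercomm_create(split, 0, MPI_COMM_WORLD,
                         color == 0 ? 1 : 0,  /* remote leader's world rank  */
                         0, &inter);

    /* Merge that single intercommunicator into one intracommunicator. */
    MPI_Intercomm_merge(inter, /* high = */ color, &merged);

    MPI_Comm_free(&merged);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&split);
    MPI_Finalize();
    return 0;
}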
By default, I believe that orte assumes that the orted is in the same
location on all nodes. If it's not, you should be able to use the
following:
1. Make a sym link such that /usr/local/bin/orted appears on all of
your nodes. You implied that you tried this, but I find it hard to
belie
We don't really have this kind of fine-grained processor affinity
control in Open MPI yet.
Is there a reason you want to oversubscribe cores this way? Open MPI
assumes that each process should be as aggressive as possible in
terms of performance -- spinning heavily until progress can be ma
Those are the three libraries that are typically required. I don't
know anything about xcode, so I don't know if there's any other
secret sauce that you need to use.
Warner -- can you shed any light here?
To verify your Open MPI installation, you might want to try compiling
a trivial MPI
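For example, a minimal sanity-check program along these lines (just a sketch)
can be compiled with mpicc and launched with mpirun -np 2 ./hello to confirm
the installation and link flags work.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}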
On 24 October 2007 at 16:22, Jeff Squyres wrote:
| On Oct 24, 2007, at 4:16 PM, Dirk Eddelbuettel wrote:
|
| > I buy that explanation any day, but what is funny is that the
| > btl = ^openib
| > does suppress the warning on some of my systems (all running 1.2.4)
| > but not
| > others (also
On Oct 24, 2007, at 9:23 PM, Dirk Eddelbuettel wrote:
| If I had to guess, the systems where you don't see the warning are
| systems that have OFED loaded.
I am pretty sure that none of the systems (at work) have IB
hardware. I am
very sure that my home systems do not, and there the 'btl =
On 24 October 2007 at 21:31, Jeff Squyres wrote:
| On Oct 24, 2007, at 9:23 PM, Dirk Eddelbuettel wrote:
|
| > | If I had to guess, the systems where you don't see the warning are
| > | systems that have OFED loaded.
| >
| > I am pretty sure that none of the systems (at work) have IB
| > hardwa