On Mon, 22 Oct 2007, Jeff Squyres wrote:
> On Oct 22, 2007, at 6:44 PM, Lourival Mendes wrote:
>
> > Hi everybody, I'm interested in using MPI in the Pascal
> > environment. I tried the MPICH2 list but had no success. On the Free
> > Pascal Compiler list, Daniël invited me to subscribe to this list
Hello Bill,
I have recently set up a small AMD Opteron cluster using Torque/Maui and Open MPI, and, not being very experienced, I did not find it too complicated to do; it works fine. I do not know Slurm, so I cannot make any comparison, but I just wanted to add that getting started with Torque/
Michael wrote:
The primary difference seems to be that you have all communication
going over a single interface.
Yes. It's clearly stated in the Open MPI FAQ that such a configuration is
not supported:
These rules do /not/ cover the following cases:
* Running an MPI job that spans a bun
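As an aside on steering Open MPI's TCP traffic onto a particular interface: the btl_tcp_if_include MCA parameter restricts the TCP BTL to the named interfaces. A minimal sketch, where the interface name eth0 and the binary ./my_app are placeholder values:
  mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 4 ./my_app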
Hi,
Testing a distributed system locally, I couldn't help but notice that a
blocking MPI_Recv causes 100% CPU load. I deactivated (at both compile-
and run-time) the shared-memory BTL, and specified "tcp,self" to
be used. Still, one core stays busy. Even on a distributed system I intend to
perform w
You should look at these two FAQ entries:
http://www.open-mpi.org/faq/?category=running#oversubscribing
http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded
To get what you want, you need to force Open MPI to yield the processor rather
than aggressively poll for a message.
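For illustration, a minimal sketch of what those FAQ entries describe, i.e. forcing degraded (yielding) mode via the mpi_yield_when_idle MCA parameter; the binary name ./my_app is a placeholder:
  mpirun --mca mpi_yield_when_idle 1 -np 2 ./my_app
Note that this only makes the progress loop yield the CPU between polls; as the rest of this thread explains, the receive still spins rather than truly blocking.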
Hi,
Thanks for answering. Unfortunately, I did try that, too. The point is
that I don't understand the resource consumption. Even if the processor
is yielded, it is still busy-waiting, wasting system resources which
could otherwise be used for actual work. Isn't there some way to
activate an inter
Currently there is no workaround for this issue. We consider(ed) that
when you run an MPI job the cluster is in dedicated mode, so 100%
CPU consumption is acceptable. However, as we discussed at our last
meeting, there are other reasons to be able to yield the CPU until a
message arrives. T
On Monday 22 October 2007, Don Kerr wrote:
> Couple of things.
> With Linux I believe you need the interface instance in the 7th field of
> the /etc/dat.conf file.
> example:
>
> InfiniHost0 u1.1 nonthreadsafe default /usr/lib64/libdapl.so ri.1.1 " " " "
> should be
> InfiniHost0 u1.1 nonthread
Troy Telford wrote:
On Monday 22 October 2007, Don Kerr wrote:
Couple of things.
With Linux I believe you need the interface instance in the 7th field of
the /etc/dat.conf file.
example:
InfiniHost0 u1.1 nonthreadsafe default /usr/lib64/libdapl.so ri.1.1 " " " "
should be
InfiniHost0 u1.1 n
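As an illustration only, since the corrected line above is cut off: in uDAPL's /etc/dat.conf the 7th field is the first quoted string, which names the interface instance. A hypothetical corrected line, assuming the interface is ib0, might look like:
  InfiniHost0 u1.1 nonthreadsafe default /usr/lib64/libdapl.so ri.1.1 "ib0 0" " "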
Good to know that I'm not simply failing to find the solution; there just
isn't one.
The system is actually dedicated to the job. But the process may, while
working, receive a signal that alters the ongoing job, for example
a terminate signal or more data to be taken into consideration. That's
why I
I have a question about using Open MPI.
I want to tie N processes to one core and M processes to another core. I
want to know whether Open MPI is capable of doing that.
Thanks,
Siamak
I do this using the hostfile. There might be a more sophisticated way too.
Siamak Riahi wrote:
I have a question about using Open MPI.
I want to tie N processes to one core and M processes to another core. I
want to know whether Open MPI is capable of doing that.
Tha
Thanks for the answer. I have been using LAM/MPI. I'm using FDS (Fire
Dynamics Simulator), and in my model I have 7 threads that I want to tie to
two cores, which means I don't want the 7 threads to use all four cores on
the cluster that we have. Have you done something similar to this?
Thanks,
S
Hello all!
After some background research, I am soon going to start working on
"Parallel Genetic Algorithms". When I reach the point of practical
implementation, I am going to use Open MPI for the purpose.
Has anyone here worked on similar things? It would be nice if you could
share some views/co
Hi George,
Thanks for your response.
I found a bug in my MTL code that had propagated up to the PML, which was
causing that error.
Sajjad Tabib
Date: Wed, 17 Oct 2007 12:24:53 -0400
From: George Bosilca
Subject: Re: [OMPI users] Open MPI can't open
I assumed you meant processes on a host, but I noticed that you
wrote "cores". I'm not sure what the answer is if you really meant cores.
I run different numbers of processes per "node" using mpirun
--hostfile hosts,
where hosts file contains:
host0 slots=5
host1 slots=5
host2 slots=5
host
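For completeness, a minimal sketch of the invocation described above, assuming the file shown is saved as hosts and the application binary is called my_app (a placeholder name); the process count is just an example:
  mpirun --hostfile hosts -np 10 ./my_app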
On 24 October 2007 at 01:01, Amit Kumar Saha wrote:
| Hello all!
|
| After some background research, I am soon going to start working on
| "Parallel Genetic Algorithms". When I reach the point of practical
| implementation, I am going to use Open MPI for the purpose.
|
| Has anyone here worked o