Hi again,
Let me clarify the context of the problem. I'm implementing an MPI piggyback
mechanism that should allow for attaching extra data to any MPI message. The
idea is to wrap MPI communication calls with the PMPI interface (or with
dynamic instrumentation, or similar) and add/receive extra data.
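To make this concrete, here is a minimal sketch of such a wrapper, under the
assumption that the piggyback payload is a single int appended after the user
data; a matching receive wrapper would have to strip it off again, and some
MPI headers declare buf without const, so adjust the prototype to match:

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    static int piggyback_value = 42;            /* hypothetical extra data */

    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        int type_size, rc;
        MPI_Type_size(datatype, &type_size);

        /* Copy the user data and append the piggyback value at the end. */
        size_t payload = (size_t)count * type_size;
        char *tmp = malloc(payload + sizeof(int));
        memcpy(tmp, buf, payload);
        memcpy(tmp + payload, &piggyback_value, sizeof(int));

        /* Forward the enlarged message through the PMPI entry point. */
        rc = PMPI_Send(tmp, (int)(payload + sizeof(int)), MPI_BYTE,
                       dest, tag, comm);
        free(tmp);
        return rc;
    }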
Also see:
http://www.open-mpi.org/faq/?category=tuning#paffinity-defs
http://www.open-mpi.org/faq/?category=tuning#using-paffinity
and
http://www.open-mpi.org/projects/plpa/
On Oct 31, 2007, at 11:55 AM, ky...@neuralbs.com wrote:
It will indeed, but you can have better control over the process
On Oct 31, 2007, at 11:47 AM, Karsten Bolding wrote:
Does OpenMPI detect if processes share memory and hence do not
communicate via sockets?
Yes.
But if you lie to Open MPI and tell it that there are more processors
than there really are, we may not recognize that the machine is
oversubscribed.
For some version of Open MPI (recent versions) you can use the
btl_tcp_disable_family MCA parameter to disable the IPv6 at runtime.
Unfortunately, there is no similar option allowing you to disable IPv6
for the runtime environment.
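For example, on a version that has the parameter, something along these lines
should disable IPv6 for the TCP BTL (the value 6 selects the IPv6 family; the
executable name is a placeholder):

    mpirun --mca btl_tcp_disable_family 6 -np 2 ./my_app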
george.
On Oct 31, 2007, at 6:55 PM, Tim Prins wrote:
The MPI standard defines the lower bound and the upper bound for
similar problems. However, even with all the functions in the MPI
standard we cannot describe all types of data. There is always a
solution, but sometimes one has to ask if the performance gain is
worth the complexity introduced.
Hi Clement,
I seem to recall (though this may have changed) that if a system supports
ipv6, we may open both ipv4 and ipv6 sockets. This can be worked around by
configuring Open MPI with --disable-ipv6
Other than that, I don't know of anything else to do except raise the limit
for the number of
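For reference, a build configured without IPv6 support would look roughly
like this (the install prefix is only an example):

    ./configure --prefix=/opt/openmpi --disable-ipv6
    make all install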
Hi Jon,
Just to make sure, running 'ompi_info' shows that you have the udapl btl
installed?
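For instance, something like the following should show whether the udapl BTL
component was built (the grep pattern is just a convenience):

    ompi_info | grep udapl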
Tim
On Wednesday 31 October 2007 06:11:39 pm Jon Mason wrote:
> I am having a bit of a problem getting udapl to work via mpirun (over
> open-mpi, obviously). I am running a basic pingpong test and I get
I am having a bit of a problem getting udapl to work via mpirun (over
open-mpi, obviously). I am running a basic pingpong test and I get the
following error.
# mpirun --n 2 --host vic12-10g,vic20-10g -mca btl udapl,self
/usr/mpi/gcc/open*/tests/IMB*/IMB-MPI1 pingpong
I'm not sure if you understood my question. The case is not trivial at all,
or I am missing something important.
Try to design this derived datatype and you will understand my point.
Thanks anyway.
On 10/31/07, Amit Kumar Saha wrote:
>
>
>
> On 10/31/07, Oleg Morajko wrote:
> >
> > Hello,
> >
> > I h
It will indeed, but you can have better control over the processor
assignment by using processor affinity (and also get better performance), as
seen here:
http://www.nic.uoregon.edu/tau-wiki/Guide:Opteron_NUMA_Analysis
http://www-128.ibm.com/developerworks/linux/library/l-affinity.html
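With Open MPI 1.2, a simple way to get that binding is the
mpi_paffinity_alone MCA parameter, e.g. (process count and executable are
placeholders):

    mpirun --mca mpi_paffinity_alone 1 -np 8 ./my_app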
Eric
> I think if
On Wed, Oct 31, 2007 at 11:13:46 -0700, Jeff Squyres wrote:
> On Oct 31, 2007, at 10:45 AM, Karsten Bolding wrote:
>
> > In a different thread I read about a performance penalty in OpenMPI if
> > more than one MPI-process is running on one processor/core - is that
> > correct? I mean having max-sl
I think if you boot the MPI on the host machine and then run your
program with 8 threads (mpirun -np 8), the operating
system will automatically distribute them to the cores.
Jeff Pummill wrote:
I am doing some testing on a variety of 8-core nodes in which I just
want to execute a couple of e
On Oct 31, 2007, at 10:45 AM, Karsten Bolding wrote:
In a different thread I read about a performance penalty in OpenMPI if
more than one MPI-process is running on one processor/core - is that
correct? I mean having max-slots>4 on a quad-core machine.
Open MPI polls for message passing progress
On 10/31/07, Oleg Morajko wrote:
>
> Hello,
>
> I have the following problem. There are two arrays somewhere in the
> program:
>
> double weights [MAX_SIZE];
> ...
> int values [MAX_SIZE];
> ...
>
> I need to be able to send a single pair { weights [i], values [i] } with a
> single MPI_Send
On Wed, Oct 31, 2007 at 09:27:48 -0700, Jeff Squyres wrote:
> I think you should use the MPI_PROC_NULL constant itself, not a hard-
> coded value of -1.
the value -1 was in the neighbor specification file.
>
> Specifically: the value of MPI_PROC_NULL is not set in the MPI
> standard -- so imp
Hello,
I have the following problem. There are two arrays somewhere in the program:
double weights [MAX_SIZE];
...
int values [MAX_SIZE];
...
I need to be able to send a single pair { weights [i], values [i] } with a
single MPI_Send call, or receive it directly into both arrays at a given
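One way to do this, sketched below, is a derived datatype built from the
absolute addresses of the two elements and sent from MPI_BOTTOM. Note that
the type has to be rebuilt for every index i, which is part of the complexity
being discussed; the helper name send_pair is made up:

    #include <mpi.h>

    /* Send the pair { weights[i], values[i] } as one message. */
    void send_pair(double *weights, int *values, int i,
                   int dest, int tag, MPI_Comm comm)
    {
        MPI_Datatype pair_type;
        int          blocklens[2] = { 1, 1 };
        MPI_Aint     displs[2];
        MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_INT };

        /* Absolute addresses of the two elements that form the pair. */
        MPI_Get_address(&weights[i], &displs[0]);
        MPI_Get_address(&values[i],  &displs[1]);

        MPI_Type_create_struct(2, blocklens, displs, types, &pair_type);
        MPI_Type_commit(&pair_type);

        /* With absolute displacements the buffer argument is MPI_BOTTOM. */
        MPI_Send(MPI_BOTTOM, 1, pair_type, dest, tag, comm);

        MPI_Type_free(&pair_type);
    }

The receive side would build the same type over its own arrays and post a
matching MPI_Recv from MPI_BOTTOM.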
I think you should use the MPI_PROC_NULL constant itself, not a hard-
coded value of -1.
Specifically: the value of MPI_PROC_NULL is not set in the MPI
standard -- so implementations are free to choose whatever value they
want. In Open MPI, MPI_PROC_NULL is -2. So using -1 is an illegal value.
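In code this just means translating the file's -1 marker before it reaches
MPI; a small sketch (the surrounding buffers and the file-reading part are
assumed):

    #include <mpi.h>

    /* Translate the neighbor file's "-1 = no neighbor" marker into the
     * implementation-defined MPI_PROC_NULL constant. */
    static int to_rank(int file_value)
    {
        return (file_value == -1) ? MPI_PROC_NULL : file_value;
    }

    /* Exchange with a possibly absent neighbor: operations addressed to
     * MPI_PROC_NULL complete immediately as no-ops. */
    void exchange(double *sendbuf, double *recvbuf, int n, int file_value)
    {
        int neighbor = to_rank(file_value);
        MPI_Sendrecv(sendbuf, n, MPI_DOUBLE, neighbor, 0,
                     recvbuf, n, MPI_DOUBLE, neighbor, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }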
Hello
I've just introduced the possibility to use OpenMPI instead of MPICH in
an ocean model. The code is quite well tested and has been run in
various parallel setups by various groups.
I've compiled the program using mpif90 (instead of ifort). When I run I
get the error - shown at the end of t
Hi Jeff,
Sorry I did not see your post. Attached to this email are the outputs
requested by the help page. It is a compressed tar file containing the
output of ./configure and the output of "make all". Please let me know if
more information is needed.
Thank you for your help,
Jorge
On Tue,
I would try attaching to the processes to see where things are
getting stuck.
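For example (the PID is a placeholder for whatever ps reports for the stuck
rank):

    gdb -p 12345
    (gdb) thread apply all bt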
On Oct 31, 2007, at 5:51 AM, Murat Knecht wrote:
Jeff Squyres schrieb:
On Oct 31, 2007, at 1:18 AM, Murat Knecht wrote:
Yes I am, (master and child 1 running on the same machine). But
knowing the oversubscribing
Jeff Squyres schrieb:
> On Oct 31, 2007, at 1:18 AM, Murat Knecht wrote:
>
>
>> Yes I am, (master and child 1 running on the same machine).
>> But knowing the oversubscribing issue, I am using
>> mpi_yield_when_idle which should fix precisely this problem, right?
>>
>
> It won't *fix* t
THREAD_MULTIPLE support does not work in the 1.2 series. Try turning
it off.
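As an illustration, requesting a lower thread level looks roughly like this,
assuming the application can funnel its MPI calls through one thread:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for FUNNELED instead of MULTIPLE: only the main thread will
         * make MPI calls, which the 1.2 series handles reliably. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        /* ... application code ... */

        MPI_Finalize();
        return 0;
    }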
On Oct 30, 2007, at 12:17 AM, Neeraj Chourasia wrote:
Hi folks,
I have been seeing some nasty behaviour in MPI_Send/Recv
with large datasets (8 MB), when used with OpenMP and Open MPI
together with IB Int
On Oct 31, 2007, at 1:18 AM, Murat Knecht wrote:
Yes I am, (master and child 1 running on the same machine).
But knowing the oversubscribing issue, I am using
mpi_yield_when_idle which should fix precisely this problem, right?
It won't *fix* the problem -- you're still oversubscribing the node.
Sorry if this has already been discussed; I am new
to this list.
I came across the ETH BTL from
http://archiv.tu-chemnitz.de/pub/2006/0111/data/hoefler-CSR-06-06.pdf
and was wondering whether this protocol is
available / integrated into OpenMPI.
Kind regards,
Mattijs
--
Mattijs Janssens
Yes I am, (master and child 1 running on the same machine).
But knowing the oversubscribing issue, I am using mpi_yield_when_idle
which should fix precisely this problem, right?
Or is the option ignored when initially there is no second process? I
did give both machines multiple slots, so OpenMPI
"
Are you perchance oversubscribing your nodes?
Open MPI does not currently handle the case well where you initially
undersubscribe your nodes but then, due to spawning, oversubscribe
them. In this case, OMPI will be aggressively polling in all
processes, not realizing that the node is now oversubscribed.
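If you know a node will end up oversubscribed after spawning, one workaround
is to force degraded/yield mode up front, e.g. (process count and executable
are placeholders):

    mpirun --mca mpi_yield_when_idle 1 -np 2 ./my_app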
On Oct 30, 2007, at 9:42 AM, Jorge Parra wrote:
Thank you for your reply. Linux does not freeze. The one that freezes is
OpenMPI. Sorry for my inaccurate choice of words that led to confusion.
Therefore dmesg does not show anything abnormal (I attached to this email
a full dmesg log, captured