Hi,
OK, that makes it clear.
Thank you for the fast response.
Regards,
Markus
On 07.11.2012 13:49, Iliev, Hristo wrote:
Hello, Markus,
The openib BTL component is not thread-safe. It disables itself when
the thread support level is MPI_THREAD_MULTIPLE. See this rant from
one of my colleagues
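(For context, the thread support level Hristo refers to is what the application requests and receives through MPI_Init_thread. A minimal sketch, not tied to any particular Open MPI version, of requesting MPI_THREAD_MULTIPLE and checking what the library actually provides:)

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = MPI_THREAD_SINGLE;

    // Request full multithreading; the library reports the level it can actually support.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        // Per the note above, when MPI_THREAD_MULTIPLE is in effect the
        // (thread-unsafe) openib BTL disables itself.
        std::printf("requested MPI_THREAD_MULTIPLE, provided level = %d\n", provided);
    }

    MPI_Finalize();
    return 0;
}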
On Nov 7, 2012, at 7:21 PM, Jens Glaser wrote:
> With the help of MVAPICH2 developer S. Potluri the problem was isolated and
> fixed.
Sorry about not replying; we're all (literally) very swamped trying to prepare
for the Supercomputing trade show/conference next week. I know I'm way
behind
Not sure. I will look into this. And thank you for the feedback Jens!
Rolf
>-----Original Message-----
>From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
>On Behalf Of Jeff Squyres
>Sent: Thursday, November 08, 2012 8:49 AM
>To: Open MPI Users
>Subject: Re: [OMPI users] mpi_l
On Nov 8, 2012, at 8:51 AM, Rolf vandeVaart wrote:
> Not sure. I will look into this. And thank you for the feedback Jens!
FWIW, I +1 Jens' request. MPI implementations are able to handle network
registration mechanisms via standard memory hooks (their hooks are actually
pretty terrible, but
Another good reason for the ummunotify kernel module
(http://lwn.net/Articles/345013/)
Pavel (Pasha) Shamis
---
Computer Science Research Group
Computer Science and Math Division
Oak Ridge National Laboratory
On Nov 8, 2012, at 9:08 AM, Jeff Squyres wrote:
On Nov 8, 2012, at 8:51 AM, Rolf vandeVaart
I have an int that I intend to broadcast from the root rank (rank == FIELD, with FIELD = 0).
int winner;
if (rank == FIELD) {
    winner = something;
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Bcast(&winner, 1, MPI_INT, FIELD, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);
if (rank != FIELD) {
    cout << rank << " informed that winner is " << winner << endl;
}
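For reference, a complete, minimal version of this pattern might look like the sketch below (FIELD and the value 42 are placeholder assumptions; note that the barriers are not needed for correctness, because MPI_Bcast itself delivers the root's value into every rank's buffer before it returns):

#include <mpi.h>
#include <iostream>

static const int FIELD = 0;  // root rank, as in the question above

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int winner = -1;
    if (rank == FIELD) {
        winner = 42;  // stand-in for whatever the root actually computes
    }

    // All ranks call MPI_Bcast; afterwards 'winner' holds the root's value everywhere.
    MPI_Bcast(&winner, 1, MPI_INT, FIELD, MPI_COMM_WORLD);

    if (rank != FIELD) {
        std::cout << rank << " informed that winner is " << winner << std::endl;
    }

    MPI_Finalize();
    return 0;
}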
Note that the saga of trying to push ummunotify upstream to Linux ended up with
Linus essentially saying "fix your own network stack; don't put this in the
main kernel."
He was right back then. With a 2nd "customer" for this kind of thing (CUDA),
that equation might be changing, but I'll lea
My understanding of the upstreaming failure was more like:
* Linus was going to be OK with it
* Some perf (or trace?) guys came late and said "oh your code should be
integrated into our more general stuff" but they didn't do it, and
basically vetoed anything that didn't do what they said
Brice
On 08/11
Yes it is a Westmere system.
  Socket L#0 (P#0 CPUModel="Intel(R) Xeon(R) CPU E7- 8870 @ 2.40GHz" CPUType=x86_64)
    L3Cache L#0 (size=30720KB linesize=64 ways=24)
      L2Cache L#0 (size=256KB linesize=64 ways=8)
        L1dCache L#0 (size=32KB linesize=64 ways=8)
        L1iCache L#0
Nope, that wasn't it.
...oh, I see, Linus' reply didn't go to LKML; it just went to a bunch of
individuals. Here's part of his reply:
The interface claims to be generic, but is really just a hack for a single
use case that very few people care about. I find the design depressingly
stupid,
On Fri, Nov 02, 2012 at 05:08:56PM -0400, Jeff Squyres wrote:
> FWIW, we have seen bugs in the intel compiler suite before. We usually
> advise people to get the latest version of the particular intel compiler
> suite version that they have a license to obtain.
>
...
The latest Composer XE version
> Note that the saga of trying to push ummunotify upstream to Linux ended up with
> Linus essentially saying "fix your own network stack; don't put this in the
> main kernel."
I haven't seen this one. All I found is this thread
http://thread.gmane.org/gmane.linux.drivers.openib/65188
On Nov 8
> * Some perf (or trace?) guys came late and said "oh your code should be
I think eventually there was a consensus that perf/trace doesn't fit well ...
(or it requires substantial changes)
>
> On 08/11/2012 15:43, Jeff Squyres wrote:
>> Note that the saga of trying to push ummunotify
Thanks, I definitely appreciate the new hotness of hwloc. I just couldn't
tell from the documentation or the web page how or if it was being used by
OpenMPI.
I still work with OpenMPI 1.4.x and now that I've looked into the builds, I
think I understand that PLPA is used in 1.4 and hwloc is br
On Nov 8, 2012, at 10:17 AM, Blosch, Edwin L wrote:
> Thanks, I definitely appreciate the new hotness of hwloc. I just couldn't
> tell from the documentation or the web page how or if it was being used by
> OpenMPI.
>
> I still work with OpenMPI 1.4.x and now that I've looked into the builds,
I gather from your other emails you are using 1.4.3, yes? I believe that
has npersocket as an option. If so, you could do:
mpirun -npersocket 2 -bind-to-socket ...
That would put two processes in each socket, bind them to that socket, and
rank them in series. So ranks 0-1 would be bound to the first socket.
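(As a quick sanity check of where ranks actually land, each one can print its host name; a small sketch, independent of the options above. Socket-level binding itself is easier to confirm with a tool such as hwloc or, where available, mpirun's --report-bindings option.)

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);

    // Shows which node each rank landed on; with "-npersocket 2 -bind-to-socket"
    // consecutive pairs of ranks should also share a socket on that node.
    std::cout << "rank " << rank << " is running on " << host << std::endl;

    MPI_Finalize();
    return 0;
}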
Thanks, that's what I'm looking for.
My first look for documentation is always the FAQ, not the man pages. I found
no mention of -npersocket in the FAQ, but there it is, explained very clearly
in the man page. Boy, do I feel dumb.
Anyway, thanks a lot.
From: users-boun...@open-mpi.org [mailto:users-boun...
I'm way behind on updating the FAQs - my apologies :-(
Sent from my iPhone
On Nov 8, 2012, at 9:31 AM, "Blosch, Edwin L" wrote:
> Thanks, that’s what I’m looking for.
>
> My first look for documentation is always the FAQ, not the man pages. I
> found no mention of -npersocket in the FAQ but
Hi
I've discovered MPI recently, and I would like to start writing some
applications to use its potential.
Now the problem is that I use a Mac, and I see no tutorials or books that
target OS X, so I was wondering if you could give me some pointers about
where to find info.
I use Xcode
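(For a first experiment on OS X, a common route is to install Open MPI, for example from source or a package manager, and build against it with the mpic++ compiler wrapper; a minimal hello-world sketch:)

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::cout << "Hello from rank " << rank << " of " << size << std::endl;

    MPI_Finalize();
    return 0;
}

Built and run with something like: mpic++ hello.cpp -o hello && mpirun -np 2 ./hello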
On 08.11.2012 at 23:25, shiny knight wrote:
> I've discovered MPI recently, and I would like to start writing some
> applications to use its potential.
>
> Now the problem is that I use a Mac, and I see no tutorials or books that
> target OS X, so I was wondering if you could give me s
Greetings ladies and gentlemen,
I believe that the last version of OS X to ship OpenMPI with the dev
tools or the OS was Snow Leopard. That version, I believe, was OMPI
1.2.9.
I am starting to get back to working with MPI myself. The last time I worked
with MPI, I basically wrap