On 14:24 Mon 12 Dec , Dave Love wrote:
> Andreas Schäfer writes:
>
> >> Yes, as root, and there are N different systems to at least provide
> >> unprivileged read access on HPC systems, but that's a bit different, I
> >> think.
> >
> > LIKWID
LIKWID[1] uses a daemon to provide limited RW access to MSRs for
applications. I wouldn't be surprised if support for this was added to
LIKWID by RRZE.
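For illustration, a minimal sketch of how an application can use
LIKWID's marker API on top of that daemon (assumptions: LIKWID is
installed, the code is built with -DLIKWID_PERFMON and linked with
-llikwid; the region name "compute" is arbitrary):

#include <likwid-marker.h>

int main()
{
    LIKWID_MARKER_INIT;                  // attach to LIKWID's measurement backend
    LIKWID_MARKER_START("compute");      // begin a named measurement region

    double sum = 0.0;
    for (int i = 0; i < 1000000; ++i) {  // toy workload to measure
        sum += i * 0.5;
    }

    LIKWID_MARKER_STOP("compute");       // end the region
    LIKWID_MARKER_CLOSE;                 // flush counter results
    return sum > 0.0 ? 0 : 1;
}

Run it under likwid-perfctr with the marker switch (e.g.
"likwid-perfctr -C 0 -g FLOPS_DP -m ./a.out") to get per-region counter
values. Older LIKWID releases ship these macros in likwid.h instead of
likwid-marker.h.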
Cheers
-Andreas
[1] https://github.com/RRZE-HPC/likwid
--
On 14:26 Wed 26 Mar , Ross Boylan wrote:
> [Main part is at the bottom]
> On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> > If you have a complex workflow with varying computational loads, then
> > you might want to take a look at runtime systems which allow you to
Heya,
On 19:21 Wed 26 Mar , Gus Correa wrote:
> On 03/26/2014 05:26 PM, Ross Boylan wrote:
> > [Main part is at the bottom]
> > On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> >> On 09:08 Wed 26 Mar , Ross Boylan wrote:
> >>> Second,
Ross-
On 09:08 Wed 26 Mar , Ross Boylan wrote:
> On Wed, 2014-03-26 at 10:27 +, Jeff Squyres (jsquyres) wrote:
> > On Mar 26, 2014, at 1:31 AM, Andreas Schäfer wrote:
> >
> > >> Even when "idle", MPI processes use all the CPU. I thought I remember
> more hardware resources to the remaining HT that is left in each core
> (e.g., deeper queues).
Oh, I didn't know that. That's interesting! Do you have any links with
in-depth info on that?
Thanks!
-Andreas
--
run slower with SMT
("hyperthreading").
Cheers
-Andreas
--
Then we can integrate the client-side into our library to allow users
to let their simulations run through despite nodes failing.
Thanks!
-Andreas
--
If we had the ability to dynamically connect/disconnect nodes in a
robust way, then we could build fault-resilient apps on top of that.
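For reference, MPI's dynamic process management already sketches what
such a building block could look like; whether it is robust enough
under node failure is exactly the open question. A toy example (the
server/client roles via argv are my own convention for illustration):

#include <mpi.h>
#include <cstdio>
#include <cstring>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    char port[MPI_MAX_PORT_NAME];
    MPI_Comm intercomm;

    if (argc > 1 && std::strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);      // obtain a connectable port name
        std::printf("port: %s\n", port);         // hand this string to the client out of band
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
        MPI_Close_port(port);
    } else if (argc > 2) {                       // client: argv[2] is the port name
        MPI_Comm_connect(argv[2], MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    } else {
        MPI_Finalize();
        return 1;
    }

    MPI_Comm_disconnect(&intercomm);             // leave the connection gracefully
    MPI_Finalize();
    return 0;
}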
Best
-Andreas
--
It's hard to give you advice with so little information. Otherwise I
might just say "use MPI and you're done".
In any case this is probably not the right mailing list to ask these
questions, as this list is specifically for Open MPI, not MPI in
general.
Best
-Andreas
--
Hi,
On 00:05 Fri 24 Aug , Reuti wrote:
> On 23.08.2012 at 23:28, Andreas Schäfer wrote:
>
> > ...
> > checking for style of include used by make... GNU
> > checking how to create a ustar tar archive...
> > ATTENTION! pax archive volume change required.
> > Input archive name or "." to quit pax.
> > Archive name >
=== 8< *snip* ==
--
On 17:55 Fri 01 Jun , Rayson Ho wrote:
> We posted an MPI quiz but so far no one on the Grid Engine list has
> the answer that Jeff was expecting:
>
> http://blogs.scalablelogic.com/
That link gives me an "error 503"?
--
Since the datatype variable is merely a handle, Open MPI has an
internal data store for each user-defined datatype. Same for requests,
AFAIK.
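To make the handle semantics concrete, a small sketch (standard MPI,
nothing Open MPI specific): freeing the handle is legal even while a
pending send still uses the datatype, because the implementation's
internal object lives on until the communication completes.

#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Datatype pair;                        // just a handle
    MPI_Type_contiguous(2, MPI_DOUBLE, &pair);
    MPI_Type_commit(&pair);

    double in[2] = {1.0, 2.0}, out[2];
    MPI_Request req;
    MPI_Isend(in, 1, pair, rank, 0, MPI_COMM_WORLD, &req);  // send to self
    MPI_Type_free(&pair);                     // handle gone, internal object survives
    MPI_Recv(out, 2, MPI_DOUBLE, rank, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}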
Best
-Andreas
--
Personally, I'd have a look directly at the code (i.e., search for
cudaSetDevice()).
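The pattern such a search typically turns up looks roughly like this (a
sketch, assuming one process per GPU; error checking omitted):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int numDevices = 0;
    cudaGetDeviceCount(&numDevices);
    if (numDevices > 0) {
        cudaSetDevice(rank % numDevices);    // round-robin ranks onto GPUs
    }

    MPI_Finalize();
    return 0;
}

On multi-node runs one would key the selection off the node-local rank
rather than the global rank, but the idea is the same.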
HTH
-Andreas
--
What are you using, and how would you specify the GPU to use sans MPI?
Best
-Andreas
--
This indicates that he would like to use both: slots AND weights.
--
InfiniBand 4x QDR (active_width 4X, active_speed 10.0 Gbps), so I
/should/ be able to get about twice the throughput of what I'm
currently seeing.
--
ping-pong for >256K.
I'll try to find an Intel system to repeat the tests. Maybe it's AMD's
different memory subsystem/cache architecture which is slowing Open
MPI? Or are my systems just badly configured?
Best
-Andreas
--
> --bind-to-core or --bind-to-socket on the cmd line?
> Otherwise, the processes are running unbound, which makes a significant
> difference to performance.
>
>
> On Jul 9, 2010, at 3:15 AM, Andreas Schäfer wrote:
>
> > Maybe I should add that for tests I ran the benchmarks
Maybe I should add that for tests I ran the benchmarks with two MPI
processes: for InfiniBand one process per node and for shared memory
both processes were located on one node.
--
10   16[  ] ==( 4X 2.5 Gbps Down/Polling )==>  [  ] "" ( )
10   17[  ] ==( 4X 2.5 Gbps Active/LinkUp )==> 52[  ] "faui36a HCA-1" ( )
10   18[  ] ==( 4X 2.5 Gbps Down/Polling )==>  [  ] "" ( )
Would it be an option to only ship the source necessary to build
flex.exe? One could then add an additional build stage during which
flex.exe is compiled, just before it is required.
Just my $0.02
-Andreas
--
(the vector). Since datatypes can only be used for objects with fixed
size (and layout), you can't define an MPI_Datatype for this. I'd
suggest using Boost.MPI in this case
(http://www.boost.org/doc/libs/1_35_0/doc/html/mpi.html).
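A minimal sketch of the Boost.MPI route (assuming Boost was built with
its MPI component): serialization takes care of the vector's varying
size, so no hand-built datatype is needed.

#include <boost/mpi.hpp>
#include <boost/serialization/vector.hpp>
#include <vector>

namespace mpi = boost::mpi;

int main(int argc, char** argv)
{
    mpi::environment env(argc, argv);   // wraps MPI_Init/MPI_Finalize
    mpi::communicator world;            // wraps MPI_COMM_WORLD

    if (world.rank() == 0) {
        std::vector<double> v = {1.0, 2.0, 3.0};
        world.send(1, 0, v);            // size and contents are serialized
    } else if (world.rank() == 1) {
        std::vector<double> v;
        world.recv(0, 0, v);            // vector is resized automatically
    }
    return 0;
}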
Cheers
-Andreas
--
> I would not need to delete, just add in front of MPICH.
> Would you please help me with that?
I utterly hope I just did.
Most sincerely yours ;-)
-Andreas
--
I'd use transcode (www.transcoding.org) and would suggest MPEG output
(MPEG-4, or MPEG-2 if you really must). But that's just what I prefer.
Cheers
-Andi
--
I prefer how-to pages for this, as you can copy & paste the commands
directly into your own shell.
> - ...other [low-budget] suggestions?
Maybe a tad higher audio bitrate. And some people don't like the .mov
format, but that isn't really important.
Thanks!
-Andreas
--
On 12:28 Fri 30 May , Lee Amy wrote:
> 2008/5/29 Andreas Schäfer:
> Thank you very much. If I do a shorter job it seems to run well. And
> the job doesn't repeatedly fail at the same time, but it will fail with
> this error message. Anyway, I'm not using a scheduling system
Sounds like your application is terminated by an external instance,
maybe because your job exceeded the wall clock time limit of your
scheduling system. Does the job repeatedly fail at the same time? Do
shorter jobs finish successfully?
Just my 0.02 Euros (-8
Cheers
-Andreas
--
if you want to.
Cheers!
-Andi
--
This is just a guess: it's not OMPI's fault but VASP's, since the
segfault happens in one of its functions. Maybe you should have a look
there.
HTH
-Andi
--
cannot be MPI data types." [1]
AFAIK, boost::mpi will thus buffer all vectors to be sent. This might
not be as efficient as just feeding it a raw pointer and the number of
elements.
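To make the contrast concrete, a sketch of the pointer-plus-count
overload: since double is an MPI datatype, this bypasses the
serialization buffering entirely.

#include <boost/mpi.hpp>
#include <vector>

namespace mpi = boost::mpi;

int main(int argc, char** argv)
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    std::vector<double> v(1024, 1.0);

    if (world.rank() == 0) {
        // raw pointer + element count: mapped to a plain MPI send
        world.send(1, 0, v.data(), static_cast<int>(v.size()));
    } else if (world.rank() == 1) {
        world.recv(0, 0, v.data(), static_cast<int>(v.size()));
    }
    return 0;
}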
Cheers!
-Andreas
[1] http://www.boost.org/doc/libs/1_35_0/doc/html/mpi/tutorial.html#mpi.point_to_point
--
We can't help you if you don't provide us with self-sufficient code;
small excerpts mixed with comments won't cut it in most cases.
Cheers
-Andreas
--
the error (and still constitutes
a valid/complete MPI program).
Cheers!
-Andreas
--
You could do so,
> king regards, oeter
your majesty Oeter ;-)
Cheers
-Andreas
--
whichever creative and colorful way you like.
Cheers!
-Andreas
--
> correct MPI apps to avoid this optimization -- a proper fix is coming
> in the v1.3 series.
Yo, I've just tried it with the current SVN and couldn't reproduce the
deadlock. Nice!
Cheers
-Andreas
--
lib64/libmpi.so.0
#3 0x0040ca04 in MPI::Comm::Send ()
#4 0x00409700 in main ()
Anyone got a clue?
--
they're really just one-dimensional in memory.
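A sketch of what that means in practice: a statically sized 2D array is
contiguous, so it can travel as ROWS*COLS elements starting at the
first entry (the array shape here is chosen arbitrarily).

#include <mpi.h>

enum { ROWS = 4, COLS = 8 };

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double grid[ROWS][COLS] = {};

    if (rank == 0) {
        MPI_Send(&grid[0][0], ROWS * COLS, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&grid[0][0], ROWS * COLS, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}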
Cheers
-Andreas
--
node, which is 0 in your case. Thus, the other nodes cannot produce the
same output as node 0.
I've attached my reworked version (including some initialization code
for clarity). If you want me to debug a program of yours again, send a
floppy along with a pizza Hawaii (cartwheel size) to:
> read a few papers about it - the basic approach would be to
> parallelize the divide and conquer part - which would result in a lot
> of network messages...
As already said, please read Powers' paper from above. I could imagine
that even though this results in _many_ messages, the algorithm