Thank you for the reference to the Divakar Viswanath book.
It is very generous that it is available online - I just wish it were in
ePub format.
I guess that is a decision of MIT Press. I would happily pay for an ePub
edition. I just cannot justify more shelf space for physical books.
On Thu,
Feel free to holler if you run into trouble - it should be relatively easy to
build and use PRRTE if you have done so for OMPI
On Aug 20, 2020, at 10:49 AM, Carlo Nervi <carlo.ne...@unito.it> wrote:
Thank you Ralph for the suggestion!
I will carefully consider it, although I'm a chemist
I'm using VASP, Quantum Espresso, DFTB+, Gulp, Tinker, Crystal and Gaussian.
VASP, QE and G16 are not a problem (the latter uses threads, up to 48
cores).
QE sometimes slows down, but nothing to worry much about. DFTB+ is often run
as several jobs with MPI. In any case, jobs x mpi <= 48.
I'm wond
Thank you Ralph for the suggestion!
I will carefully consider it, although I'm a chemist and not a sysadmin (I
very much miss having a specialized sysadmin in our Department!).
Carlo
On Thu, Aug 20, 2020 at 6:45 PM Ralph Castain via users <
users@lists.open-mpi.org> wrote:
> Your use-case sou
It's not about Open MPI, but I know of only one book on the internals of
MPI: "Inside the Message Passing Interface: Creating Fast Communication
Libraries" by Alexander Supalov.
I found it useful for understanding how MPI libraries are implemented. It
is no substitute for spending hours reading so
On Thu, Aug 20, 2020 at 3:22 AM Carlo Nervi via users <
users@lists.open-mpi.org> wrote:
> Dear OMPI community,
> I'm a simple end-user with no particular experience.
> I compile quantum chemical programs and use them in parallel.
>
Which code? Some QC codes behave differently than traditional M
Your use-case sounds more like a workflow than an application - in which case,
you probably should be using PRRTE to execute it instead of "mpirun" as PRRTE
will "remember" the multiple jobs and avoid the overload scenario you describe.
This link will walk you through how to get and build it:
http
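For readers unfamiliar with PRRTE's workflow mode, a minimal sketch of what this looks like in practice, assuming PRRTE's persistent DVM commands (`prte`, `prun`, `pterm`) and hypothetical job names - exact flags may differ between PRRTE versions:

```shell
# Start a persistent DVM (distributed virtual machine) once.
# It keeps track of which resources each submitted job is using.
prte --daemonize

# Submit jobs to the running DVM instead of launching separate
# mpirun instances; the DVM avoids overlapping core assignments.
prun -n 8 ./dftb_job_1 &
prun -n 4 ./dftb_job_2 &
wait

# Tear the DVM down when all jobs are finished.
pterm
```

The key difference from repeated mpirun calls is that a single runtime instance "remembers" all running jobs, which is what prevents the oversubscription scenario discussed in this thread.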
Thank you, Christoph. I did not consider the --cpu-list.
However, this works if I have a single script that launches several
jobs (note that each job may use a different number of CPUs). In my
case I have the same script (which launches mpirun), and it is called many
times. The script i
Hello Carlo,
If you execute multiple mpirun commands, they will not know about each other's
resource bindings.
E.g., if you bind to cores, each mpirun will start assigning from the same
first core again.
This then results in oversubscription of the cores, which slows down your
programs - as you did
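One way to work around this overlap, when sticking with independent mpirun invocations, is to hand each one an explicitly disjoint CPU list. A sketch assuming Open MPI's `--cpu-list` option (mentioned later in this thread) and hypothetical job names and sizes:

```shell
# Without explicit CPU lists, each independent mpirun would bind
# starting from core 0, stacking jobs onto the same cores.
# Giving each launch a disjoint set of core IDs avoids the overlap:
mpirun -np 8 --cpu-list 0,1,2,3,4,5,6,7 ./job_a &
mpirun -np 4 --cpu-list 8,9,10,11      ./job_b &
wait
```

The bookkeeping of which cores are free must then be done by the calling script (or by hand), which is exactly what a resource-aware runtime like PRRTE automates.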
Dear OMPI community,
I'm a simple end-user with no particular experience.
I compile quantum chemical programs and use them in parallel.
My system is a 4-socket, 12-cores-per-socket Opteron 6168 system, for a total
of 48 cores and 64 GB of RAM. It has 8 NUMA nodes:
openmpi $ hwloc-info
depth 0: