s also supposed to
work for collective nonblocking communication (which includes my
broadcasts). I haven't found any advice yet, so I hope to find some help on
this mailing list.
Kind regards,
Markus Jeromin
PS: Testbed for calling MPI_Cancel, written in Java.
___
package distributed.m
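(The Java testbed attached above is cut off in the archive. For illustration
only, here is a hypothetical C++ sketch of the pattern being asked about:
start a nonblocking broadcast, attempt to cancel it, and inspect the result.
Whether an implementation actually supports cancelling such a request is
exactly the open question; run with at least two ranks.)

// Hypothetical sketch of cancelling a nonblocking broadcast request.
#include "mpi.h"
#include <cstdio>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int payload = (rank == 0) ? 42 : 0;
  MPI_Request request;
  MPI_Ibcast(&payload, 1, MPI_INT, 0, MPI_COMM_WORLD, &request);

  MPI_Cancel(&request);   // the call whose behaviour is in question

  MPI_Status status;
  MPI_Wait(&request, &status);

  int cancelled = 0;
  MPI_Test_cancelled(&status, &cancelled);
  std::printf("rank %d: cancelled = %d\n", rank, cancelled);

  MPI_Finalize();
  return 0;
}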
On 25.04.2014 23:40, Ralph Castain wrote:
We don't fully support THREAD_MULTIPLE, and most definitely not when
using IB. We are planning on extending that coverage in the 1.9
series
Ah OK, thanks for the fast reply.
Best regards,
Markus
--
Markus Wittmann, HPC Services
Friedrich-Alexander-Universität Erlangen-Nürnberg
Regionales Rechenzentrum Erlangen (RRZE)
Martensstrasse 1, 91058 Erlangen, Germany
http://www.rrze.fau.de/hpc/
// Compile with: mpicc test.c -pthread -o
Hi,
OK, that makes it clear.
Thank you for the fast response.
Regards,
Markus
On 07.11.2012 13:49, Iliev, Hristo wrote:
Hello, Markus,
The openib BTL component is not thread-safe. It disables itself when
the thread support level is MPI_THREAD_MULTIPLE. See this rant from
one of my
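(For reference, a minimal hypothetical sketch, not taken from this thread, of
requesting MPI_THREAD_MULTIPLE and checking which level the library actually
grants; compile with mpicxx:)

// Hypothetical sketch: request MPI_THREAD_MULTIPLE and check what is granted.
#include "mpi.h"
#include <cstdio>

int main(int argc, char** argv)
{
  int provided = MPI_THREAD_SINGLE;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  // The thread-level constants are ordered, so a simple comparison suffices.
  if (rank == 0 && provided < MPI_THREAD_MULTIPLE)
    std::printf("MPI_THREAD_MULTIPLE not provided (got %d)\n", provided);

  MPI_Finalize();
  return 0;
}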
2048 (4)
active_mtu: 2048 (4)
sm_lid: 48
port_lid: 278
port_lmc: 0x00
Thanks in advance for the help.
Regards,
Markus
--
Markus Wittm
this
and this works now.
But now I have the same problem again (the problem I wrote to you about in the
first place):
markus@linux-6wa6:/media/808CCB178CCB069E/MD Simulations/Test Simu1>
sudo mpirun -n 4 ./DLPOLY.Z
root's password:
[linux-6wa6:05565] [[INVALID],INVALID] ORTE_ERROR_LOG: Not
aries and you could link against
those and that may work
MM
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Markus Stiller
Sent: 24 November 2011 20:41
To: us...@open-mpi.org
Subject: [OMPI users] open-mpi error
Hello,
I have some pr
or
mpirun -n 4 ./DLPOLY.z
I get this error:
--
[linux-6wa6:02927] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
orterun.c at line 543
markus@linux-6wa6:/media/808CCB178CCB069E/MD Simulations/Test Simu1>
sudo mpiexec -n 4 ./DLPOLY.Z
[linux-6wa6:03731] [[INVALID],IN
e and have an impact on the development of a new cloud
computing calculation platform.
Best
Markus
--
Dr. rer. nat. Markus Schmidberger
Senior Community Manager
Cloudnumbers.com GmbH
Chausseestraße 6
10119 Berlin
www.cloudnumbers.com
E-Mail: markus.schmidber...@clo
Dear Colleagues,
This is a short survey (21 questions that take about 10 minutes to
answer) in the context of the research work for my PhD thesis and the Munich
Center of Advanced Computing (Project B2). It would be very helpful if
you would take the time to answer my questions concerning the use
not using nested
vectors but just ones that contain PODs as value_type (or even
C-arrays).
If you insist on using complicated containers, you will end up
writing your own MPI-C++ abstraction (resulting in a library). This
will be a lot of (unnecessary and hard) work.
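(To illustrate that advice with a hypothetical example, not code from this
thread: a flat std::vector of a POD, such as the made-up Particle below, is
contiguous and can be handed to MPI directly, while nested vectors are not.)

// Sketch: a flat std::vector of a POD is contiguous, so its buffer can be
// passed to MPI directly; a std::vector<std::vector<double>> cannot.
#include "mpi.h"
#include <vector>

struct Particle            // hypothetical POD value_type
{
  double x, y, z;
};

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);

  std::vector<Particle> particles(100);
  // Each Particle is just three doubles (no padding), so the whole buffer
  // can be described without a custom datatype.
  MPI_Bcast(particles.data(), 3 * static_cast<int>(particles.size()),
            MPI_DOUBLE, 0, MPI_COMM_WORLD);

  // Nested vectors would need one call per inner vector or manual packing,
  // which is where the extra (unnecessary and hard) work starts.

  MPI_Finalize();
  return 0;
}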
Just my 2 cents.
Cheers,
Markus
-2 of openmpi.
Am I doing something completely wrong or have I accidentally found a bug?
Cheers,
Markus
#include"mpi.h"
#include
struct LocalIndex
{
int local_;
char attribute_;
char public_;
};
struct IndexPair
{
int global_;
LocalIndex local_;
};
int main(int argc, c
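(The attached program breaks off here. As a stand-in, a hypothetical sketch of
one way to communicate an IndexPair with a derived datatype built via
MPI_Type_create_struct; this is an assumption about the kind of code involved,
not the original test. Run with at least two ranks.)

// Hypothetical sketch: build an MPI datatype for IndexPair and send one.
#include "mpi.h"
#include <cstddef>   // offsetof

struct LocalIndex    // structs repeated so the sketch compiles standalone
{
  int local_;
  char attribute_;
  char public_;
};

struct IndexPair
{
  int global_;
  LocalIndex local_;
};

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Describe IndexPair as one int plus the nested LocalIndex as raw bytes;
  // treating LocalIndex as MPI_BYTE keeps the sketch short.
  int blocklengths[2] = { 1, static_cast<int>(sizeof(LocalIndex)) };
  MPI_Aint displacements[2] = {
    static_cast<MPI_Aint>(offsetof(IndexPair, global_)),
    static_cast<MPI_Aint>(offsetof(IndexPair, local_)) };
  MPI_Datatype types[2] = { MPI_INT, MPI_BYTE };

  MPI_Datatype pairType;
  MPI_Type_create_struct(2, blocklengths, displacements, types, &pairType);
  // Resize so arrays of IndexPair get the right extent (trailing padding).
  MPI_Datatype resizedType;
  MPI_Type_create_resized(pairType, 0,
                          static_cast<MPI_Aint>(sizeof(IndexPair)),
                          &resizedType);
  MPI_Type_commit(&resizedType);

  IndexPair pair = { 42, { 7, 'a', 1 } };
  if (rank == 0)
    MPI_Send(&pair, 1, resizedType, 1, 0, MPI_COMM_WORLD);
  else if (rank == 1)
    MPI_Recv(&pair, 1, resizedType, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

  MPI_Type_free(&resizedType);
  MPI_Type_free(&pairType);
  MPI_Finalize();
  return 0;
}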
move some student exercises over
from LAM to OpenMPI. I don't expect to write actual applications that
treat MPI like this myself, but on the other hand, not having to do
throttling on top of MPI could be an advantage in some application
patterns.
Regards,
--
// John Markus Bjørndalen
// http://www.cs.uit.no/~johnm/
The opportunity for
pipelining the operations there is rather small since they can't get
much out of phase with each other.
Regards,
--
// John Markus Bjørndalen
// http://www.cs.uit.no/~johnm/
1 set(['PMPI_Reduce'])
--- snip-
I don't have any suggestions for a fix though, since this is the first
time I've looked into the OpenMPI code.
By the way, in case it makes a difference for triggering the bug: I'm running
this on a cluster with 1 frontend and 44 nodes. The cluster runs Rocks
4.1, and each of the nodes is a 3.2 GHz P4 Prescott machine with 2 GB RAM,
connected with gigabit Ethernet.
Regards,
--
// John Markus Bjørndalen
// http://www.cs.uit.no/~johnm/