[OMPI users] MPI_CANCEL for nonblocking collective communication

2017-06-09 Thread Markus
s also supposed to work for collective nonblocking communication (which includes my broadcasts). I haven't found any advice yet, so I hope to find some help in this mailing list. Kind regards, Markus Jeromin PS: Testbed for calling MPI_Cancel, written in Java. ___ package distributed.m
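
The MPI-3 standard does not permit what the poster is attempting: it is erroneous to call MPI_Cancel (or MPI_Request_free) on a request produced by a nonblocking collective such as MPI_Ibcast; the request must be completed with MPI_Wait or MPI_Test. A minimal C++ sketch of that rule (my illustration, not the poster's Java testbed):

// Minimal sketch (not the poster's Java testbed): a nonblocking broadcast
// may not be cancelled; MPI-3 requires completing it instead.
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int payload = (rank == 0) ? 42 : 0;
    MPI_Request req;
    MPI_Ibcast(&payload, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

    // MPI_Cancel(&req);               // erroneous for nonblocking collectives
    MPI_Wait(&req, MPI_STATUS_IGNORE); // the only portable way to finish it

    std::printf("rank %d got %d\n", rank, payload);
    MPI_Finalize();
    return 0;
}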

Re: [OMPI users] Deadlocks and warnings from libevent when using MPI_THREAD_MULTIPLE

2014-04-26 Thread Markus Wittmann
On 25.04.2014 23:40, Ralph Castain wrote: We don't fully support THREAD_MULTIPLE, and most definitely not when using IB. We are planning on extending that coverage in the 1.9 series. Ah OK, thanks for the fast reply. -- Markus Wittmann, HPC Services Friedrich-Alexander-Universität Erl

[OMPI users] Deadlocks and warnings from libevent when using MPI_THREAD_MULTIPLE

2014-04-25 Thread Markus Wittmann
Best regards, Markus -- Markus Wittmann, HPC Services Friedrich-Alexander-Universität Erlangen-Nürnberg Regionales Rechenzentrum Erlangen (RRZE) Martensstrasse 1, 91058 Erlangen, Germany http://www.rrze.fau.de/hpc/ info.tar.bz2 Description: Binary data // Compile with: mpicc test.c -pthread -o
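
The attached info.tar.bz2 and test.c are not reproduced in this excerpt. As a point of reference, here is a minimal C++ sketch (my illustration, not the attachment) of the usual starting point for such cases: request MPI_THREAD_MULTIPLE through MPI_Init_thread and check the provided level before any thread calls MPI, since a build without full thread support may return a lower level.

// Minimal sketch (not the attached test.c): request MPI_THREAD_MULTIPLE and
// verify what the library actually provides before letting threads call MPI.
#include <mpi.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv) {
    int provided = MPI_THREAD_SINGLE;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        std::fprintf(stderr, "MPI_THREAD_MULTIPLE not provided (got %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // ... concurrent MPI calls from multiple threads would go here ...

    MPI_Finalize();
    return 0;
}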

Re: [OMPI users] Problems with btl openib and MPI_THREAD_MULTIPLE

2012-11-08 Thread Markus Wittmann
Hi, OK, that makes it clear. Thank you for the fast response. Regards, Markus On 07.11.2012 13:49, Iliev, Hristo wrote: Hello, Markus, The openib BTL component is not thread-safe. It disables itself when the thread support level is MPI_THREAD_MULTIPLE. See this rant from one of my

[OMPI users] Problems with btl openib and MPI_THREAD_MULTIPLE

2012-11-07 Thread Markus Wittmann
2048 (4) active_mtu: 2048 (4) sm_lid: 48 port_lid: 278 port_lmc: 0x00 Thanks for the help in advance. Regards, Markus -- Markus Wittm

Re: [OMPI users] open-mpi error

2011-11-26 Thread Markus Stiller
this and this works now. But now I have the same problem again (the problem why I wrote you in the first place): markus@linux-6wa6:/media/808CCB178CCB069E/MD Simulations/Test Simu1> sudo mpirun -n 4 ./DLPOLY.Z root's password: [linux-6wa6:05565] [[INVALID],INVALID] ORTE_ERROR_LOG: Not

Re: [OMPI users] open-mpi error

2011-11-24 Thread Markus Stiller
aries and you could link against those and that may work MM -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Markus Stiller Sent: 24 November 2011 20:41 To: us...@open-mpi.org Subject: [OMPI users] open-mpi error Hello, I have some pr

[OMPI users] open-mpi error

2011-11-24 Thread Markus Stiller
or mpirun -n 4 ./DLPOLY.z I get this error: -- [linux-6wa6:02927] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file orterun.c at line 543 markus@linux-6wa6:/media/808CCB178CCB069E/MD Simulations/Test Simu1> sudo mpiexec -n 4 ./DLPOLY.Z [linux-6wa6:03731] [[INVALID],IN

[OMPI users] Running your MPI application on a Computer Cluster in the Cloud - cloudnumbers.com

2011-07-13 Thread Markus Schmidberger
e and have an impact on the development of a new cloud computing calculation platform. Best Markus -- Dr. rer. nat. Markus Schmidberger Senior Community Manager Cloudnumbers.com GmbH Chausseestraße 6 10119 Berlin www.cloudnumbers.com E-Mail: markus.schmidber...@clo

[OMPI users] [R] Short survey concerning the use of software engineering in the field of High Performance Computing

2010-08-31 Thread Markus Schmidberger
Dear Colleagues, this is a short survey (21 questions that take about 10 minutes to answer) in the context of the research work for my PhD thesis and the Munich Center of Advanced Computing (Project B2). It would be very helpful if you would take the time to answer my questions concerning the use

Re: [OMPI users] MPI and C++ - now Send and Receive of Classes and STL containers

2009-07-07 Thread Markus Blatt
not using nested vectors but just ones that contain PODs as value_type (or even C-arrays). If you insist on using complicated containers, you will end up writing your own MPI-C++ abstraction (resulting in a library). This will be a lot of (unnecessary and hard) work. Just my 2 cents. Cheers, Markus
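
A minimal C++ sketch of the pattern Markus recommends (my illustration, not code from the thread): a flat std::vector of PODs is contiguous, so its data() pointer can be passed straight to MPI_Send/MPI_Recv, whereas nested vectors are not contiguous and would need per-element marshalling. Run with at least two ranks, e.g. mpirun -n 2.

// Sketch of the recommended pattern: a flat std::vector of PODs maps onto a
// single send/receive; MPI_Probe lets the receiver size its buffer first.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        std::vector<double> data(1000, 3.14);
        MPI_Send(data.data(), static_cast<int>(data.size()), MPI_DOUBLE,
                 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
        int count = 0;
        MPI_Get_count(&status, MPI_DOUBLE, &count);

        std::vector<double> data(count);
        MPI_Recv(data.data(), count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("received %d doubles\n", count);
    }

    MPI_Finalize();
    return 0;
}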

[OMPI users] Problem with cascading derived data types

2009-02-27 Thread Markus Blatt
-2 of openmpi. Am I doing something completely wrong or have I accidentally found a bug? Cheers, Markus #include "mpi.h" #include struct LocalIndex { int local_; char attribute_; char public_; }; struct IndexPair { int global_; LocalIndex local_; }; int main(int argc, c
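
The attached program is cut off above. A hedged C++ sketch of how such a nested ("cascading") datatype is usually assembled (my reconstruction, not the poster's code): build a type for LocalIndex with MPI_Type_create_struct, resize it to the struct's true extent, then embed it in the type for IndexPair.

// Sketch of a nested derived datatype for the structs above; not the
// poster's exact program. Offsets come from offsetof, and each type is
// resized so its extent matches sizeof(...) including padding.
#include <mpi.h>
#include <cstddef> // offsetof

struct LocalIndex { int local_; char attribute_; char public_; };
struct IndexPair  { int global_; LocalIndex local_; };

static MPI_Datatype make_local_index_type() {
    int blocklengths[3] = {1, 1, 1};
    MPI_Aint displs[3] = {offsetof(LocalIndex, local_),
                          offsetof(LocalIndex, attribute_),
                          offsetof(LocalIndex, public_)};
    MPI_Datatype types[3] = {MPI_INT, MPI_CHAR, MPI_CHAR};
    MPI_Datatype tmp, result;
    MPI_Type_create_struct(3, blocklengths, displs, types, &tmp);
    MPI_Type_create_resized(tmp, 0, sizeof(LocalIndex), &result);
    MPI_Type_free(&tmp);
    MPI_Type_commit(&result);
    return result;
}

static MPI_Datatype make_index_pair_type(MPI_Datatype local_index_type) {
    int blocklengths[2] = {1, 1};
    MPI_Aint displs[2] = {offsetof(IndexPair, global_),
                          offsetof(IndexPair, local_)};
    MPI_Datatype types[2] = {MPI_INT, local_index_type};
    MPI_Datatype tmp, result;
    MPI_Type_create_struct(2, blocklengths, displs, types, &tmp);
    MPI_Type_create_resized(tmp, 0, sizeof(IndexPair), &result);
    MPI_Type_free(&tmp);
    MPI_Type_commit(&result);
    return result;
}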

Re: [OMPI users] OpenMPI 1.2.5 race condition / core dump with MPI_Reduce and MPI_Gather

2008-02-29 Thread John Markus Bjørndalen
move some student exercises over from LAM to OpenMPI. I don't expect to write actual applications that treat MPI like this myself, but on the other hand, not having to do throttling on top of MPI could be an advantage in some application patterns. Regards, -- // John Markus Bjørndalen // http://www.cs.uit.no/~johnm/

Re: [OMPI users] OpenMPI 1.2.5 race condition / core dump with MPI_Reduce and MPI_Gather

2008-02-28 Thread John Markus Bjørndalen
The opportunity for pipelining the operations there is rather small since they can't get much out of phase with each other. Regards, -- // John Markus Bjørndalen // http://www.cs.uit.no/~johnm/

[OMPI users] OpenMPI 1.2.5 race condition / core dump with MPI_Reduce and MPI_Gather

2008-02-22 Thread John Markus Bjørndalen
1 set(['PMPI_Reduce']) --- snip- I don't have any suggestions for a fix though, since this is the first time I've looked into the OpenMPI code. Btw., in case it makes a difference for triggering the bug: I'm running this on a cluster with 1 frontend and 44 nodes. The cluster runs Rocks 4.1, and each node is a 3.2GHz P4 Prescott machine with 2GB RAM, connected with gigabit Ethernet. Regards, -- // John Markus Bjørndalen // http://www.cs.uit.no/~johnm/
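
The reproducer itself is not included in this excerpt. As a hedged reconstruction (my sketch, not the original benchmark), the pattern the thread describes is collectives issued back to back in a tight loop with one rank as the root, which on Open MPI 1.2.5 over gigabit Ethernet reportedly led to the race condition and core dumps in the subject line:

// Minimal stress-loop sketch (my reconstruction, not the original benchmark):
// many back-to-back MPI_Reduce calls with no extra synchronization or
// throttling on top of MPI.
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iterations = 100000;
    for (int i = 0; i < iterations; ++i) {
        int in = rank, out = 0;
        MPI_Reduce(&in, &out, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}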