[OMPI users] (no subject)

2017-05-15 Thread Ioannis Botsis
Hi I am trying to run the following simple demo on a cluster of two nodes -- #include <mpi.h> #include <stdio.h> int main(int argc, char** argv) { MPI_Init(NULL, NULL); int world_size; MPI_Comm_
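
The preview cuts the program off at MPI_Comm_; a minimal sketch of the kind of demo being described (assuming it continues as the standard MPI "hello world" that queries the communicator size and rank) would be:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char** argv)
    {
        /* Initialize the MPI runtime */
        MPI_Init(NULL, NULL);

        /* Query how many ranks are in MPI_COMM_WORLD and which one we are */
        int world_size, world_rank;
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        printf("rank %d of %d\n", world_rank, world_size);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with mpirun across the two nodes.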

Re: [OMPI users] (no subject)

2017-05-15 Thread gilles
Ioannis, ### What version of Open MPI are you using? (e.g., v1.10.3, v2.1.0, git branch name and hash, etc.) ### Describe how Open MPI was installed (e.g., from a source/ distribution tarball, from a git clone, from an operating system distribution package, etc.) ### Please describe the sy

[OMPI users] Form an intercommunicator across threads?

2017-05-15 Thread Clune, Thomas L. (GSFC-6101)
I am trying to craft a client-server layer that needs to have 2 different modes of operation. In the “remote server” mode, the server runs on distinct processes, and an intercommunicator is a perfect fit for my design. In the “local server” mode, the server will actually run on a dedicated thre
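
For the “remote server” mode described here, the standard building block is MPI_Intercomm_create (or MPI_Comm_connect/MPI_Comm_accept for separately started jobs). A minimal sketch, assuming client and server ranks already share MPI_COMM_WORLD; the even/odd split and the tag value are illustrative assumptions, not part of the original post:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char** argv)
    {
        MPI_Init(NULL, NULL);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Illustrative split: even ranks are "clients", odd ranks are "servers" */
        int is_server = rank % 2;
        MPI_Comm local_comm;
        MPI_Comm_split(MPI_COMM_WORLD, is_server, rank, &local_comm);

        /* remote_leader is the MPI_COMM_WORLD rank of the other group's leader:
           rank 0 leads the clients, rank 1 leads the servers */
        int remote_leader = is_server ? 0 : 1;
        MPI_Comm intercomm;
        MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader, 99, &intercomm);

        int remote_size;
        MPI_Comm_remote_size(intercomm, &remote_size);
        printf("rank %d: remote group has %d ranks\n", rank, remote_size);

        MPI_Comm_free(&intercomm);
        MPI_Comm_free(&local_comm);
        MPI_Finalize();
        return 0;
    }

For the “local server” mode the open question is whether the same kind of intercommunicator can be formed across threads of a single process, which is what the reply below addresses.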

Re: [OMPI users] mpi_scatterv problem in fortran

2017-05-15 Thread Jeff Hammond
Based upon the symbols in the backtrace, you are using Intel MPI, not Open-MPI. If there is a bug in the MPI library, it is likely also in MPICH, so you might try to reproduce this in MPICH. You can also try to run with Open-MPI. If you see a problem in both Intel MPI/MPICH and Open-MPI, it is a

Re: [OMPI users] Form an intercommunicator across threads?

2017-05-15 Thread George Bosilca
A process or rank is not allowed to participate multiple times in the same group (at least not in the current version of the MPI standard). The sentence about "dual membership" you pointed out makes sense only for inter-communicators (and the paragraph where the sentence is located clearly talks ab

Re: [OMPI users] (no subject)

2017-05-15 Thread Ioannis Botsis
Hi Gilles Thank you for your prompt response. Here is some information about the system: Ubuntu 16.04 server, Linux-4.4.0-75-generic-x86_64-with-Ubuntu-16.04-xenial, on an HP ProLiant DL320R05 Generation 5, 4GB RAM, 4x120GB RAID-1 HDD, 2 Ethernet ports 10/100/1000, HP StorageWorks 70 Modular Smart A

Re: [OMPI users] mpi_scatterv problem in fortran

2017-05-15 Thread Siva Srinivas Kolukula
Dear Jeff Hammond Thanks a lot for the reply. I have tried with mpiexec and I am getting the same error. But according to this link: http://stackoverflow.com/questions/7549316/mpi-partition-matrix-into-blocks it is possible. Any suggestions/advice?

Re: [OMPI users] mpi_scatterv problem in fortran

2017-05-15 Thread Gilles Gouaillardet
Hi, if you run this under a debugger and look at how MPI_Scatterv is invoked, you will find that: sendcounts = {1, 1, 1}, resizedtype has size 32, and recvcount*sizeof(MPI_INTEGER) = 32 on task 0 but 16 on tasks 1 and 2 => too much data is sent to tasks 1 and 2, hence the error. In this case
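
In other words, the data described by sendcounts[i] times the size of the send type must fit into recvcount times the size of the receive type on every task. A small C sketch (not the poster's Fortran code; the row-per-task layout is an illustrative assumption) where the amounts match by construction:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define COLS 4   /* illustrative row length, not from the original post */

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* The send type describes exactly one row of COLS ints, so
           sendcounts[i] = 1 row matches recvcount = COLS MPI_INT on every task */
        MPI_Datatype row_t;
        MPI_Type_contiguous(COLS, MPI_INT, &row_t);
        MPI_Type_commit(&row_t);

        int *matrix = NULL, *sendcounts = NULL, *displs = NULL;
        if (rank == 0) {
            matrix = malloc(nprocs * COLS * sizeof(int));
            sendcounts = malloc(nprocs * sizeof(int));
            displs = malloc(nprocs * sizeof(int));
            for (int i = 0; i < nprocs * COLS; i++) matrix[i] = i;
            for (int i = 0; i < nprocs; i++) { sendcounts[i] = 1; displs[i] = i; }
        }

        int row[COLS];
        MPI_Scatterv(matrix, sendcounts, displs, row_t,
                     row, COLS, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d received first element %d\n", rank, row[0]);

        MPI_Type_free(&row_t);
        if (rank == 0) { free(matrix); free(sendcounts); free(displs); }
        MPI_Finalize();
        return 0;
    }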

Re: [OMPI users] (no subject)

2017-05-15 Thread Gilles Gouaillardet
Thanks for all the information. What I meant by "mpirun --mca shmem_base_verbose 100 ..." is really that you modify your mpirun command line (or your Torque script, if applicable) and add --mca shmem_base_verbose 100 right after mpirun. Cheers, Gilles On 5/16/2017 3:59 AM, Ioannis Botsis wrot
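
For example (the executable name and process count below are placeholders; only the --mca flag comes from the original message):

    mpirun --mca shmem_base_verbose 100 -np 2 ./a.out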