Hi
I am trying to run the following simple demo on a cluster of two nodes
--
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    printf("world size = %d\n", world_size);
    MPI_Finalize();
    return 0;
}
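A typical way to launch this across the two nodes (the hostnames and binary name below are placeholders) would be something like:

mpirun -np 2 --host node1,node2 ./demo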
Ioannis,
### What version of Open MPI are you using? (e.g., v1.10.3, v2.1.0, git branch name and hash, etc.)

### Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)

### Please describe the system on which you are running
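For the first question, the version can usually be read straight from the ompi_info output (assuming the ompi_info matching the installation you actually launch with is in your PATH), e.g.:

ompi_info | grep "Open MPI:"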
I am trying to craft a client-server layer that needs to have 2 different modes
of operation. In the “remote server” mode, the server runs on distinct
processes, and an intercommunicator is a perfect fit for my design. In the
“local server” mode, the server will actually run on a dedicated thread.
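A minimal sketch of the usual connect/accept pattern for such a “remote server” mode (assuming the port name reaches the client out of band, e.g. via a file or MPI_Publish_name/MPI_Lookup_name) would be:

/* server side: open a port and accept one client connection */
char port_name[MPI_MAX_PORT_NAME];
MPI_Comm client;
MPI_Open_port(MPI_INFO_NULL, port_name);
/* ... make port_name visible to the client ... */
MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

/* client side: connect with the same port name */
MPI_Comm server;
MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);

Both calls return an intercommunicator, so ranks in later point-to-point calls refer to the remote group, which maps naturally onto the client/server split.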
Based upon the symbols in the backtrace, you are using Intel MPI, not
Open-MPI. Since Intel MPI is derived from MPICH, a bug in the MPI library
is likely also present in MPICH, so you might try to reproduce this in
MPICH. You can also try to run with Open-MPI. If you see a problem in both
Intel MPI/MPICH and Open-MPI, it is most likely an issue in your
application rather than in the MPI library.
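If it is unclear which library the binary actually picks up at run time, one quick check (assuming an MPI-3 library) is to print the library version string:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;
    MPI_Init(&argc, &argv);
    /* prints e.g. "Open MPI v2.1.1, ..." or "Intel(R) MPI Library ..." */
    MPI_Get_library_version(version, &len);
    printf("%s\n", version);
    MPI_Finalize();
    return 0;
}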
A process or rank is not allowed to participate multiple times in the same
group (at least not in the current version of the MPI standard). The
sentence about "dual membership" you pointed out makes sense only for
inter-communicators (and the paragraph where the sentence is located
clearly talks about inter-communicators).
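For illustration, when deriving a sub-group every rank may appear at most once in the ranks array (the ranks below are arbitrary); listing a rank twice makes the call erroneous:

MPI_Group world_group, sub_group;
MPI_Comm sub_comm;
int ranks[3] = {0, 1, 2};   /* distinct ranks: valid              */
/* int ranks[3] = {0, 0, 1};   rank 0 listed twice: erroneous     */

MPI_Comm_group(MPI_COMM_WORLD, &world_group);
MPI_Group_incl(world_group, 3, ranks, &sub_group);
MPI_Comm_create(MPI_COMM_WORLD, sub_group, &sub_comm);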
Hi Gilles
Thank you for your prompt response.
Here is some information about the system
Ubuntu 16.04 server
Linux-4.4.0-75-generic-x86_64-with-Ubuntu-16.04-xenial
On HP ProLiant DL320R05 Generation 5, 4GB RAM, 4x120GB RAID-1 HDD, 2
Ethernet ports 10/100/1000
HP StorageWorks 70 Modular Smart Array
Dear Jeff Hammond,
Thanks a lot for the reply. I have tried with mpiexec and I am getting the
same error. But according to this link:
http://stackoverflow.com/questions/7549316/mpi-partition-matrix-into-blocks
it is possible. Any suggestions/advice?
Hi,
If you run this under a debugger and look at how MPI_Scatterv is
invoked, you will find that:
- sendcounts = {1, 1, 1}
- resizedtype has size 32
- recvcount*sizeof(MPI_INTEGER) = 32 on task 0, but 16 on task 1 and 2
=> too much data is sent to tasks 1 and 2, hence the error.
in this case
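In other words, the bytes sent to a task (sendcounts[i] times the size of the send type) must match the bytes that task asks to receive (recvcount times the size of the receive type). A minimal consistent sketch (the 8-int block size is made up here to mirror the 32-byte resized type, using MPI_INT in C):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* a resized type covering 8 ints -> its size is 32 bytes */
    MPI_Datatype block, resized;
    MPI_Type_contiguous(8, MPI_INT, &block);
    MPI_Type_create_resized(block, 0, 8 * sizeof(int), &resized);
    MPI_Type_commit(&resized);

    int recvbuf[8], *sendbuf = NULL;
    int *sendcounts = malloc(nprocs * sizeof(int));
    int *displs     = malloc(nprocs * sizeof(int));
    for (int i = 0; i < nprocs; i++) { sendcounts[i] = 1; displs[i] = i; }
    if (rank == 0) sendbuf = calloc(8 * nprocs, sizeof(int));

    /* every task receives 8 MPI_INT = 32 bytes, matching the 32 bytes sent to it */
    MPI_Scatterv(sendbuf, sendcounts, displs, resized,
                 recvbuf, 8, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Type_free(&resized);
    MPI_Finalize();
    return 0;
}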
Thanks for all the information.
What I meant by
mpirun --mca shmem_base_verbose 100 ...
is really that you modify your mpirun command line (or your Torque script if
applicable) and add
--mca shmem_base_verbose 100
right after mpirun.
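For example (process count and binary name are placeholders), the modified line would look like:

mpirun --mca shmem_base_verbose 100 -np 2 ./demo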
Cheers,
Gilles
On 5/16/2017 3:59 AM, Ioannis Botsis wrote: