If the data distribution were sufficiently predictable and long-lived
over the life of the application, could one not define new
communicators to clean up the calls?
> After reading the previous discussion on AllReduce and AlltoAll, I
> thought I would ask my question. I have a case where I hav
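The question above asks whether a stable distribution could be captured once in dedicated communicators and then reused. A minimal sketch of that idea, assuming MPI_Comm_split is used to build the sub-communicator a single time; the even/odd grouping is purely illustrative and not from the original post:

#include <mpi.h>
#include <stdio.h>

/* Hypothetical sketch: if the set of ranks involved in the exchange is
 * known up front, a sub-communicator can be built once and reused for
 * every subsequent collective instead of being recomputed per call. */
int main(int argc, char **argv)
{
    int world_rank, world_size;
    MPI_Comm exchange_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Example grouping (an assumption): even and odd ranks exchange
     * among themselves for the life of the application. */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &exchange_comm);

    /* ... use exchange_comm in the redistribution collectives ... */

    MPI_Comm_free(&exchange_comm);
    MPI_Finalize();
    return 0;
}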
Hi there
I am no SLURM expert. However, it is our understanding that
SLURM_TASKS_PER_NODE means the number of slots allocated to the job, not the
number of tasks to be executed on each node. So the 4(x2) tells us that we
have 4 slots on each of two nodes to work with. You got 4 slots on each node
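For reference, a tiny sketch (not from the original post) that simply prints the variable being discussed; a value such as "4(x2)" is SLURM's compact notation for 4 slots on each of 2 allocated nodes:

#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: print the allocation string SLURM exports to the job. */
int main(void)
{
    const char *tpn = getenv("SLURM_TASKS_PER_NODE");
    printf("SLURM_TASKS_PER_NODE=%s\n", tpn ? tpn : "(unset)");
    return 0;
}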
Hi Werner,
Open MPI does things a little bit differently than other MPIs when it
comes to supporting SLURM. See
http://www.open-mpi.org/faq/?category=slurm
for general information about running with Open MPI on SLURM.
After trying the commands you sent, I am actually a bit surprised by the
re
Sorry - my mistake - I meant AlltoAllV, which is what I use in my code.
Ashley Pittman wrote:
On Thu, 2008-03-20 at 10:27 -0700, Dave Grote wrote:
After reading the previous discussion on AllReduce and AlltoAll, I
thought I would ask my question. I have a case where I have data
On Thu, 2008-03-20 at 10:27 -0700, Dave Grote wrote:
> After reading the previous discussion on AllReduce and AlltoAll, I
> thought I would ask my question. I have a case where I have data
> unevenly distributed among the processes (unevenly means that the
> processes have differing amounts of
After reading the previous discussion on AllReduce and AlltoAll, I
thought I would ask my question. I have a case where I have data
unevenly distributed among the processes (unevenly means that the
processes have differing amounts of data) that I need to globally
redistribute, resulting in a
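The redistribution described above, where every process sends a different amount of data to every other process, is what MPI_Alltoallv (the routine mentioned earlier in the thread) handles. A hedged sketch with made-up send counts, using a plain MPI_Alltoall first to exchange the counts themselves:

#include <mpi.h>
#include <stdlib.h>

/* Sketch of an uneven redistribution with MPI_Alltoallv.  The send
 * counts are invented for illustration: rank r sends (r + dest + 1)
 * ints to each destination. */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendcounts = malloc(size * sizeof(int));
    int *recvcounts = malloc(size * sizeof(int));
    int *sdispls    = malloc(size * sizeof(int));
    int *rdispls    = malloc(size * sizeof(int));

    for (int i = 0; i < size; i++)
        sendcounts[i] = rank + i + 1;        /* uneven, known locally */

    /* Everyone first learns how much it will receive from each peer. */
    MPI_Alltoall(sendcounts, 1, MPI_INT, recvcounts, 1, MPI_INT,
                 MPI_COMM_WORLD);

    int stotal = 0, rtotal = 0;
    for (int i = 0; i < size; i++) {
        sdispls[i] = stotal;  stotal += sendcounts[i];
        rdispls[i] = rtotal;  rtotal += recvcounts[i];
    }

    int *sendbuf = malloc(stotal * sizeof(int));
    int *recvbuf = malloc(rtotal * sizeof(int));
    for (int i = 0; i < stotal; i++)
        sendbuf[i] = rank;                   /* dummy payload */

    /* The actual uneven redistribution. */
    MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                  recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf); free(recvbuf);
    free(sendcounts); free(recvcounts); free(sdispls); free(rdispls);
    MPI_Finalize();
    return 0;
}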
Hello,
Open MPI 1.2.5 and earlier do not let you set the Errhandler for
MPI::FILE_NULL using the C++ bindings.
[You would want to do so because, on error, MPI::File::Open() and
MPI::File::Delete() call the Errhandler associated with FILE_NULL.]
With the C++ bindings, MPI::FILE_NULL is a const obj
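As an illustration only, and not a fix proposed in the report above: with the C bindings, the error handler attached to MPI_FILE_NULL can be changed, and per the bracketed note that is the handler invoked when MPI_File_open or MPI_File_delete fails before a valid file handle exists:

#include <mpi.h>
#include <stdio.h>

/* Sketch: make open/delete failures return error codes via the handler
 * attached to MPI_FILE_NULL, then report the error string. */
int main(int argc, char **argv)
{
    MPI_File fh;
    int rc;

    MPI_Init(&argc, &argv);

    MPI_File_set_errhandler(MPI_FILE_NULL, MPI_ERRORS_RETURN);

    rc = MPI_File_open(MPI_COMM_WORLD, "does_not_exist.dat",
                       MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        printf("MPI_File_open failed: %s\n", msg);
    } else {
        MPI_File_close(&fh);
    }

    MPI_Finalize();
    return 0;
}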
Hi,
Here at the University of Karlsruhe we are running two large
clusters with SLURM and HP-MPI. For our new cluster we want to
keep SLURM and switch to Open MPI. While testing I ran into the
following problem:
with HP-MPI I do something like
srun -N 2 -n 2 -b mpirun -srun helloworld
and