You should only run varlist with one rank. Also, there is some commented-out
code in varlist that re-enables a check that doesn't work with
MVAPICH (they didn't include one of the error codes that was defined in
MPI-3.0).
-Nathan
On Wed, Jul 16, 2014 at 04:22:55PM +, Gallardo, Esthela wrote:
Hey,
Sorry I wasn't able to follow up. I did try varlist and the code I have with
trunk and 1.8.2, but I am still encountering errors. I'm debating if it's due
to how I am running the application. I use the following command:
mpirun -np 16 -hostfile hosts --mca btl openib,self ./varlist
Is t
Guys,
Don't do it. It doesn't work at all. I couldn't pick up maintenance of
it either, and the majority of the Windows support has been removed, as Ralph
said. Just use MPICH for Windows work and save yourself the pain.
Cheers,
Damien
On 2014-07-16 9:57 AM, Nathan Hjelm wrote:
It likely won't build because, last I checked, the Microsoft toolchain does
not meet the minimum requirements (C99 or higher). You will have better
luck with either gcc or Intel's compiler.
-Nathan
On Wed, Jul 16, 2014 at 04:52:53PM +0100, MM wrote:
> hello,
> I'm about to try to build 1.8.1 with win
Given that a number of Windows components and #if protections were removed in
the 1.7/1.8 series, I very much doubt this will build or work. Are you
intending to try and recreate that code?
Otherwise, there is a port to Cygwin available from that community.
On Jul 16, 2014, at 8:52 AM, MM wrote:
hello,
I'm about to try to build 1.8.1 with the Windows MSVC 2013 toolkit in 64-bit mode.
I know the Windows binaries were dropped after the failure to find someone to
pick them up (following Shiqin's departure), and I'm afraid I won't be
volunteering due to lack of time, but is there any general advice before
I start?
The problem is that for the moment, the implementation uses
Isend/Irecv, but you don't know what will happen in the future
(hopefully, it will use something else).
If your program bypasses the required call to MPI_Iscatterv, then you
only have one option: implement MPI_Iscatterv yourself, with only
Thanks a lot.
You are right, I am using MPI_Iscatterv in a domain decomposition code, but
the problem is that for the domains for which I have no data to send, the
program skips the routine. I cannot redesign the whole program.
Do you know what will happen to a send call with a zero-size buffer? Ca
If you are using Iscatterv (I guess it is that one), it handles the
send/recv pairs itself. You shouldn't bypass it because you think it is better.
You don't know how it is implemented, so just call Iscatterv on all
ranks.
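For illustration, a minimal sketch of that pattern: every rank makes the same
MPI_Iscatterv call, and ranks with nothing to send or receive simply pass a
count of zero. The decomposition below (even ranks get four elements, odd
ranks get none) is invented for the example and is not from the original code.

/* Sketch: every rank participates in the collective, even with a zero count. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int root = 0;
    int *sendcounts = NULL, *displs = NULL;
    double *sendbuf = NULL;

    if (rank == root) {
        sendcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        int offset = 0;
        for (int i = 0; i < size; ++i) {
            /* Invented decomposition: odd ranks get nothing -- a zero
               count is perfectly legal. */
            sendcounts[i] = (i % 2 == 0) ? 4 : 0;
            displs[i] = offset;
            offset += sendcounts[i];
        }
        sendbuf = calloc(offset > 0 ? offset : 1, sizeof(double));
    }

    int recvcount = (rank % 2 == 0) ? 4 : 0;  /* must agree with the root's counts */
    double recvbuf[4];

    MPI_Request req;
    MPI_Iscatterv(sendbuf, sendcounts, displs, MPI_DOUBLE,
                  recvbuf, recvcount, MPI_DOUBLE, root, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == root) { free(sendcounts); free(displs); free(sendbuf); }
    MPI_Finalize();
    return 0;
}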
2014-07-16 14:33 GMT+01:00 Ziv Aginsky:
I know the standard, but what if I cannot bypass the send? For
example, if I have MPI_Iscatter and for some ranks the send buffer has zero
size, those ranks will skip the MPI_Iscatter routine, which means I
have some zero-size sends and no receives.
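As noted below, the standard requires the send/recv pair even when the count
is zero: a zero-count send still produces a message with an envelope, and a
zero-count receive on the other side is enough to match it. A minimal sketch
of that pairing; the ranks, tag, and dummy buffer are invented for
illustration.

/* Sketch: a zero-count Isend still needs a matching receive.
   Run with at least two ranks. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 0; }

    const int tag = 42;          /* arbitrary tag, for illustration only */
    MPI_Request req;
    double dummy;                /* never touched: the count is zero */

    if (rank == 0) {
        /* Zero-size send: no payload, but the message envelope still exists. */
        MPI_Isend(&dummy, 0, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        /* The matching receive can also use count 0; skipping it would
           leave the (empty) message unmatched. */
        MPI_Irecv(&dummy, 0, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}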
On Wed, Jul 16, 2014 at 3:28 PM,
Hi,
The easiest would be to bypass the Isend as well! The standard is
clear: you need a pair of Isend/Irecv.
Cheers,
2014-07-16 14:27 GMT+01:00 Ziv Aginsky:
I have a loop in which I will do some MPI_Isend calls. According to the MPI
standard, for every send you need a recv.
If one or several of my MPI_Isend calls have a zero-size buffer, should I still
have the MPI_Recv, or can I do without the recv? I mean, on the processor
which should recv the data I know prio
They are just as you say, but while one runs until the end (case 3,
using --debug-daemons), the other hangs (case 1).
In case 1, even if it is on only one node with the plm_rsh_no_tree_spawn 1
flag (which, as you say, shouldn't do anything), the process hangs,
while in case 2, without this flag, in the
Here it is:
$
LD_PRELOAD=/mnt/data/users/dm2/vol3/semenov/_scratch/mxm/mxm-3.0/lib/libmxm.so
mpirun -x LD_PRELOAD --mca plm_base_verbose 10 --debug-daemons -np 1 hello_c
[access1:29064] mca: base: components_register: registering plm components
[access1:29064] mca: base: components_register:
Please add the following flags to mpirun: "--mca plm_base_verbose 10
--debug-daemons" and attach the output.
Thx
On Wed, Jul 16, 2014 at 11:12 AM, Timur Ismagilov wrote:
Hello!
I have Open MPI v1.9a1r32142 and slurm 2.5.6.
I cannot use mpirun after salloc:
$salloc -N2 --exclusive -p test -J ompi
$LD_PRELOAD=/mnt/data/users/dm2/vol3/semenov/_scratch/mxm/mxm-3.0/lib/libmxm.so
mpirun -np 1 hello_c