Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-28 Thread tmishima
Thanks, Ralph. I'm happy to hear that. By the way, openmpi-1.7.4rc2 works fine for me. Tetsuya Mishima > Let me clarify: the functionality will remain as it is useful to many. What we need to do is somehow capture that command in the current map-by parameter so we avoid issues like the one you

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-28 Thread Ralph Castain
Let me clarify: the functionality will remain, as it is useful to many. What we need to do is somehow capture that command in the current map-by parameter so we avoid issues like the one you are experiencing. HTH, Ralph. On Jan 27, 2014, at 8:18 PM, tmish...@jcity.maeda.co.jp wrote: > Thank ...

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread tmishima
Thank you for your comment, Ralph. I understand your explanation, including the "it's too late" part. The ppr option is convenient for us because our environment is quite heterogeneous. (It gives us flexibility in the number of procs.) I hope you do not deprecate ppr in a future release and that you apply my proposal someday ...

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread Ralph Castain
I'm afraid it is too late for 1.7.4 as I have locked that down, barring any last-second smoke test failures. I'll give this some thought for 1.7.5, but I'm a little leery of the proposed change. The problem is that ppr comes in thru a different MCA param than the "map-by" param, and hence we can
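Ralph's point here is that the ppr pattern and the map-by policy arrive through two different MCA parameters. A minimal C sketch of that situation (illustration only, not ORTE code; the parameter names below are my assumption, so check ompi_info for the real ones):

    /* Sketch only -- not ORTE code; the parameter names are assumptions. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* In ORTE these would come from the MCA parameter framework;
         * getenv() merely stands in for it here. */
        const char *map_by = getenv("OMPI_MCA_rmaps_base_mapping_policy"); /* assumed name */
        const char *ppr    = getenv("OMPI_MCA_rmaps_ppr_pattern");         /* assumed name */

        if (map_by != NULL && ppr != NULL) {
            /* Two independent sources both set a mapping policy: this is
             * the kind of conflict a redefining-policy check rejects. */
            fprintf(stderr, "conflicting mapping directives\n");
            return 1;
        }
        printf("mapping policy: %s\n", ppr ? "ppr" : (map_by ? map_by : "default"));
        return 0;
    }

Because the mapper cannot tell from the map-by value alone that ppr was requested, the two requests can collide, which is what makes the proposed change tricky.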

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread tmishima
Hi Ralph, it seems you are rounding the final turn toward the 1.7.4 release! I hope this will be my final request for openmpi-1.7.4 as well. I mostly use rr_mapper but sometimes use ppr_mapper. I have a simple request to improve its usability: namely, I propose to remove the redefining-policy-check ...

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread Ralph Castain
Kewl - thanks! On Jan 27, 2014, at 4:08 PM, tmish...@jcity.maeda.co.jp wrote: > Thanks, Ralph. I quickly checked the fix. It worked fine for me. > Tetsuya Mishima >> I fixed that in today's final cleanup >> On Jan 27, 2014, at 3:17 PM, tmish...@jcity.maeda.co.jp wrote: ...

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread tmishima
Thanks, Ralph. I quickly checked the fix. It worked fine for me. Tetsuya Mishima > I fixed that in today's final cleanup. > On Jan 27, 2014, at 3:17 PM, tmish...@jcity.maeda.co.jp wrote: >> As for the NEWS - it is actually already correct. We default to map-by core, not slot, as of 1.7.4 ...

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread Ralph Castain
I fixed that in today's final cleanup. On Jan 27, 2014, at 3:17 PM, tmish...@jcity.maeda.co.jp wrote: >> As for the NEWS - it is actually already correct. We default to map-by core, not slot, as of 1.7.4. > Is it correct? As far as I can tell from the source code, map-by slot is used if np <= 2 ...

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread tmishima
> As for the NEWS - it is actually already correct. We default to map-by core, not slot, as of 1.7.4. Is it correct? As far as I can tell from the source code, map-by slot is used if np <= 2. [mishima@manage openmpi-1.7.4rc2r30425]$ cat -n orte/mca/rmaps/base/rmaps_base_map_job.c ... 107 ...
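For readers skimming the archive, a minimal C sketch of the default-mapping behaviour discussed in this exchange (illustration only, not the code in rmaps_base_map_job.c; all names are made up). It reflects what the thread says: without an explicit policy, jobs with np <= 2 map by slot while larger jobs map by core, and Ralph's planned change (in the message just below) would also fall back to byslot when cpus-per-proc is given:

    /* Sketch only -- not the actual Open MPI source. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { MAP_BY_SLOT, MAP_BY_CORE } map_policy_t;

    static map_policy_t default_mapping(int np, int cpus_per_proc,
                                        bool policy_given, map_policy_t requested)
    {
        if (policy_given)       return requested;     /* explicit --map-by wins         */
        if (np <= 2)            return MAP_BY_SLOT;   /* small jobs fall back to byslot */
        if (cpus_per_proc > 1)  return MAP_BY_SLOT;   /* Ralph's planned fallback       */
        return MAP_BY_CORE;                           /* otherwise the 1.7.4 default    */
    }

    int main(void)
    {
        printf("np=2         -> %s\n",
               default_mapping(2, 1, false, MAP_BY_CORE) == MAP_BY_SLOT ? "byslot" : "bycore");
        printf("np=8         -> %s\n",
               default_mapping(8, 1, false, MAP_BY_CORE) == MAP_BY_SLOT ? "byslot" : "bycore");
        printf("np=8, cpus=4 -> %s\n",
               default_mapping(8, 4, false, MAP_BY_CORE) == MAP_BY_SLOT ? "byslot" : "bycore");
        return 0;
    }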

Re: [OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-27 Thread Ralph Castain
I've fixed the reporting flag - thanks! As for the NEWS - it is actually already correct. We default to map-by core, not slot, as of 1.7.4. However, if cpus-per-proc is given, we should probably fall back to map-by slot, so I'll make that change. On Jan 26, 2014, at 3:02 PM, tmish...@jcity.maeda.co.jp wrote: ...

[OMPI users] openmpi-1.7.4rc2r30425 produces unexpected output

2014-01-26 Thread tmishima
Hi Ralph, I tried the latest nightly snapshot, openmpi-1.7.4rc2r30425.tar.gz. Almost everything works fine, except that unexpected output appears, as shown below: [mishima@node04 ~]$ mpirun -cpus-per-proc 4 ~/mis/openmpi/demos/myprog App launch reported: 3 (out of 3) daemons - 8 (out of 12) procs ..