Thank you, Nathan.
I'll wait until 1.7.4 release.
Regards,
Tetsuya Mishima
> Looks like it is fixed in the development trunk but not 1.7.3. We can fix
> this in 1.7.4.
>
> -Nathan Hjelm
> HPC-3, LANL
>
> On Thu, Oct 31, 2013 at 04:17:30PM +0900, tmish...@jcity.maeda.co.jp wrote:
> >
> > Hello
Stupid question:
Why not just make your first level internal API equivalent to the MPI
public API except for s/int/size_t/g and have the Fortran bindings
drop directly into that? Going through the C int-erface seems like a
recipe for endless pain...
Jeff
On Thu, Oct 31, 2013 at 4:05 PM, Jeff Squyres wrote:
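A minimal sketch of the layering Jeff is describing, in plain C (all names here are made up for illustration; this is not Open MPI's actual internal API): the lowest layer counts in size_t, the public C layer widens its int count on the way in, and a Fortran-facing wrapper, which under -i8 receives an 8-byte INTEGER, drops straight into the size_t layer without ever passing through a C int.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* internal layer: counts are size_t, never int */
static size_t internal_copy(char *dst, const char *src, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        dst[i] = src[i];
    return count;
}

/* public C API: int count, widened on the way in */
static int public_copy(char *dst, const char *src, int count)
{
    return (int)internal_copy(dst, src, (size_t)count);
}

/* Fortran-facing wrapper: an -i8 build passes an 8-byte integer by
 * reference; it forwards directly to the size_t layer */
static void fortran_copy_f(char *dst, const char *src, const int64_t *count)
{
    internal_copy(dst, src, (size_t)*count);
}

int main(void)
{
    char src[8] = "payload", dst[8] = {0};
    int64_t n = 8;
    public_copy(dst, src, 8);      /* C path */
    fortran_copy_f(dst, src, &n);  /* Fortran path */
    printf("%s\n", dst);
    return 0;
}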
For giggles, try using MPI_STATUS_IGNORE (assuming you don't need to look at
the status at all). See if that works for you.
Meaning: I wonder if we're computing the status size for Fortran incorrectly in
the -i8 case...
On Oct 31, 2013, at 1:58 PM, Jim Parker wrote:
> Some additional info that may jog some solutions.
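In C, Jeff's suggestion amounts to the following (the application in the thread is Fortran built with -i8, where the same MPI_STATUS_IGNORE constant exists; this is just a hedged illustration, not code from the thread):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* MPI_STATUS_IGNORE tells the library not to write a status at
         * all, so there is no status buffer left to overrun */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

In Fortran the change is the same one-liner: pass MPI_STATUS_IGNORE where the status(MPI_STATUS_SIZE) array would otherwise go.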
On Oct 30, 2013, at 11:55 PM, Jim Parker wrote:
> Perhaps I should start with the most pressing issue for me. I need 64-bit
> indexing
>
> @Martin,
> you indicated that even if I get this up and running, the MPI library
> still uses signed 32-bit ints to count (your term), or index (my term)
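One common workaround for the 32-bit count limit, not necessarily the one adopted in this thread, is to wrap the large buffer in a contiguous derived datatype so the count MPI actually sees stays small. A rough C sketch with a made-up helper name:

#include <mpi.h>

/* Send more than 2^31 elements by packaging them into fixed-size blocks:
 * the count handed to MPI is the number of blocks plus a remainder, both
 * of which fit in a signed 32-bit int. Sketch only. */
static void send_large(const double *buf, long long nelems,
                       int dest, int tag, MPI_Comm comm)
{
    const int block = 1 << 20;                 /* 1M doubles per block */
    long long nblocks = nelems / block;
    long long remainder = nelems % block;
    MPI_Datatype blocktype;

    MPI_Type_contiguous(block, MPI_DOUBLE, &blocktype);
    MPI_Type_commit(&blocktype);

    if (nblocks > 0)
        MPI_Send(buf, (int)nblocks, blocktype, dest, tag, comm);
    if (remainder > 0)
        MPI_Send(buf + nblocks * block, (int)remainder, MPI_DOUBLE,
                 dest, tag + 1, comm);

    MPI_Type_free(&blocktype);
}

The receiver has to post the matching pair of receives with the same block size; none of this changes the fact that the application's Fortran integer kind and the library's idea of it still have to agree.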
Some additional info that may jog some solutions. Calls to MPI_SEND do not
cause memory corruption. Only calls to MPI_RECV do. Since the main
difference is that MPI_RECV needs a "status" array and SEND does not, this
seems to indicate to me that something is wrong with status.
Also, I can run
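The asymmetry Jim points out, written down in C purely for illustration (his application is Fortran, where the status is an INTEGER array of MPI_STATUS_SIZE rather than a struct): only the receive writes into caller-provided status storage, so a status whose size the library miscalculates, which is what Jeff is wondering about for the -i8 case, would only ever get stomped on the receive side.

#include <mpi.h>

/* MPI_Send has no status argument and never writes caller memory beyond
 * the message buffer; MPI_Recv fills in the status the caller provides. */
void exchange(int rank)
{
    int value = 7;
    MPI_Status status;   /* storage owned by the receiver */

    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        /* the library has now written status.MPI_SOURCE, status.MPI_TAG,
         * status.MPI_ERROR and internal fields into that storage */
    }
}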
Hello all,
using 1.7.x (1.7.2 and 1.7.3 tested), we get a SIGSEGV from somewhere deep inside the
'hwloc' library - see the attached screenshot.
Because the error is confined to just one single node, which in turn is a
somewhat special one (see the output of 'lstopo -'), it smells like an error in
Yes, though the degree of impact obviously depends on the messaging pattern of
the app.
On Oct 31, 2013, at 2:50 AM, MM wrote:
> Of course, by this you mean, with the same total number of processes, e.g. 64
> processes on 1 node using shared mem, vs 64 processes spread over 2 nodes (32
> each
Looks like it is fixed in the development trunk but not 1.7.3. We can fix
this in 1.7.4.
-Nathan Hjelm
HPC-3, LANL
On Thu, Oct 31, 2013 at 04:17:30PM +0900, tmish...@jcity.maeda.co.jp wrote:
>
> Hello, I asked Ralph to re-enable cpus-per-proc in openmpi-1.7.x one year
> ago.
>
> According to Ticket #3350, it shows "(closed defect: fixed)".
Of course, by this you mean, with the same total number of processes, e.g.
64 processes on 1 node using shared mem, vs 64 processes spread over 2 nodes
(32 each, for example)?
On 29 October 2013 14:37, Ralph Castain wrote:
> As someone previously noted, apps will always run slower on multiple nodes
>
Hello, I asked Ralph to re-enable cpus-per-proc in openmpi-1.7.x one year
ago.
According to Ticket #3350, it shows "(closed defect: fixed)".
So I tried the latest openmpi-1.7.3, but I find that -cpus-per-proc is still
not accepted, as shown below.
mpirun -np 4 -x OMP_NUM_THREADS=2 -cpus-per-proc 2 -repo
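For context on why -cpus-per-proc 2 is paired with OMP_NUM_THREADS=2: each MPI rank of a hybrid code spawns that many OpenMP threads and therefore wants that many cores reserved for it. A minimal hybrid sketch (not from the thread, only an illustration of the launch line's intent):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Each MPI rank opens an OpenMP parallel region; with OMP_NUM_THREADS=2
 * every rank uses two threads, which is what -cpus-per-proc 2 is meant
 * to reserve for it. */
int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}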