"Jeff Squyres (jsquyres)" writes:
>> Totally superficial, just passing "status(1)" instead of "status" or
>> "status(1:MPI_STATUS_SIZE)".
>
> That's a different type (INTEGER scalar vs. INTEGER array). So the
> compiler complaining about that is actually correct.
Yes, exactly.
> Under the cover
On Jan 8, 2014, at 8:17 PM, Jed Brown wrote:
>>> I don't call MPI from Fortran, but someone on a Fortran project that I
>>> watch mentioned that the compiler would complain about such and such a
>>> use (actually relating to types for MPI_Status in MPI_Recv rather than
>>> buffer types).
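The type mismatch discussed above can be made concrete. In Fortran, `MPI_Recv` expects the status argument to be an `INTEGER` array of length `MPI_STATUS_SIZE`; passing `status(1)` hands the compiler a single array element, i.e. a scalar, which an explicit interface rightly rejects. A minimal illustrative fragment (not compilable without an MPI installation):

```fortran
integer :: status(MPI_STATUS_SIZE)   ! correct: INTEGER array
integer :: buf(100), ierr

! OK: the whole array is passed
call MPI_Recv(buf, 100, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status, ierr)

! Rejected by an explicit interface: status(1) is a scalar INTEGER,
! not an INTEGER array (TKR mismatch)
! call MPI_Recv(buf, 100, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status(1), ierr)
```

With the old implicit interfaces this happened to work by sequence association, which is why the complaint only appears once explicit interfaces are in play.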
(chan
"Jeff Squyres (jsquyres)" writes:
> As I mentioned, Craig and I debated long and hard to change that
> default, but, in summary, we apparently missed this clause on p610.
> I'll change it back.
Okay, thanks.
> I'll be happy when gfortran 4.9, which supports ignore TKR, is
> released, and you'll get
On Jan 7, 2014, at 11:23 PM, Jed Brown wrote:
> On page 610, I see text disallowing the explicit interfaces in
> ompi/mpi/fortran/use-mpi-tkr:
>
> In S2 and S3: [snip]
>
> Why did OMPI decide that this (presumably non-normative) text in the
> standard was not worth following? (Rejecting somet
It sounds like you are having filesystem permission issues -- i.e., your app is
trying to write to a file that is not writable (so this doesn't sound like an
MPI issue).
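One quick way to confirm a permission problem like this, independent of MPI, is to check whether the process can actually create a file where the app wants to write. A minimal sketch in Python (the directory below is a stand-in; point it at your app's output directory):

```python
import os
import tempfile

def can_write_to(directory):
    """Return True if the current user can create a file in `directory`."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
        os.close(fd)
        os.remove(path)
        return True
    except OSError:
        return False

# Replace tempfile.gettempdir() with the directory your app writes into.
print(can_write_to(tempfile.gettempdir()))
```

If this prints `False` for the app's output directory, fix the directory permissions (or run from a writable location) before suspecting MPI.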
On Jan 8, 2014, at 11:31 AM, Axel Viðarsson wrote:
> Thanks Ralph, now I can at least run the examples.
>
> My app is call
Thanks Ralph, now I can at least run the examples.
My app is called FDS, or Fire Dynamics Simulator. If someone is familiar
with it, or just with the errors I am getting, I would appreciate any
help.
Thanks
Axel
2014/1/8 Ralph Castain
> I can't speak to your app as I don't know what it does. Ho
I can't speak to your app as I don't know what it does. However, you *do* have
to compile the example first! :-)
A simple "make" in the examples directory will create all the binaries
On Jan 8, 2014, at 7:29 AM, Axel Viðarsson wrote:
> Hey all
>
> My cluster consist of 2 workstations with hy
Hey all
My cluster consists of 2 workstations with hyper-threaded Intel Xeon
processors and an old Dell dual-core computer to control them.
I am failing to run mpirun on the cluster.
1. When executing as user:
[prufa@master]$ mpirun -np 16 --hostfile /home/prufa/prufa.mpi_hostfile
fds_mpi SST1SV20.fds
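For reference, an Open MPI hostfile lists one node per line, optionally with a slot count telling mpirun how many processes that node should host. A sketch of what `/home/prufa/prufa.mpi_hostfile` might contain -- the hostnames and slot counts here are placeholders, not taken from the poster's setup:

```
# one node per line; slots = number of processes to place on that node
master   slots=2
worker1  slots=8
worker2  slots=8
```

The slot counts should add up to at least the `-np` value (16 in the command above), or mpirun will oversubscribe or refuse, depending on version and settings.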
You are quite correct - r29719 did indeed reverse the logic of that param.
Thanks for tracking it down!
I pushed a fix to the trunk and scheduled it for 1.7.4
On Jan 7, 2014, at 9:45 PM, tmish...@jcity.maeda.co.jp wrote:
>
>
> Hi,
>
> I found that btl_tcp_use_nagle was negated in openmpi-1.
Hi Jeff,
thanks a lot. I will check that!
Best wishes,
Johanna
Am 07.01.2014 00:16, schrieb Jeff Squyres (jsquyres):
Sorry -- I was offline for the MPI_Festivus(3) break, and just returned to the
office today.
If you don't have the mpif90 or mpif77 executables in the same directory as the
Hi,
I found that btl_tcp_use_nagle was negated in openmpi-1.7.4rc1, which
causes a severe slowdown of the TCP network for smaller message sizes
(< 1024) in our environment, as shown at the bottom.
This happened in SVN r28719, where the new MCA variable system was added.
The flag of tcp_not_use_nodelay was newly intr
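For context on what that parameter toggles: Nagle's algorithm coalesces small TCP writes, and disabling it (the `TCP_NODELAY` socket option) is what a latency-sensitive transport like Open MPI's TCP BTL normally wants -- inverting the flag re-enables Nagle and delays small messages. A sketch of the underlying OS-level knob in Python (this is the socket option itself, not Open MPI's code):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Nagle's algorithm is typically on by default (TCP_NODELAY == 0).
default = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

# Setting TCP_NODELAY to 1 disables Nagle, so small messages are
# sent immediately instead of being batched with later writes.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()

print(default, nodelay)
```

The slowdown reported above for messages under 1024 bytes is the classic symptom of Nagle being left on for a request/response traffic pattern.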