Hello,
thanks for the quick answer. I am sorry that I forgot to mention this: I
did compile Open MPI with MPI_THREAD_MULTIPLE support and tested that
required == provided after the MPI_Init_thread call.
I do not see any mechanism for restricting access to the requests to a
single thread? Wh
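For reference, the check I am doing looks roughly like this (a minimal
sketch, in Fortran only for illustration -- the program name and the
"use mpi" binding are assumptions, not my actual code):

  program thread_check
    use mpi
    implicit none
    integer :: required, provided, ierr
    required = MPI_THREAD_MULTIPLE
    call MPI_Init_thread(required, provided, ierr)
    ! if provided < required, the library was not built with (or cannot
    ! give us) MPI_THREAD_MULTIPLE
    if (provided < required) then
       print *, 'MPI_THREAD_MULTIPLE not provided; got level ', provided
    end if
    call MPI_Finalize(ierr)
  end program thread_check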
On May 19, 2011, at 11:24 PM, Tom Rosmond wrote:
> What fortran compiler did you use?
gfortran.
> In the original script my Intel compile used the -132 option,
> allowing up to that many columns per line.
Gotcha.
>> x.f90:99.77:
>>
>>call mpi_type_indexed(lenij,ijlena,ijdisp,mpi_real,i
I missed this email in my INBOX, sorry.
Can you be more specific about what exact error is occurring? You just say
that the application crashes...? Please send all the information listed here:
http://www.open-mpi.org/community/help/
On Apr 26, 2011, at 10:51 PM, 孟宪军 wrote:
> It seems th
Sorry for the super-late reply. :-\
Yes, MPI_ERR_TRUNCATE means that the receiver didn't post a large enough buffer
for the incoming message.
Have you tried upgrading to a newer version of Open MPI? 1.4.3 is the current
stable release (I have a very dim and not guaranteed to be correct recollection
that we fixed somethin
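To illustrate what triggers that error, here is a minimal made-up sketch
(not your code): rank 1 posts a 4-element receive for an 8-element
message, which is exactly the MPI_ERR_TRUNCATE situation:

  program truncate_demo
    use mpi
    implicit none
    integer :: rank, ierr, status(MPI_STATUS_SIZE)
    real :: sendbuf(8), recvbuf(4)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    if (rank == 0) then
       sendbuf = 1.0
       call MPI_Send(sendbuf, 8, MPI_REAL, 1, 0, MPI_COMM_WORLD, ierr)
    else if (rank == 1) then
       ! the receive count (4) is smaller than the incoming message (8),
       ! so this receive fails with MPI_ERR_TRUNCATE; the fix is to post
       ! a buffer at least as large as what the sender sends
       call MPI_Recv(recvbuf, 4, MPI_REAL, 0, 0, MPI_COMM_WORLD, status, ierr)
    end if
    call MPI_Finalize(ierr)
  end program truncate_demo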
On May 20, 2011, at 6:23 AM, Jeff Squyres wrote:
> Shouldn't ijlena and ijdisp be 1D arrays, not 2D arrays?
Ok, if I convert ijlena and ijdisp to 1D arrays, I don't get the compile error
(even though they're allocatable -- so allocate was a red herring, sorry).
That's all that "use mpi" is com
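For the archives, a minimal sketch of the 1D shape that the "use mpi"
interface checking expects (the names follow the thread, but the sizes
and values here are made up):

  program indexed_1d
    use mpi
    implicit none
    integer :: lenij, newtype, ierr, i
    integer, allocatable :: ijlena(:), ijdisp(:)

    call MPI_Init(ierr)
    lenij = 4
    allocate(ijlena(lenij), ijdisp(lenij))
    do i = 1, lenij
       ijlena(i) = 1             ! block lengths
       ijdisp(i) = (i - 1) * 2   ! displacements, in multiples of MPI_REAL
    end do
    call MPI_Type_indexed(lenij, ijlena, ijdisp, MPI_REAL, newtype, ierr)
    call MPI_Type_commit(newtype, ierr)
    call MPI_Type_free(newtype, ierr)
    call MPI_Finalize(ierr)
  end program indexed_1d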
Hello,
Here on my Windows machine, if I run mpicc -showme, I get erroneous output like
the one below:
**
C:\>C:\Users\BAAMARNA5617\Programs\mpi\OpenMPI_v1.5.3-win32\bin\mpicc.exe --showme
Cannot open configuration file C:/Users/hpcfan/Documents/OpenMPI/openmpi-1.5.3/installed-32/share
Thanks Ralph. I've seen the messages generated in b...@open-mpi.org, so
I figured something was up! I was going to provide the unified diff,
but then hit another issue in testing where we immediately ran into
a seg fault, even with this fix. It turns out that a prepending of
/lib64 (
We are still struggling with these problems. The new version of the
Intel compilers does not actually seem to be the real issue: we run
into the same errors with the gcc compilers as well.
We did succeed in building an openmpi-1.2.8 rpm (with different
compiler flavours) from the installation
of th
Hi,
Thanks for getting back to me (and thanks to Jeff for the explanation
too).
On Thu, 2011-05-19 at 09:59 -0600, Samuel K. Gutierrez wrote:
> Hi,
>
> On May 19, 2011, at 9:37 AM, Robert Horton wrote
>
> > On Thu, 2011-05-19 at 08:27 -0600, Samuel K. Gutierrez wrote:
> >> Hi,
> >>
> >> Try th
If you're using QLogic, you might want to try the native PSM Open MPI support
rather than the verbs support. QLogic cards only "sorta" support verbs in
order to say that they're OFED-compliant; their native PSM interface is more
performant than verbs for MPI.
Assuming you built OMPI with PSM s
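As an illustration only (the process count and binary name are
placeholders, and this assumes your build actually includes the PSM
MTL), selecting PSM explicitly would look something like:

  mpirun --mca pml cm --mca mtl psm -np 16 ./your_app

If I remember right, Open MPI will usually pick PSM on its own when it
is available; the explicit flags are mainly useful for verifying which
transport you actually get.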
Hi Salvatore
Just in case ...
You say you have problems when you use "--mca btl openib,self".
Is this a typo in your email?
I guess this will disable the shared memory btl intra-node,
whereas your other choice "--mca btl_tcp_if_include ib0" will not.
Could this be the problem?
Here we use "--mca
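Just to illustrate what I mean by keeping the shared memory btl in the
list (the process count and binary name below are placeholders, not
necessarily what you need):

  mpirun --mca btl openib,sm,self -np 16 ./your_app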
I have verified that disabling UAC does not fix the problem. xhlp.exe starts,
threads spin up on both machines, CPU usage is at 80-90% but no progress is
ever made.
From this state, Ctrl-break on the head node yields the following output:
[REMOTEMACHINE:02032] [[20816,1],0]-[[20816,0],0] mca
MPI can get through your firewall, right?
Damien
On 20/05/2011 12:53 PM, Jason Mackay wrote:
I have verified that disabling UAC does not fix the problem. xhlp.exe
starts, threads spin up on both machines, CPU usage is at 80-90% but
no progress is ever made.
From this state, Ctrl-break on th
"MPI can get through your firewall, right?"
As far as I can tell the firewall is not the problem - I have tried it with
firewalls disabled, with automatic firewall policies based on port requests
from MPI, and with manual exception policies.
> From: users-requ...@open-mpi.org
> Subject: users Digest, Vol