Hi Ralph,
I tried the following...
1) C:\test> mpirun -mca orte_headnode_name <headnode>
   where <headnode> is the name returned by the 'hostname' command.
2) C:\test> mpirun -mca ras ^ccp
but I'm still observing the same errors...
BTW: for further information on ompi_info you can see the thread
http://www.open-mpi.org/community/lists/user
Hello,
I am working on a hybrid MPI (OpenMPI 1.4.3) and Pthread code. I am
using MPI_Isend and MPI_Irecv for communication and MPI_Test/MPI_Wait to
check if it is done. I do this repeatedly in the outer loop of my code.
The MPI_Test is used in the inner loop to check if some function can be
c
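For reference, a minimal C sketch of the pattern described in that message
(MPI_Isend/MPI_Irecv posted in an outer loop, MPI_Test polled in an inner
loop). The buffer size, tag, iteration count and neighbour rank are
placeholders, not taken from the original code:

  #include <mpi.h>

  #define N 1024                               /* placeholder message length */

  int main(int argc, char **argv)
  {
      double sendbuf[N], recvbuf[N];
      int rank, size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      for (int i = 0; i < N; i++) sendbuf[i] = rank;

      int peer = (rank + 1) % size;            /* placeholder neighbour */

      for (int iter = 0; iter < 100; iter++) { /* outer loop */
          MPI_Request reqs[2];
          int done = 0;

          MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
          MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

          while (!done) {                      /* inner loop */
              MPI_Testall(2, reqs, &done, MPI_STATUSES_IGNORE);
              /* while not done, useful local work can be overlapped here */
          }
          /* alternatively, block: MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE); */
      }

      MPI_Finalize();
      return 0;
  }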
Hi,
I'm having problems getting the MPIRandomAccess part of the HPCC
benchmark to run with more than 32 processes on each node (each node has
4 x AMD 6172 so 48 cores total). Once I go past 32 processes I get an
error like:
[compute-1-13.local][[5637,1],18][../../../../../ompi/mca/btl/openib/conn
Hi Jason,
I'm afraid I won't be of much help, but have you run your tests with UAC
completely disabled?
From my experience, access to network shares and network drives is very
problematic with UAC enabled, and simply disabling it has solved a few problems
in the past.
Running Op
Hi,
Try the following QP parameters that only use shared receive queues.
-mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32
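For example, on the command line that would look something like this (the
process count and executable name here are just placeholders, not taken from
the actual benchmark run):

  mpirun -np 96 -mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32 ./hpcc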
Samuel K. Gutierrez
Los Alamos National Laboratory
On May 19, 2011, at 5:28 AM, Robert Horton wrote:
> Hi,
>
> I'm having problems getting the MPIRandomA
Dear all,
I tried to configure Open MPI on a Windows XP SP2 64-bit system, but I got an
'entry point not found' error when I ran the executable file. I really hope you
can give me some help. I list what I did to run my program in the following
steps:
1. I downloaded OpenMPI_v1.5.3-2_win64.
Hi,
On 19 May 2011 15:54, Zhangping Wei wrote:
> 4, I use command window to run it in this way: ‘mpirun –n 4 **.exe ‘,then I
Probably not the problem, but shouldn't that be 'mpirun -np N'?
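e.g. (the executable name and process count are just placeholders):

  mpirun -np 4 myprog.exe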
Paul
--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
On Thu, 2011-05-19 at 08:27 -0600, Samuel K. Gutierrez wrote:
> Hi,
>
> Try the following QP parameters that only use shared receive queues.
>
> -mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32
>
Thanks for that. If I run the job over 2 x 48 cores it now works and the
performa
Hi,
On May 19, 2011, at 9:37 AM, Robert Horton wrote
> On Thu, 2011-05-19 at 08:27 -0600, Samuel K. Gutierrez wrote:
>> Hi,
>>
>> Try the following QP parameters that only use shared receive queues.
>>
>> -mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32
>>
>
> Thanks for tha
Dear Paul,
I checked the 'mpirun -np N' form you mentioned, but I get the same problem.
I guess it may be related to the system I am using, because I have run it
correctly on another XP 32-bit system.
I look forward to more advice. Thanks.
Zhangping
发件人: "user
David,
I do not see any mechanism restricting access to the requests to a single
thread. What thread model are you using?
From an implementation perspective, your code is correct only if you
initialize the MPI library with MPI_THREAD_MULTIPLE and if the library
accepts it. Other
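As an illustration of that initialization requirement (a minimal sketch, not
taken from David's code), requesting MPI_THREAD_MULTIPLE and checking what the
library actually grants looks like this:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided;

      /* ask for full multi-threaded support ... */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

      /* ... and verify the library actually grants it */
      if (provided < MPI_THREAD_MULTIPLE) {
          fprintf(stderr, "MPI_THREAD_MULTIPLE not provided (got %d)\n", provided);
          MPI_Abort(MPI_COMM_WORLD, 1);
      }

      /* spawn pthreads here; if several threads touch the same MPI_Request,
         serialize the MPI_Test/MPI_Wait calls on it (e.g. with a mutex) */

      MPI_Finalize();
      return 0;
  }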
Unfortunately, our Windows guy (Shiqing) is off getting married and will be out
for a little while. :-(
All that I can cite is the README.WINDOWS.txt file in the top-level directory.
I'm afraid that I don't know much else about Windows. :-(
On May 18, 2011, at 8:17 PM, Jason Mackay wrote:
On May 19, 2011, at 10:54 AM, Zhangping Wei wrote:
> 4, I use command window to run it in this way: ‘mpirun –n 4 **.exe ‘,then I
> met the error: ‘entry point not found: the procedure entry point inet_pton
> could not be located in the dynamic link library WS2_32.dll’
Unfortunately our Windows
Sorry for the late reply.
Other users have seen something similar but we have never been able to
reproduce it. Is this only when using IB? If you use "mpirun --mca
btl_openib_cpc_if_include rdmacm", does the problem go away?
On May 11, 2011, at 6:00 PM, Marcus R. Epperson wrote:
> I've seen
What Sam is alluding to is that the OpenFabrics driver code in OMPI is sucking
up oodles of memory for each IB connection that you're using. The
receive_queues param that he sent tells OMPI to use all shared receive queues
(instead of defaulting to one per-peer receive queue and the rest shared
On May 13, 2011, at 8:31 AM, francoise.r...@obs.ujf-grenoble.fr wrote:
> Here is the MUMPS portion of code (in the zmumps_part1.F file) where the slaves
> call MPI_COMM_DUP; id%PAR and MASTER are initialized to 0 beforehand:
>
> CALL MPI_COMM_SIZE(id%COMM, id%NPROCS, IERR )
I re-indented so that I co
Props for that testio script. I think you win the award for "most easy to
reproduce test case." :-)
I notice that some of the lines went over 72 columns, so I renamed the file
x.f90 and changed all the comments from "c" to "!" and joined the two &-split
lines. The error about implicit type f
Thanks for looking at my problem. It sounds like you did reproduce it. I have
added some comments below.
On Thu, 2011-05-19 at 22:30 -0400, Jeff Squyres wrote:
> Props for that testio script. I think you win the award for "most easy to
> reproduce test case." :-)
>
> I notice that some o