Hi Bowen,
Thanks very much. I had already checked my writev system call; I thought it was
the one causing all these problems :)
Best Regards
Xianjun Meng
2010/12/8 Bowen Zhou
> On 12/05/2010 10:13 PM,
>
>> hi,
>>
>> I ran into a problem recently when I tested the MPI_Send and MPI_Recv
>> functions. Wh
FWIW: I just tested the -x option on a multi-node system and had no problem
getting the value of DISPLAY to propagate. I was able to define it on the cmd
line, saw it set correctly on every process, etc.
This was with our devel trunk - not sure what version you are using.
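For reference, the sort of command line this refers to (the host names and the
DISPLAY value are just placeholders):

mpirun -np 2 -host node1,node2 -x DISPLAY=mydesktop:0 env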
On Dec 7, 2010, at 12
On 12/05/2010 10:13 PM,
hi,
I ran into a problem recently when I tested the MPI_Send and MPI_Recv
functions. When I ran the following code, the processes hung and I
found there was no data being transmitted on my network at all.
BTW: I ran this test on two x86-64 computers with 16GB memory an
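A minimal sketch of the kind of test in question (the buffer size and datatype
here are assumptions, not the original code):

/* Sketch only: rank 0 sends one large buffer to rank 1 with plain
 * MPI_Send/MPI_Recv. COUNT is hypothetical, chosen so the message is
 * a bit over 2 GB. Run with at least 2 processes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT (600 * 1024 * 1024)   /* ~600M ints, roughly 2.4 GB */

int main(int argc, char **argv)
{
    int rank;
    int *buf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    buf = malloc((size_t)COUNT * sizeof(int));
    if (buf == NULL) {
        fprintf(stderr, "rank %d: malloc failed\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    if (rank == 0) {
        MPI_Send(buf, COUNT, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, COUNT, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}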
Thanks for your responses! I'm at home today so I can't actually do any
tests to 'see' if anything works. But I logged in remotely and I did as
Ralph suggested and ran env as my app. No process returned a value for
DISPLAY. Then I made a small program that calls getenv("DISPLAY") to run
with mpi
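Roughly something like this (a sketch; the actual program may differ):

/* Sketch: each rank reports what getenv("DISPLAY") returns. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    const char *display;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    display = getenv("DISPLAY");
    printf("rank %d: DISPLAY=%s\n", rank, display ? display : "(unset)");

    MPI_Finalize();
    return 0;
}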
Are you using ssh to launch OMPI between your nodes? (i.e., is mpirun using
ssh under the covers to launch on remote nodes)
If so, you might want to just set OMPI to use "ssh -X", which sets up SSH-tunneled
X forwarding and therefore sets DISPLAY properly for you on all
the remote nodes au
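Something like the following should do it (the MCA parameter name depends on your
Open MPI version; in the 1.3/1.4 series it is plm_rsh_agent, older releases used
pls_rsh_agent):

mpirun --mca plm_rsh_agent "ssh -X" -np 2 ./a.out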
On Dec 7, 2010, at 8:33 AM, Gus Correa wrote:
> Did I understand you right?
>
> Are you saying that one can effectively double the counting
> capability (i.e. the "count" parameters in MPI calls) of OpenMPI
> by compiling it with 8-byte integer flags?
Yes and no.
If you increase the size of INT
Hi Jeff
Did I understand you right?
Are you saying that one can effectively double the counting
capability (i.e. the "count" parameters in MPI calls) of OpenMPI
by compiling it with 8-byte integer flags?
And as long as one consistently uses the same flags to compile
the application, everything wou
I recompiled MPI with -g, but it didn't solve the problem. Two things that
have changed are: buf in PMPI_Recv is no longer of value 0, and the backtrace in
gdb shows more functions (e.g., mca_pml_ob1_recv_frag_callback_put as frame #1).
As you recommended, I will try to walk up the stack, but it's not so easy
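For reference, the kind of stack walk being suggested looks roughly like this in
gdb: 'bt' for the full backtrace, 'frame N' to select a frame, 'info args' and
'info locals' to inspect it, and 'up'/'down' to move between caller and callee
(the frame number below is just a placeholder):

(gdb) bt
(gdb) frame 1
(gdb) info args
(gdb) info locals
(gdb) up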
It is always a good idea to have your application's sizeof(INTEGER) match the
MPI's sizeof(INTEGER). Having them mismatch is a recipe for trouble.
Meaning: if you're compiling your app with -make-integer-be-8-bytes, then you
should configure/build Open MPI with that same flag.
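For example (the exact flag depends on your compiler; this assumes gfortran, while
Intel ifort uses -i8), something along the lines of:

./configure FFLAGS=-fdefault-integer-8 FCFLAGS=-fdefault-integer-8 ...

and then build the application with the same -fdefault-integer-8 flag.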
I'm thinking tha
You might want to ask the boost people - we wouldn't have any idea what asio is
or does.
On Dec 7, 2010, at 6:06 AM, Hannes Brandstätter-Müller wrote:
> Hello!
>
> I am using OpenMPI in combination with the boost libraries (especially
> boost::asio) and came across a weird interaction. When I
I am not sure this has anything to do with your problem, but if you look
at the stack entry for PMPI_Recv, I noticed that buf has a value of 0.
Shouldn't that be an address?
Does your code fail if the MPI library is built with -g? If it does
fail the same way, the next step I would take would be
Some update on this issue. I've attached gdb to the crashing
application and I got:
-
Program received signal SIGSEGV, Segmentation fault.
mca_pml_ob1_send_request_put (sendreq=0x130c480, btl=0xc49850,
hdr=0xd10e60) at pml_ob1_sendreq.c:1231
1231    pml_ob1_sendreq.c: No such file or directory
Hello!
I am using OpenMPI in combination with the boost libraries (especially
boost::asio) and came across a weird interaction. When I use asio to send a
message via TCP, some messages do not arrive at the server.
The effect is exhibited when I send a message from the tcp client to the
server aft
BTW: you might check to see if the DISPLAY envar is being correctly set on all
procs. Two ways to do it:
1. launch "env" as your app to print out the envars - can be messy on the
output end, though you could use the mpirun options to tag and/or split the
output from the procs
2. in your app, j
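For option 1, something like this (option names as in recent Open MPI releases;
--tag-output prefixes each line with the rank, and --output-filename writes
per-process files instead):

mpirun -np 4 --tag-output env | grep DISPLAY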
Hi Gus Correa
First of all, thanks for your suggestions.
1) The malloc function does return a non-NULL pointer.
2) I didn't try the MPI_Isend function; actually, the function I really
need to use is MPI_Allgatherv(). When I used it, I found this function
didn't work when the data was >= 2GB, the
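In case it is useful, one common workaround at the 2GB scale (sketched here as a
general idea, not the actual Allgatherv call; the block size is hypothetical) is
to group elements into a larger derived datatype so the count argument stays well
below INT_MAX:

/* Sketch: describe the buffer as blocks of BLOCK doubles, so the count
 * passed to MPI is the number of blocks rather than the number of
 * elements. */
#include <mpi.h>

#define BLOCK (1024 * 1024)   /* 1M doubles per block (8 MB), hypothetical */

void send_large(double *buf, int nblocks, int dest, MPI_Comm comm)
{
    MPI_Datatype blocktype;

    MPI_Type_contiguous(BLOCK, MPI_DOUBLE, &blocktype);
    MPI_Type_commit(&blocktype);

    /* Total message size is nblocks * 8 MB, which can exceed 2 GB even
     * though the count itself stays small. */
    MPI_Send(buf, nblocks, blocktype, dest, 0, comm);

    MPI_Type_free(&blocktype);
}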