Maybe this could solve your problem: just add \n to the string you want
to display:
printf("Please give N= \n");
Of course, this adds a line break, but the string is displayed. This works
for me without the fflush().
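The robust pattern, regardless of how stdout happens to be buffered, is to flush explicitly after printing an incomplete prompt line. A minimal sketch (the prompt() helper is an illustration, not a standard function):

```c
#include <stdio.h>

/* Print a prompt that may lack a trailing '\n' and force it out of
 * the stdio buffer immediately. Returns the number of characters
 * written, or a negative value on error (printf's convention). */
static int prompt(const char *msg) {
    int n = printf("%s", msg);
    fflush(stdout);   /* make the prompt visible even without '\n' */
    return n;
}
```

Calling prompt("Please give N= ") just before scanf() makes the prompt appear even when stdout is fully buffered, e.g. when it is redirected into a pipe, as it is under mpirun.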
On the other hand, do you really observe that the time of the scanf()
and the time
You reminded me. I now realize that it is not a matter of the compiler, but
an issue of the C language. The printf() function in C doesn't print messages
to the standard output immediately; it stores them in a buffer instead. The
buffer is flushed to standard output only in certain cases, defined in standard C:
1.
I don't remember this being a function of the C language or the compiler; I'm
pretty sure you can change whether stdout is line buffered or not at the OS
level.
As for OMPI, we deliberately set each MPI process' stdout to be line buffered
before relaying it up to the mpirun process. This was b
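For reference, forcing line buffering on a stream is a one-liner in standard C. A sketch of the idea (not Open MPI's actual code):

```c
#include <stdio.h>

/* Switch a stream to line buffering -- the mode Open MPI reportedly
 * sets on each rank's stdout before forwarding output to mpirun.
 * Must be called before the stream is otherwise used.
 * Returns 0 on success, nonzero on failure (setvbuf's convention). */
static int make_line_buffered(FILE *stream) {
    return setvbuf(stream, NULL, _IOLBF, 0);
}
```

With line buffering, every '\n' triggers a flush, which is why adding a newline to the prompt string makes it appear promptly.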
If you have OpenMP questions, you might want to direct those to a different
list and/or check the relevant compiler documentation; this list is for Open
MPI support.
Good luck.
On Mar 29, 2011, at 5:26 PM, job hunter wrote:
> hi all,
> mpiCC -openmp test.c -o test. I fixed the error
How many messages are you sending, and how large are they? I.e., if your
messages are tiny, then the network transport may not be the bottleneck
here.
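One way to reason about this: model a point-to-point transfer with a simple latency/bandwidth cost and see which term dominates at your message size. A sketch with illustrative numbers (50 µs latency, ~1 Gb/s), not measurements of any real interconnect:

```c
/* Simple latency/bandwidth ("alpha-beta") cost model: for tiny
 * messages the per-message latency dominates, so a faster transport
 * barely helps; for large messages the bandwidth term dominates. */
static double transfer_time(double latency_s, double bandwidth_Bps,
                            double msg_bytes) {
    return latency_s + msg_bytes / bandwidth_Bps;
}
```

With these example numbers, an 8-byte message costs essentially one latency (~50 µs), so the network's bandwidth is irrelevant; an 8 MB message is almost entirely bandwidth-bound, and the transport matters a great deal.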
On Mar 28, 2011, at 9:41 AM, Michele Marena wrote:
> I run ompi_info --param btl sm and this is the output
>
> MCA btl
When I have seen this problem before, it *usually* indicated a mismatch of Open
MPI versions that were not ABI compatible. I.e., the application was compiled
and linked against Open MPI version X, but then at run time it found the
shared libraries for Open MPI version Y -- and X is not ABI compatible with Y.
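A quick way to check for this kind of mismatch (the application name and paths below are illustrative):

```shell
# Compare the Open MPI that launches the job...
mpirun --version
# ...with the libraries the binary actually resolves at run time:
ldd ./my_app | grep -i mpi
# If the resolved libmpi path points into a different Open MPI
# install than the mpirun above, fix PATH and LD_LIBRARY_PATH so
# both come from the same tree on every node.
```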
Hi
I'm trying to get fault-tolerant Open MPI running on our cluster for my
semester thesis.
On the login node I was successful; checkpointing works.
Since the compute nodes have different kernels, I had to compile BLCR on the
compute nodes again. BLCR on the compute nodes works. After that I instal
Hi Jeff,
Thank you for your help.
I've launched my app with mpiP both when the two processes are on different
nodes and when the two processes are on the same node.
Process 0 is the manager (it only gathers the results); processes 1 and 2
are workers (they compute).
This is the case where processes 1 and 2 are o
Michele Marena wrote:
> I've launched my app with mpiP both when two processes are
> on different node and when two processes are on the same node.
> The process 0 is the manager (gathers the results only),
> processes 1 and 2 are workers (compute).
> This is the case processes 1 and 2 a
On 3/30/2011 10:08 AM, Eugene Loh wrote:
> Michele Marena wrote:
>> I've launched my app with mpiP both when two processes are on
>> different node and when two processes are on the same node.
>> The process 0 is the manager (gathers the results only), processes 1
>> and 2 are workers (compute).
>> This is the
I am trying to figure out why my jobs aren't getting distributed and need
some help. I have an install of Sun Cluster Tools on Rocks Cluster 5.2
(essentially CentOS 4u2). This user's account has its home dir shared via
NFS. I am getting some strange errors. Here's an example run:
[jian@therock ~]$ /
As one of the error messages suggests, you need to add the Open MPI library
directory to LD_LIBRARY_PATH on all your nodes.
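One common way to do that, assuming an install under /opt/openmpi (adjust to your actual prefix): set the variable in the shell startup file on every node, or let mpirun forward it:

```shell
# Illustrative path: point the dynamic linker at the Open MPI libs
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
# Open MPI's mpirun can also export the variable to the remote nodes:
mpirun -x LD_LIBRARY_PATH -np 4 ./my_app
```

Since the home directory is NFS-shared here, putting the export in ~/.bashrc makes it take effect on every node at once.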
On Wed, Mar 30, 2011 at 1:24 PM, Nehemiah Dacres wrote:
> I am trying to figure out why my jobs aren't getting distributed and need
> some help. I have an install of sun cluster t