I think the limit for a write (and also for a read) is 2^31-1 (2G-1): in a
C program, a count above this value overflows a signed 32-bit integer and
becomes negative. I suppose this is also true in Fortran. The solution is
to make a loop of writes (reads) of no more than this value each.
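If it helps, a minimal sketch of that loop in C, assuming the data is a
contiguous byte buffer and the MPI_File handle fh is already open (all
names are illustrative):

/* Write a buffer larger than 2^31-1 bytes in chunks that fit in an int. */
#include <mpi.h>

static int write_large(MPI_File fh, const char *buf, MPI_Offset total)
{
    const MPI_Offset CHUNK = 2147483647;   /* 2^31 - 1 */
    MPI_Offset done = 0;

    while (done < total) {
        MPI_Offset n = total - done;
        if (n > CHUNK)
            n = CHUNK;
        int err = MPI_File_write(fh, buf + done, (int)n, MPI_BYTE,
                                 MPI_STATUS_IGNORE);
        if (err != MPI_SUCCESS)
            return err;
        done += n;
    }
    return MPI_SUCCESS;
}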
Pascal
Ricardo Reis wrote:
On Tue, 16
wrote:
On Wed, 17 Nov 2010, Pascal Deveze wrote:
I think the limit for a write (and also for a read) is 2^31-1 (2G-1).
In a C program, after this value, an integer becomes negative. I
suppose this is also true in
Fortran. The solution is to make a loop of writes (reads) of no more
than this
Maybe this could solve your problem: just add \n to the string you want
to display:
printf("Please give N= \n");
Of course, this adds a line break, but the string is displayed. This runs
for me without the fflush().
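To make the point explicit, a small sketch of the two variants (stdout is
usually line-buffered on a terminal, so either the '\n' or an explicit
fflush() pushes the prompt out before scanf() blocks):

#include <stdio.h>

int main(void)
{
    int n;

    printf("Please give N= \n");   /* the '\n' flushes line-buffered stdout */
    /* Alternative without the line break:
     *   printf("Please give N= ");
     *   fflush(stdout);
     */
    if (scanf("%d", &n) == 1)
        printf("Got N = %d\n", n);
    return 0;
}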
On the other hand, do you really observe that the time of the scanf ()
and the time
Could you check that your program closes all MPI-IO files before
calling MPI_Finalize?
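For reference, a minimal sketch of the expected order (the file name and
the I/O in the middle are only placeholders):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "data.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* ... MPI_File_write / MPI_File_read calls ... */
    MPI_File_close(&fh);    /* every opened file must be closed here ... */
    MPI_Finalize();         /* ... before this call                      */
    return 0;
}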
fa...@email.com wrote:
> Even inside MPICH2, I have given little attention to thread safety and
> the MPI-IO routines. In MPICH2, each MPI_File* function grabs the big
> critical section lock -- not pretty
Why don't you use the command "mpirun" to run your MPI program?
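For example: mpirun -np 4 ./my_mpi_program  (the process count and the
executable name are just placeholders).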
Pascal
fa...@email.com wrote:
Pascal Deveze wrote:
> Could you check that your program closes all MPI-IO files before
calling MPI_Finalize?
Yes, I checked that. All files should be closed. I've als
Christian,
Suppose you have N processes calling the first MPI_File_get_position_shared().
Some of them run faster and could execute the call to
MPI_File_seek_shared() before all the others have got their file position.
(Note that the "collective" primitive is not a synchronization. In th
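One possible workaround, sketched here under the assumption that fh is the
shared file handle: put an explicit barrier between the two calls so that
no rank can move the shared pointer before every rank has read it.

#include <mpi.h>

static MPI_Offset get_position_then_rewind(MPI_File fh)
{
    MPI_Offset pos;

    MPI_File_get_position_shared(fh, &pos);     /* every rank reads the pointer */
    MPI_Barrier(MPI_COMM_WORLD);                /* real synchronization point   */
    MPI_File_seek_shared(fh, 0, MPI_SEEK_SET);  /* safe to move it only now     */
    return pos;
}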
kend :>
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
[attachment "shared_file_ptr_jumpshot.png" removed by Pascal
Deveze/FR/BULL]
Hi all,
When slurm is configured with the following parameters
TaskPlugin=task/affinity
TaskPluginParam=Cpusets
srun binds the processes by placing them into different
cpusets, each containing a single core.
e.g. "srun -N 2 -n 4" will create 2 cpusets in each of the two allocated
nodes and
users-boun...@open-mpi.org wrote on 18/08/2011 14:41:25:
> From: Ralph Castain
> To: Open MPI Users
> Date: 18/08/2011 14:45
> Subject: Re: [OMPI users] Bindings not detected with slurm (srun)
> Sent by: users-boun...@open-mpi.org
>
> Afraid I am confused. I assume this refers to the tru
Hi,
I am not sure I understand what you are doing.
users-boun...@open-mpi.org wrote on 03/09/2011 11:05:04:
> From: alibeck
> To: Open MPI Users
> Date: 03/09/2011 11:05
> Subject: [OMPI users] problem with MPI-IO at filesizes greater than
> the 32 Bit limit...
> Sent by: users-boun..
I do not see where you initialize the offset on the "Non-master tasks".
This could be the problem.
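A minimal sketch of what I mean, assuming rank 0 computes the offset and
the other ranks must receive a defined value before the read;
compute_offset(), fh, buf and count are placeholders:

#include <mpi.h>

MPI_Offset compute_offset(void);   /* hypothetical helper, defined elsewhere */

static void read_my_part(MPI_File fh, int rank, int *buf, int count)
{
    MPI_Offset offset = 0;                      /* defined on every rank */

    if (rank == 0)
        offset = compute_offset();
    MPI_Bcast(&offset, 1, MPI_OFFSET, 0, MPI_COMM_WORLD);
    MPI_File_read_at(fh, offset, buf, count, MPI_INT, MPI_STATUS_IGNORE);
}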
Pascal
users-boun...@open-mpi.org wrote on 19/04/2012 09:18:31:
> From: Rohan Deshpande
> To: Open MPI Users
> Date: 19/04/2012 09:18
> Subject: Re: [OMPI users] machine exited on signal 11 (
users-boun...@open-mpi.org wrote on 19/04/2012 10:24:16:
> From: Rohan Deshpande
> To: Open MPI Users
> Date: 19/04/2012 10:24
> Subject: Re: [OMPI users] machine exited on signal 11 (Segmentation
fault).
> Sent by: users-boun...@open-mpi.org
>
> Hi Pascal,
>
> The offset is received
users-boun...@open-mpi.org wrote on 19/04/2012 12:42:44:
> From: Rohan Deshpande
> To: Open MPI Users
> Date: 19/04/2012 12:44
> Subject: Re: [OMPI users] machine exited on signal 11 (Segmentation
fault).
> Sent by: users-boun...@open-mpi.org
>
> No, I haven't tried using valgrind.
>
>
users-boun...@open-mpi.org wrote on 01/12/2012 14:47:09:
> From: Eric Chamberland
> To: us...@open-mpi.org
> Date: 01/12/2012 14:47
> Subject: [OMPI users] Lustre hints via environment variables/runtime
parameters
> Sent by: users-boun...@open-mpi.org
>
> Hi,
>
> I am using openmpi 1.6.