Hello everybody
I implemented a parallel simulated annealing algorithm in Fortran. The
algorithm is described as follows:
1. The MPI program initially generates P processes that have ranks 0, 1, ..., P-1.
2. The MPI program generates a starting point and sends it to all processes;
   set T = T0.
3. At the
array when #temps > 5
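In outline, steps 1 and 2 look roughly like the sketch below; the program name,
the model dimension and the initial temperature are placeholders, not my actual
code:

program parallel_sa_sketch
  use mpi
  implicit none
  integer, parameter :: n = 10        ! model dimension (assumed)
  integer :: ierr, rank, nprocs
  real    :: x(n), t, t0

  ! Step 1: mpirun starts P processes with ranks 0, 1, ..., P-1
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Step 2: rank 0 generates a starting point and sends it to all processes
  if (rank == 0) call random_number(x)
  call MPI_Bcast(x, n, MPI_REAL, 0, MPI_COMM_WORLD, ierr)

  t0 = 100.0                          ! assumed initial temperature
  t  = t0                             ! set T = T0

  ! ... each process would run its annealing loop at temperature t here ...

  call MPI_Finalize(ierr)
end program parallel_sa_sketch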
>> On Tue, Apr 15, 2014 at 10:46 AM, Oscar Mojica <o_moji...@hotmail.com> wrote:
> * - memlock -1
> * - stack -1
> * - nofile 4096
>
> See 'man limits.conf' for details.
>
> If it is a cluster, this should be set on all nodes,
> and you may need to ask your system administrator to do it.
>
> I hope this helps,
> Gus Correa
>
Hello everybody
I am trying to run a hybrid MPI + OpenMP program on a cluster. I created a
queue with 14 machines, each one with 16 cores. The program divides the work
among the 14 machines with MPI, and within each machine a loop is further
divided into 8 threads, for example, using OpenMP.
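In structure, the program looks roughly like the sketch below (the program
name and the loop body are placeholders, not my actual code):

program hybrid_sketch
  use mpi
  use omp_lib
  implicit none
  integer :: ierr, rank, nprocs, provided, i
  real    :: partial

  ! MPI part: one process per machine
  call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! OpenMP part: the local loop is split among the threads of this process
  partial = 0.0
  !$omp parallel do reduction(+:partial)
  do i = 1, 1000
     partial = partial + real(i)      ! placeholder work
  end do
  !$omp end parallel do

  print *, 'rank', rank, 'of', nprocs, 'ran with', &
           omp_get_max_threads(), 'threads'
  call MPI_Finalize(ierr)
end program hybrid_sketch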
> > ality (I'm assuming you're using Open MPI 1.8.x)?
> >
> > You can try "mpirun --bind-to none ...", which will have Open MPI not bind
> > MPI processes to cores, which might allow OpenMP to think that it can use
> > all the cores, and therefore
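In practice that means launching along these lines (the executable name here
is just a placeholder):

export OMP_NUM_THREADS=8
mpirun --bind-to none -np 14 ./my_hybrid_program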
> command line or inside the job script in #$ lines as job requests. This would
> mean collecting slots in bunches of OMP_NUM_THREADS on each machine to reach
> the overall specified slot count. Whether OMP_NUM_THREADS or n times
> OMP_NUM_THREADS is allowed per machine needs to be disc
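A sketch of such a job script (the parallel environment name and its
allocation rule are site-specific, so "mpi8" and the executable name are
placeholders):

#$ -S /bin/bash
#$ -N hybrid_job
#$ -cwd
#$ -pe mpi8 112                 # 14 machines x 8 slots each
export OMP_NUM_THREADS=8
mpirun --bind-to none --map-by node -np 14 ./my_hybrid_program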
mails, your advice has been very useful.
PS: The version of SGE is OGS/GE 2011.11p1
Oscar Fabian Mojica Ladino
Geologist M.S. in Geophysics
> From: re...@staff.uni-marburg.de
> Date: Fri, 15 Aug 2014 20:38:12 +0200
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] Running a hybrid MPI+openMP program
I'm learning a lot
Oscar Fabian Mojica Ladino
Geologist M.S. in Geophysics
> From: re...@staff.uni-marburg.de
> Date: Tue, 19 Aug 2014 19:51:46 +0200
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] Running a hybrid MPI+openMP program
>
> Hi,
>
> On 19.08.2014
with single precision.
Any idea what may be going on? I do not know if this is related to MPI
Oscar Mojica
Geologist Ph.D. in Geophysics
SENAI CIMATEC Supercomputing Center
Lattes: http://lattes.cnpq.br/0796232840554652
Thanks guys for your answers.
Actually, the optimization was not disabled, and that was the problem;
compiling it with -O0 solves it. Sorry.
Oscar Mojica
Geologist Ph.D. in Geophysics
SENAI CIMATEC Supercomputing Center
Lattes: http://lattes.cnpq.br/0796232840554652
improve the consistency and reproducibility of floating-point results while
limiting the impact on performance. My program is quite large, so I preferred
to use this option, because there are certainly other problems that are not
related only to reassociation.
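As a small illustration of what reassociation alone can do (this is not from
my program): summing the same single-precision numbers in a different order
already changes the total.

program reassoc_demo
  implicit none
  integer, parameter :: n = 100000
  integer :: i
  real :: a(n), s_fwd, s_bwd

  do i = 1, n
     a(i) = 1.0 / real(i)
  end do

  s_fwd = 0.0
  do i = 1, n                  ! forward sum
     s_fwd = s_fwd + a(i)
  end do

  s_bwd = 0.0
  do i = n, 1, -1              ! reversed sum
     s_bwd = s_bwd + a(i)
  end do

  print *, 's_fwd =', s_fwd, ' s_bwd =', s_bwd, ' diff =', s_fwd - s_bwd
end program reassoc_demo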
Oscar Mojica
Geologist Ph.D. in Geophysics