Hi,

On 22.08.2018 at 17:49, Diego Avesani wrote:
> Dear all,
>
> I have a philosophical question.
>
> I am reading a lot of papers where people use Portable Batch System or job
> scheduler in order to parallelize their code.

To parallelize their code? I would rather phrase it: "Using a scheduler allows compiled applications to be executed simultaneously in a cluster without overloading the nodes, whether they are serial or parallel applications, run several times at the same time or one after the other as available cores permit."

A batch scheduler (like PBS) and a parallel library (like MPI, in any implementation such as Open MPI) don't compete; they cover different aspects of running jobs in a cluster.

There are even problems where you gain nothing by parallelizing the code. Think of the task of rendering 5000 images, or applying certain effects to them: if the code is perfectly parallel and cuts its execution time in half with each doubling of the number of cores, the overall time to get the final result for all images stays the same. For example, with 100 cores and one hour of work per image, 50 rounds of 100 serial jobs take 50 hours, and so do 5000 runs that each use all 100 cores for 36 seconds. Essentially, in such a situation you can execute several serial instances of an application at the same time in a cluster, which might be referred to as "running in parallel", although depending on the context such a statement can be ambiguous. But if you need the result of the first image or computation to decide how to proceed, then it is advantageous to parallelize the application itself instead. A minimal MPI sketch of the image example follows at the end of this message.

-- Reuti

> What are the advantages in using MPI instead?
>
> I am writing a report on my code, where of course I use openMPI. So tell me
> please how can I cite you. You deserve all the credits.
>
> Thanks a lot,
> Thanks again,
>
>
> Diego

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
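As a rough illustration of that last point, below is a minimal sketch of how the 5000-image example could be expressed with MPI in C. The render_image() function is only a placeholder for the real per-image work, and the round-robin distribution of images over ranks is just one possible scheme; for fully independent images, an array job submitted to the scheduler achieves the same overall throughput, which is exactly the point made above.

/* Minimal sketch (illustrative only): distribute 5000 independent
 * image-rendering tasks over MPI ranks.
 * render_image() is a placeholder for the real work.
 *
 * Build:  mpicc render_mpi.c -o render_mpi
 * Run:    mpirun -np <cores> ./render_mpi
 */
#include <mpi.h>
#include <stdio.h>

#define NUM_IMAGES 5000

/* Placeholder for the actual per-image work. */
static void render_image(int index)
{
    printf("rendering image %d\n", index);
}

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Static round-robin distribution: rank r handles images r, r+size, ... */
    for (int i = rank; i < NUM_IMAGES; i += size)
        render_image(i);

    MPI_Finalize();
    return 0;
}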