Hi there,

I have a 2-CPU system (Linux/x86-64) running openmpi-1.1, and I do not specify a hostfile. Lately I'm having performance problems when running my mpi-app this way:

mpiexec -n 2 ./mpi-app config.ini

Both mpi-app processes end up on cpu0, leaving cpu1 idle. After reading the mpirun manpage, it seems that Open MPI schedules tasks across CPUs in a round-robin way, meaning that this should not happen. Given my problem, I assume it's not detecting that this is a 2-way SMP system, and (assuming a UP system) is placing both tasks on cpu0. Is this correct?

The openmpi-default-hostfile says I should not specify localhost in there, and should let the job dispatcher/RAS "detect" the single-node setup. Where should I define/configure, system-wide, that this is a single-node, 2-slot system? I would like to avoid obliging the system's users to pass a hostfile to mpirun/mpiexec. I simply want "mpiexec -n N ./mpi-task" to do the proper job of _really_ spreading the processes evenly between all the system's CPUs.

Best regards, waiting for your answer.

ps.: should I upgrade to the latest Open MPI to have my problem "automagically" solved?

--
Miguel Sousa Filipe
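ps2: to be concrete, this is the kind of system-wide setting I was hoping to be able to make. It's only a sketch of what I have in mind; I'm guessing at the exact file locations and at whether these entries are actually honored on a single-node install:

```
# $prefix/etc/openmpi-mca-params.conf  (system-wide MCA parameters)
# ask Open MPI to bind each MPI process to its own processor:
mpi_paffinity_alone = 1

# ...or alternatively, in $prefix/etc/openmpi-default-hostfile
# (even though the comments in that file warn against listing localhost):
# localhost slots=2
```

If either of these is the supported way to declare "one node, two slots" for all users, that would solve my problem.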