Hi,
On 11.01.2012, at 19:12, Mark Suhovecky wrote:
> Edmund-
>
> Yeah, I've tried that. No difference. Our template .bash_profile sources
> the user's .bashrc, so non-interactive bash shells in our setup are sourcing
> .bashrc.
SGE 6.2u5 can't handle multi-line environment variables or functions.
Hi Mark
I wonder if you need to initialize the module command environment inside your
SGE
bash submission script:
source $MODULESHOME/init/<shell>
where <shell> is bash in this case. See 'man module' for more details.
This would be before you actually invoke the module command:
module load openmpi
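For example (an untested sketch; ./my_mpi_program is just a placeholder for
whatever your job actually runs):

#!/bin/bash

# Make the 'module' command available in this non-interactive shell:
source $MODULESHOME/init/bash

module load openmpi
mpiexec ./my_mpi_program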
I am guessing this might be what's missing.
Edmund-
Yeah, I've tried that. No difference. Our template .bash_profile sources
the user's .bashrc, so non-interactive bash shells in our setup are sourcing
.bashrc.
The modules environment is defined, and works; only jobs that run across
multiple machines see this error.
Mark Suhovecky
Hi Mark,
Have you tried adding -l to the #! line?
#!/bin/bash -l
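With -l, bash starts as a login shell and sources /etc/profile and your
~/.bash_profile even when running non-interactively. Your script would then
begin like this (a sketch; ./my_mpi_program is a placeholder):

#!/bin/bash -l

module load ompi

mpiexec ./my_mpi_program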
On Wed, Jan 11, 2012 at 10:42 AM, Mark Suhovecky wrote:
> #!/bin/bash
> #$
>
> module load ompi
>
> mpiexec
>
> When mpiexec is run, we see the following error:
>
>
> bash: module: line 1: syntax error: unexpected end of file
Hi-
We run OpenMPI 1.4.3 on RHEL5 in a cluster environment.
We use Univa Grid Engine 8.0.1 (an SGE spinoff) for job submission.
We've just recently begun supporting the bash shell for submitted jobs,
and are seeing a problem with submitted MPI jobs.
Our software environment is managed with Modules.
Ralph, Jeff, thanks!
I managed to make it work with the following configure options:
./configure --with-pmi=/usr/ --with-slurm=/usr/ --without-psm
--prefix=`pwd`/install
Regards,
Andrew Senin
On Wed, Jan 11, 2012 at 7:17 PM, Ralph Castain wrote:
> Well, yes - but it isn't quite that simple. :-/
Well, yes - but it isn't quite that simple. :-/
If you want to direct-launch on slurm without using the resv_ports option, you
need to build OMPI to include PMI support by including --with-pmi on your
configure cmd line. You may need to point to where pmi.h resides (e.g.,
--with-pmi=/opt/slurm/...).
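Something along these lines (a sketch; the paths are examples and depend on
where your SLURM install and pmi.h actually live):

# Build Open MPI with PMI support:
./configure --with-pmi=/opt/slurm --prefix=/opt/openmpi
make install

# Then launch directly with srun, no mpirun needed:
srun -n 8 ./my_mpi_application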
I am a little confused by your problem statement. Are you saying you
want each MPI process to have multiple threads that can call MPI
concurrently? If so, you'll want to read up on the MPI_Init_thread
function.
--td
On 1/11/2012 7:19 AM, Hamilton Fischer wrote:
Hi, I'm actually using mpi4py but my question should be similar to normal MPI in spirit.
The latest -- 1.5.5rc2 (just released last night) -- has direct "srun
my_mpi_application" integration. It's not in a final release yet, but as you
can probably guess by the version number, it'll be in the final version of
1.5.5.
We have 1-2 bugs remaining in 1.5.5 that are actively being worked on.
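With a PMI-enabled build (see the configure notes elsewhere in this thread),
launching reduces to something like this (process count and binary name are
placeholders):

srun -n 16 ./my_mpi_application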
Hi, I'm actually using mpi4py but my question should be similar to normal MPI
in spirit.
Simply, I want to run an MPMD application with a dedicated thread for each node
(I have a small network). I was wondering if it was okay to do a blocking recv
in each independent thread. Of course, since each
Hi,
On 11.01.2012, at 05:46, Ralph Castain wrote:
> You might want to ask that on the Beowulf mailing lists - I suspect it has
> something to do with the mount procedure, but honestly have no real idea how
> to resolve it.
>
> On Jan 10, 2012, at 8:45 PM, Shaandar Nyamtulga wrote:
>
>> Hi
>>
Hi,
Could you please describe the current status of SLURM integration? I
had a feeling srun supports direct launch of Open MPI applications
(without mpirun) compiled with the 1.5 branch. At least one of my
colleagues succeeded in doing that.
But when I installed SLURM and the head revision of OpenMPI 1.5