Check the contents of $PBS_NODEFILE and see how many nodes it contains.
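For example, from inside a running job (a minimal sketch; PBS_NODEFILE is only defined in the job's environment, and its exact contents depend on your Torque setup):

  echo $PBS_NODEFILE            # path to the machine file Torque wrote for this job
  wc -l < $PBS_NODEFILE         # total number of slots allocated
  sort $PBS_NODEFILE | uniq -c  # distinct hosts and the slot count on each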
>
> On Jun 15, 2010, at 3:56 AM, Govind Songara wrote:
>
> Hi,
>
> I am using an Open MPI build with tm support.
> When I run a job requesting two nodes, it runs only on a single node.
> Here is my script:
> >cat mpipbs-script.
> des = 1
> Can someone please advise if I am missing anything here.
> Regards
> Govind
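For reference, a minimal two-node Torque script might look like the sketch below; the ppn value, walltime, process count, and the location of hello are assumptions, not taken from the original script:

  #!/bin/sh
  #PBS -l nodes=2:ppn=4         # assumed: two nodes with four cores each
  #PBS -l walltime=00:10:00     # assumed walltime
  cd $PBS_O_WORKDIR
  # with a tm-enabled Open MPI, mpirun reads the allocation by itself;
  # without tm support, pass the machine file explicitly:
  /usr/lib64/openmpi/1.4-gcc/bin/mpirun -np 8 -machinefile $PBS_NODEFILE ./hello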
Hi Ralph,
The allocation looks fine, but why does it show the number of slots as 1?
The execution host has 4 processors, and the nodes file also defines np=4.
== ALLOCATED NODES ==
Data for node: Name: node56.beowulf.cluster  Num slots: 1  Max slots: 0
==
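One common reason for a single reported slot (an assumption here, since the qsub request is not shown): the job only asked Torque for one processor. The np=4 entry in the server's nodes file sets the maximum per node, not what a job receives; the slots Open MPI sees come from the request, e.g.:

  # nodes file entry on the pbs_server (maximum of 4 slots on this host):
  #   node56.beowulf.cluster np=4
  # ask for all four slots when submitting (the script name is a placeholder):
  qsub -l nodes=1:ppn=4 myjob.sh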
Hi Gus,
OpenMPI was not built with tm support.
The submission/execution hosts do not have any of the PBS environment
variables set (PBS_O_WORKDIR, $PBS_NODEFILE).
How can I get them set?
Regards
Govind
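Two checks that may help here (sketches only; the configure paths are assumptions):

  # PBS_O_WORKDIR and PBS_NODEFILE are set by pbs_mom only inside a running job,
  # so they look unset in a login shell; print them from the job script instead:
  echo "wd=$PBS_O_WORKDIR nodefile=$PBS_NODEFILE"

  # check whether this Open MPI was built with Torque (tm) support:
  ompi_info | grep tm
  # if no tm components are listed, rebuild from source with tm enabled, e.g.
  # (the Torque location and install prefix below are assumptions):
  ./configure --with-tm=/usr --prefix=/usr/lib64/openmpi/1.4-gcc
  make all install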
On 9 June 2010 18:45, Gus Correa wrote:
> Hi Govind
>
> Besides what Ralph said,
> you need to provide the path to "hello", unless it sits in your PATH
> environment!
>
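For example (a sketch, assuming hello sits in the directory the job was submitted from):

  cd $PBS_O_WORKDIR
  /usr/lib64/openmpi/1.4-gcc/bin/mpirun ./hello   # explicit relative path
  # or give the absolute path, or add hello's directory to PATH in the job script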
> On Jun 9, 2010, at 9:37 AM, Govind wrote:
>
>
> #!/bin/sh
> /usr/lib64/openmpi/1.4-gcc/bin/mpirun hello
>
>
> On 9 June 2010 16:21, David Zhang wrote:
>
>> what does your my-script.sh look like?
#!/bin/sh
/usr/lib64/openmpi/1.4-gcc/bin/mpirun hello
On 9 June 2010 16:21, David Zhang wrote:
> what does your my-script.sh look like?
>
> On Wed, Jun 9, 2010 at 8:17 AM, Govind wrote:
>
>> Hi,
>>
>> I have installed the following openMPI package on the worker node f
Hello World! from process 2 out of 4 on node56.beowulf.cluster
Hello World! from process 0 out of 4 on node56.beowulf.cluster
Hello World! from process 3 out of 4 on node56.beowulf.cluster
Hello World! from process 1 out of 4 on node56.beowulf.cluster
Could you please advise if I am missing anything here.
Regards
Govind