Thanks
I have downloaded http://padb.googlecode.com/files/padb-3.2-beta1.tar.gz
and followed the instructions of INSTALL file and installed it at
/mypath/padb32
But, I got:
-bash-3.2$ padb -Ormgr=pbs -Q 48279.cluster
Job 48279.cluster is not active
Actually, the job was running.
I have installed
Can you install MPI on your local machine? As someone said earlier, you
don't need a cluster to run MPI. You can run MPI with multiple processes on
a single computer.
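For example, something along these lines runs four processes on one machine (the process count and the program name are just placeholders):

mpirun -np 4 ./my_mpi_prog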
On Mon, Oct 25, 2010 at 12:40 PM, Ashley Pittman wrote:
>
> On 25 Oct 2010, at 20:18, Jack Bryan wrote:
>
> > Thanks
> > I have d
On 25 Oct 2010, at 20:18, Jack Bryan wrote:
> Thanks
> I have downloaded
> http://padb.googlecode.com/files/padb-3.0.tgz
>
> and compile it.
>
> But, no user manual, I can not use it by padb -aQ.
The -a flag is a shortcut for all jobs; if you are providing a jobid (which is
normally numeric), you should not also pass -a.
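For example, a rough sketch based on the commands quoted in this thread (the jobid is hypothetical):

padb -aQ                      # message queues for all of your jobs
padb -Q 48279                 # message queues for one specific job
padb -Ormgr=pbs -Q 48279      # the same, explicitly naming the PBS resource manager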
Also, btw, using these two values and make clean, I was able to both
configure and build Open MPI properly. After that I compiled an example
code with the -m32 flag and it compiled properly too :D. It remains to be seen
whether my machines will run it properly or not.
Regards,
Saahil
Thanks
I have downloaded http://padb.googlecode.com/files/padb-3.0.tgz
and compiled it.
But there is no user manual, and I cannot use it with padb -aQ:
./padb -aQ myjob
padb: Error: --all incompatible with specific ids
Actually, myjob is running in the queue.
Do you have a user manual explaining how to use it?
thanks
Ralph, well my --host flag contains i686-pc-linux-gnu and so does --build.
On Oct 26, 2010 12:15am, Ralph Castain wrote:
The problem is that you set the build and the host to the -same-
architecture - that indicates it isn't a cross-compile situation. The
--host flag should indicate the arch
thanks
But, the code is too long.
Jack Oct. 25 2010
> Date: Mon, 25 Oct 2010 14:08:54 -0400
> From: g...@ldeo.columbia.edu
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] Open MPI program cannot complete
>
> Your job may be queued, not executing, because there are no
> resources available,
The problem is that you set the build and the host to the -same- architecture -
that indicates it isn't a cross-compile situation. The --host flag should
indicate the arch of the machines that will run the code - in your case, that
would be i386-pc-linux-gnu
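A sketch of what the configure line might then look like (the prefix and flags are taken from this thread; the build/host triplets are only illustrative, so adjust them to your machines):

./configure --prefix=/home/wolf/openmpi \
    --build=x86_64-redhat-linux-gnu \
    --host=i686-pc-linux-gnu \
    CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 LDFLAGS=-m32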
On Oct 25, 2010, at 12:30 PM, saahi
Make sure to "make clean", too -- just to clean up any cruft that may be left
over from a prior build that is not necessarily compatible with your new set of
configure/build/link options.
On Oct 25, 2010, at 2:35 PM, Ralph Castain wrote:
> I think you are missing the --host flag, so it still thinks it is building for the current machine.
I think you are missing the --host flag, so it still thinks it is building for
the current machine.
On Oct 25, 2010, at 12:29 PM, saahil...@gmail.com wrote:
> Ralph,
> As you suggested, I configured with the following options -
>
> ./configure --prefix=/home/wolf/openmpi/ CFLAGS=-m32 CXXFLAG
I also tried by adding
--host=i686-pc-linux-gnu
along with the --build option. Same error :(
On Oct 25, 2010 11:59pm, saahil...@gmail.com wrote:
Ralph,
As you suggested, I configured with the following options -
./configure --prefix=/home/wolf/openmpi/ CFLAGS=-m32 CXXFLAGS=-m32
FFLAGS=-m32
Ralph,
As you suggested, I configured with the following options -
./configure --prefix=/home/wolf/openmpi/ CFLAGS=-m32 CXXFLAGS=-m32
FFLAGS=-m32 FCFLAGS=-m32 --build=i686-pc-linux-gnu LDFLAGS=-m32
I'm afraid I am still getting the same error messages while making as I did
last time. Did I
You might also need LDFLAGS=-m32 ...?
On Oct 25, 2010, at 1:56 PM, saahil...@gmail.com wrote:
> Hello,
> I am a beginner using Open MPI to set up a simple Beowulf cluster of PCs for
> my Distributed Systems lab. My head node is my x86_64 architecture Fedora 12
> machine. The rest of my nodes a
Your job may be queued, not executing, because there are no
resources available (all nodes are busy).
Try qstat -a.
Posting a code snippet with all your MPI calls may prove effective.
You might get a trove of advice for a thrift of effort.
Jeff Squyres wrote:
Check the man page for qsub for prop
Do ./configure --help and you'll see options for specifying the host and build
target. You need to do that when cross-compiling.
On Oct 25, 2010, at 12:01 PM, saahil...@gmail.com wrote:
> -- Forwarded message --
> From: saahil...@gmail.com
> Date: Oct 25, 2010 11:26pm
> Subject:
-- Forwarded message --
From: saahil...@gmail.com
Date: Oct 25, 2010 11:26pm
Subject: Cross compiling for 32 bit from a 64 bit machine
To: us...@open-mpi.org
CC:
Hello,
I am a beginner using Open MPI to set up a simple Beowulf cluster of PCs
for my Distributed Systems lab. My head node is my x86_64 architecture
Fedora 12 machine. The rest of my nodes are i386 Fedora 13 machines. I
understand that I need to compile Open MPI with CFLAGS=-m32 so that I
Check the man page for qsub for proper use.
On Oct 25, 2010, at 1:49 PM, Jack Bryan wrote:
> thanks
>
> I use
> qsub -I nsga2_job.sh
> qsub: waiting for job 48270.clusterName to start
>
> By qstat
> I found the job name is none and no results show up.
>
> No shell prompt
thanks
I use:
qsub -I nsga2_job.sh
qsub: waiting for job 48270.clusterName to start
By qstat, I found the job name is none and no results show up.
No shell prompt appears; the command line just hangs there, no response.
Any help is appreciated.
Thanks
Jack
Oct. 25 2010
> From: js
On Mon, Oct 25, 2010 at 19:35, Jack Bryan wrote:
> I have to use #PBS to submit any jobs in my cluster.
> I cannot use command line to hang a job on my cluster.
>
You don't need a cluster to run MPI jobs; can you run the job on whatever
your development machine is? Does it hang there?
PBS inter
Can you use the interactive mode of PBS to get 5 cores on 1 node? IIRC, "qsub
-I ..." ?
Then you get a shell prompt with your allocated cores and can run stuff
interactively. I don't know if your site allows this, but interactive
debugging here might be *significantly* easier than trying to auto
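A sketch of what that might look like (the queue name and resource request are guesses; check what your site allows):

qsub -I -q queuename -l walltime=00:30:00,nodes=1:ppn=5
# once the interactive shell on the allocated node appears:
mpirun -np 5 /mypath/myprog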
thanks
I have to use a #PBS script to submit any jobs on my cluster. I cannot run a
job directly from the command line on my cluster.
this is my script:
--------------------------------
#!/bin/bash
#PBS -N jobname
#PBS -l walltime=00:08:00,nodes=1
#PBS -q queuename
COMMAND=/mypath/myprog
NCORES=5
cd $PBS_O_WORKD
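For reference, a complete script along those lines might look like this (a sketch only; the mpirun line, the ppn request, and the full PBS_O_WORKDIR variable name are my assumptions, since the original is cut off above):

#!/bin/bash
#PBS -N jobname
#PBS -l walltime=00:08:00,nodes=1:ppn=5   # ppn=5 assumed, so 5 cores are actually allocated
#PBS -q queuename
COMMAND=/mypath/myprog
NCORES=5
cd $PBS_O_WORKDIR                         # assuming the truncated variable above is PBS_O_WORKDIR
mpirun -np $NCORES $COMMAND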
On Mon, Oct 25, 2010 at 19:07, Jack Bryan wrote:
> I need to use #PBS parallel job script to submit a job on MPI cluster.
>
Is it not possible to reproduce locally? Most clusters have a way to submit
an interactive job (which would let you start this thing and then inspect
individual processes)
On 25 Oct 2010, at 17:26, Jack Bryan wrote:
> Thanks, the problem is still there.
>
> I used:
>
> Only process 0 returns. Other processes are still stuck in
> MPI_Finalize().
>
> Any help is appreciated.
You can use the command "padb -aQ" to show you the message queues for your
application.
thanks,
Would you tell me how to use
(gdb --batch -ex 'bt full' -ex 'info reg' -pid ZOMBIE_PID)
in MPI ?
I need to use #PBS parallel job script to submit a job on MPI cluster.
Where should I put the (gdb --batch -ex 'bt full' -ex 'info reg' -pid
ZOMBIE_PID) in the script ?
How to get the
On Mon, Oct 25, 2010 at 18:26, Jack Bryan wrote:
> Thanks, the problem is still there.
This really doesn't prove that there are no outstanding asynchronous
requests, but perhaps you know that there are not, despite not being able to
post a complete test case here. I suggest attaching a debugger.
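One way to do that from inside the batch script might be the following (a sketch only; the program name myprog, the sleep interval, and the assumption that gdb is installed on the compute node are all mine):

mpirun -np $NCORES $COMMAND &          # launch the MPI job in the background
sleep 300                              # give it time to reach the hang
for pid in $(pgrep -u $USER myprog); do
    # dump a full backtrace and register state for each local rank
    gdb --batch -ex 'bt full' -ex 'info reg' -p $pid > gdb-$pid.log 2>&1
done
wait

Note this only reaches the ranks running on the node where the script executes; ranks on other nodes would need the same treatment, e.g. via ssh.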
Thanks, the problem is still there.
I used:
cout << "In main(), I am rank " << myRank << " , I am before
MPI_Barrier(MPI_COMM_WORLD). \n\n" << endl ;
MPI_Barrier(MPI_COMM_WORLD);cout << "In main(), I am rank "
<< myRank << " , I am before MPI_Finalize() and after
I think I got this problem before. Put an mpi_barrier(mpi_comm_world) before
mpi_finalize for all processes. For me, MPI terminates nicely only when all
processes call mpi_finalize at the same time. So I do it in all my
programs.
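A minimal sketch of that pattern (assuming every outstanding request has already been completed, e.g. with MPI_Waitall):

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // ... application work; every Isend/Irecv matched by a Wait/Waitall ...

    MPI_Barrier(MPI_COMM_WORLD);   // all ranks synchronize here
    std::cout << "rank " << rank << " entering MPI_Finalize()" << std::endl;
    MPI_Finalize();                // then everyone finalizes together
    return 0;
}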
On Mon, Oct 25, 2010 at 7:13 AM, Jack Bryan wrote:
> Thanks,
Thanks. But I have put an mpi_waitall(request) before
cout << " I am rank " << rank << " I am before MPI_Finalize()" << endl;
If the above line has been printed out, it means that all requests have been
checked and have finished, right?
What may be the possible reasons for getting stuck?
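For what it's worth, a self-contained sketch of how MPI_Waitall is typically used with an array of requests (the exchange below is made up for illustration, not your code):

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendVal = rank, recvVal = -1;

    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;                 // ranks 0 and 1 exchange one int
        MPI_Request requests[2];
        MPI_Isend(&sendVal, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &requests[0]);
        MPI_Irecv(&recvVal, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &requests[1]);

        // MPI_Waitall returns only when every request in the array has completed,
        // so a print after it really does mean *these* requests are finished.
        MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);
        std::cout << "rank " << rank << ": MPI_Waitall done, received " << recvVal << std::endl;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

It only proves that the requests you passed in have completed; any other outstanding request, on this rank or another one, can still keep MPI_Finalize from returning.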
So what you are saying is *all* the ranks have entered MPI_Finalize
and only a subset has exited, judging from the prints placed before and after
MPI_Finalize. Good. So my guess is that the processes stuck in
MPI_Finalize have a prior MPI request outstanding that for whatever
reason is unable to complete.
thanks
I found a problem:
I used:
cout << " I am rank " << rank << " I am before MPI_Finalize()" << endl;
MPI_Finalize();
cout << " I am rank " << rank << " I am after MPI_Finalize()" << endl;
return 0;
I can get the output " I am rank 0 (1, 2, ) I am before M
thanks
I used:
cout << " I am rank " << rank << " I am before MPI_Finalize()" <<
endl; MPI_Finalize(); return 0;
I can get the output " I am rank 0 (1, 2, ) I am before MPI_Finalize() ".
Are there other, better ways to check this?
Any help is appreciated.