I've been investigating and there is no firewall that could stop TCP
traffic in the cluster. With the option --mca plm_base_verbose 30 I get
the following output:
[itanium1] /home/otro > mpirun --mca plm_base_verbose 30 --host itanium2
helloworld.out
[itanium1:08311] mca: base: components_open: Lo
Looks to me like you have an error in your cmd line - you aren't specifying the
number of procs to run. My guess is that the system is hanging trying to
resolve the process map as a result. Try adding "-np 1" to the cmd line.
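For example, keeping the same host and executable as in the original report, the corrected command would look something like:
[itanium1] /home/otro > mpirun -np 1 --mca plm_base_verbose 30 --host itanium2 helloworld.out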
The output indicates it is dropping slurm because it doesn't see a slu
Hello List,
I hope you can help us out on this one, as we have been trying to
figure it out for weeks.
The situation: We have a program capable of splitting into several
processes that are distributed across the nodes of a cluster network
using Open MPI. We were running that system on "older" cluster hardware (In
I've been having similar problems using Fedora Core 9. I believe the
issue may be with SELinux, but this is just an educated guess. In my
setup, shortly after a login via MPI, an entry appears in
/var/log/messages on the compute node, as follows:
Mar 30 12:39:45 kernel: type=1400 audi
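If SELinux is in fact interfering, it should show up in the audit log before any configuration is changed; on Fedora, something along these lines (assuming the standard audit tools are installed) would confirm or rule it out:

  getenforce                   # show the current SELinux mode
  ausearch -m avc -ts recent   # list recent AVC denials, if any
  setenforce 0                 # switch to permissive mode for a test run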
I looked at TORQUE and it looks good, indeed. I will give it a try for testing.
I just have some questions:
Torque requires Moab, but from what I've read on the site you have to buy
Moab, right?
I'm looking for a 100% free solution
Cristobal
On Mon, Mar 29, 2010 at 3:48 PM, Jody Klymak wrote:
On Mar 30, 2010, at 11:12 AM, Cristobal Navarro wrote:
I just have some questions:
Torque requires Moab, but from what I've read on the site you have
to buy Moab, right?
I am pretty sure you can download Torque w/o Moab. I do not use Moab,
which I think is a higher-level scheduling layer.
Jody Klymak wrote:
>
> On Mar 30, 2010, at 11:12 AM, Cristobal Navarro wrote:
>
>> I just have some questions:
>> Torque requires Moab, but from what I've read on the site you have to
>> buy Moab, right?
>
> I am pretty sure you can download Torque w/o Moab. I do not use Moab,
> which I think i
Craig Tierney wrote:
Jody Klymak wrote:
On Mar 30, 2010, at 11:12 AM, Cristobal Navarro wrote:
I just have some questions:
Torque requires Moab, but from what I've read on the site you have to
buy Moab, right?
I am pretty sure you can download Torque w/o Moab. I do not use Moab,
which I thin
Hi Jeff,
I tested 1.4.2a1r22893, and it does not hang in ompi_free_list_grow.
I hadn't noticed that the 1.4.1 installation I was using was configured
with --enable-mpi-threads. Could that have been related to this problem?
Cheers,
Shaun
On Mon, 2010-03-29 at 17:00 -0700, Jeff Squyres wrote:
> C
On Mar 30, 2010, at 3:15 PM, Shaun Jackman wrote:
> Hi Jeff,
>
> I tested 1.4.2a1r22893, and it does not hang in ompi_free_list_grow.
>
> I hadn't noticed that the 1.4.1 installation I was using was configured
> with --enable-mpi-threads. Could that have been related to this problem?
Yes, very
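(For anyone hitting the same thing: one way to check how a given installation was configured is to ask ompi_info, the standard Open MPI utility, e.g.:

  ompi_info | grep -i thread

which reports whether the library was built with MPI thread support.)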
I changed the SELinux config to permissive (log only), and it didn't
change anything. Back to the drawing board.
Robert Collyer wrote:
I've been having similar problems using Fedora Core 9. I believe the
issue may be with SELinux, but this is just an educated guess. In my
setup, shortly aft
Hi all,
I posted before about doing a domain decomposition on a 3D array in C, and this
is sort of a follow-up to that. I was able to get the calculations working
correctly by performing them on XZ sub-domains for all Y dimensions
of the space. I think someone referred to this as a
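A minimal sketch of the bookkeeping behind such a slab decomposition, with all names and sizes illustrative rather than taken from the original code:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      const int ny = 100;          /* global number of Y planes (illustrative) */
      int rank, nprocs;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      /* Split the Y planes as evenly as possible; the first 'rem'
         ranks get one extra plane. Each rank then sweeps its own
         XZ sub-domains over its share of Y. */
      int base = ny / nprocs;
      int rem  = ny % nprocs;
      int ylo  = rank * base + (rank < rem ? rank : rem);
      int nyl  = base + (rank < rem ? 1 : 0);

      printf("rank %d owns Y planes [%d, %d)\n", rank, ylo, ylo + nyl);

      MPI_Finalize();
      return 0;
  }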
Hi Derek
Great to read that you parallelized the code.
Sorry to hear about the OO problems,
although I enjoyed reading your characterization of it. :)
We also have plenty of that,
mostly with some Fortran90 codes that go OOverboard.
I think I suggested "YZ-books", i.e., decompose the domain acr
If using the master/slave I/O model, would it be better to cycle through
all the processes, with each one writing its part of the array into the
file? This file would be opened in "stream" mode...
like

do p = 0, nprocs-1
   if (my_rank .eq. p) then
      openfile (append mode)
      write_to_file
      closefile
   end if
   call MPI_Barrier(comm)  ! so the ranks take their turns in order
end do
Hello Ricardo Reis!
How is Radio Zero doing?
Doesn't this serialize the I/O operation across the processors,
whereas MPI_Gather followed by rank_0 I/O might move
the data faster to rank_0, and eventually to disk
(particularly when the number of processes is large)?
I never thought of your s
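For reference, a minimal sketch of the gather-then-write approach under discussion, assuming fixed-size per-rank slices and illustrative names throughout:

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      const int n = 1000;          /* elements per rank (illustrative) */
      int rank, nprocs;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      double *local = malloc(n * sizeof(double));
      for (int i = 0; i < n; i++)
          local[i] = rank + i;     /* stand-in for this rank's slice */

      double *global = NULL;
      if (rank == 0)
          global = malloc((size_t)n * nprocs * sizeof(double));

      /* Collect every rank's slice on rank 0 in rank order,
         then let rank 0 alone touch the disk. */
      MPI_Gather(local, n, MPI_DOUBLE,
                 global, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

      if (rank == 0) {
          FILE *f = fopen("output.dat", "wb");
          fwrite(global, sizeof(double), (size_t)n * nprocs, f);
          fclose(f);
          free(global);
      }
      free(local);

      MPI_Finalize();
      return 0;
  }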