After I sent the first mail, about a dozen people e-mailed me asking
where the videos were located. :-)
http://www.open-mpi.org/video/
Also, "Videos" is a link on the left-hand side navigation of the Open
MPI web site, so there's no need to memorize the link.
On May 27, 2008, at 6:43
Yep. Wall time is nowhere near violation (the job dies about 2 minutes
into a 30-minute allocation). I did a ulimit -a through qsub and
directly on the node (as the same user in both cases), and the results
were identical (most items were unlimited).
Any other ideas?
--Jim
On Tue, May 27, 2008 at 9:25
Jeff Squyres wrote:
> Over the past year or two, I have been slowly creating a large set of
> Open MPI training material that I've used to present to my company's
> customers and partners. I have just recently received permission to
> release all of my slides to the greater HPC community. Woo hoo!
On May 27, 2008, at 9:33 AM, Gabriele Fatigati wrote:
Great, it works!
Thank you very very much.
But can you explain to me how this parameter works?
You might want to have a look at this short video for a little
background on some relevant OpenFabrics concepts:
http://www.open-mpi.org/video/
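As general background, MCA parameters like this one can be inspected
and set from the command line; a short sketch (the parameter shown,
btl_openib_ib_timeout, is only an illustration, not necessarily the
parameter discussed here):

ompi_info --param btl openib                          # list the openib BTL's parameters and defaults
mpirun --mca btl_openib_ib_timeout 20 -np 4 ./a.out   # override one parameter for a single run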
Oops! I neglected to include the link to the videos on the web site:
http://www.open-mpi.org/video/
On May 27, 2008, at 12:41 PM, Jeff Squyres wrote:
Over the past year or two, I have been slowly creating a large set of
Open MPI training material that I've used to present to my company's
customers and partners. I have just recently received permission to
release all of my slides to the greater HPC community. Woo hoo!
Note that "receivi
Hi all:
I've got a problem with a user's MPI job. This code is in use on
dozens of clusters around the world, but for some reason, when run on
my Rocks 4.3 cluster, it dies at random timesteps. The logs are quite
unhelpful:
[root@aeolus logs]# more 2047.aeolus.OU
Warning: no access to tty (Bad file descriptor).
I have updated to Open MPI 1.2.6 and had the user rerun his jobs. He's
getting similar output:
[root@aeolus logs]# more 2047.aeolus.OU
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
data directory is /mnt/pvfs2/patton/data/chem/aa1
exec directory is /mnt/pvfs
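One way to get more information out of a failing run like this is to
raise Open MPI's verbosity and allow core dumps; a sketch under those
assumptions (the executable name and verbosity level are illustrative):

ulimit -c unlimited    # let crashing ranks write core files
mpirun --debug-daemons --mca btl_base_verbose 30 -np 16 ./chem_model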
Great, it works!
Thank you very very much.
But can you explain to me how this parameter works?
On Thu, 15 May 2008 21:40:45 -0400, Jeff Squyres said:
>
> Sorry this message escaped for so long; it got buried in my INBOX. The
> problem you're seeing might be related to one we just answered about a