On Jan 26, 2009, at 4:04 PM, Hartzman, Leslie D (MS) wrote:
Process 'A'
-
Initialize requests to MPI_REQUEST_NULL
for (i = 0; i < n; i++)
{
if (rank == 0)
{
initialize 'command' structure
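
[For illustration only, not part of the original post: a minimal C sketch of the pattern described above -- an array of requests initialized to MPI_REQUEST_NULL, with rank 0 filling a command structure and posting nonblocking sends. The 'command' struct contents, the peer rank, and the tag are placeholder assumptions.]

#include <mpi.h>
#include <string.h>

#define N 4                      /* number of requests; placeholder */

struct command {                 /* placeholder contents */
    int opcode;
    int arg;
};

int main(int argc, char **argv)
{
    int rank, size, i;
    MPI_Request reqs[N];
    struct command cmds[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Initialize every request to MPI_REQUEST_NULL so later completion
       calls (e.g. MPI_Waitall) are safe even for slots never started. */
    for (i = 0; i < N; i++)
        reqs[i] = MPI_REQUEST_NULL;

    for (i = 0; i < N; i++) {
        if (rank == 0 && size > 1) {
            /* initialize the 'command' structure (placeholder values) */
            memset(&cmds[i], 0, sizeof(cmds[i]));
            cmds[i].opcode = i;
            /* post a nonblocking send to rank 1 (placeholder peer/tag) */
            MPI_Isend(&cmds[i], (int)sizeof(cmds[i]), MPI_BYTE, 1, 0,
                      MPI_COMM_WORLD, &reqs[i]);
        } else if (rank == 1) {
            MPI_Irecv(&cmds[i], (int)sizeof(cmds[i]), MPI_BYTE, 0, 0,
                      MPI_COMM_WORLD, &reqs[i]);
        }
    }

    /* requests still equal to MPI_REQUEST_NULL are simply ignored here */
    MPI_Waitall(N, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}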
On Jan 26, 2009, at 4:57 PM, Ted Yu wrote:
I'm new to this group. I'm trying to implement a parallel quantum
code called "Seqquest".
I'm trying to figure out why the implementation of this code produces
the following error:
This job has allocated 2 cpus
Signal:11 info.si_er
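
[As a general debugging aid, not something suggested in the thread itself: on Linux/glibc, one way to narrow down a Signal 11 (SIGSEGV) in a large code like this is to install a handler in each rank that dumps a backtrace before exiting. A minimal sketch:]

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void segv_handler(int sig)
{
    void *frames[64];
    int n;

    (void)sig;
    n = backtrace(frames, 64);
    /* backtrace_symbols_fd() writes straight to a file descriptor and
       avoids malloc(), which is not safe inside a signal handler */
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, segv_handler);

    /* ... application code; a segfault now prints a backtrace ... */
    return 0;
}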
You may be able to use an Intel series 11 Fortran compiler with gcc to
compile Open MPI, but it depends on exactly what that series 11
Fortran compiler supports. If it supports mixing object files from
multiple compilers like this, then hypothetically OMPI can be compiled
this way (ifort
I'm afraid I don't have access to any Itaniums, so debugging this will
be difficult.
The source file should be the same between OMPI v1.2 and OMPI v1.3,
so *something* else must be different between the two builds. Can one
of you examine the output of "make" to determine what is different?
- check the s
Could the nodes be running out of shared memory and/or temp filesystem
space?
On Jan 29, 2009, at 3:05 PM, Rolf vandeVaart wrote:
I have not seen this before. I assume that for some reason, the
shared memory transport layer cannot create the file it uses for
communicating within a node
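
[A generic sketch, not from the thread, for checking the "running out of temp filesystem space" theory: the actual location of Open MPI's session directory and shared-memory backing file depends on your configuration; /tmp below is just a common default.]

#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/tmp";
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }

    /* space available to unprivileged processes, in megabytes */
    double avail_mb =
        (double)vfs.f_bavail * (double)vfs.f_frsize / (1024.0 * 1024.0);
    printf("%s: %.1f MB available\n", path, avail_mb);
    return 0;
}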
It looks like you compiled Open MPI against the QLogic PSM libraries
-- I see the PSM MTL plugin available. Here's some text from the OMPI
v1.3 README that clarifies the situation:
- There are two MPI network models available: "ob1" and "cm". "ob1"
uses BTL ("Byte Transfer Layer") compone
On Jan 30, 2009, at 4:54 PM, Dirk Eddelbuettel wrote:
| > where things end in the loop over opal_list() elements. I still see a
| > fprintf() statement just before
| >
| > if (MCA_SUCCESS == component->mca_register_component_params()) {
| >
| > in the middle of the open_components function i
On Jan 31, 2009, at 11:39 AM, Ralph Castain wrote:
For anyone following this thread:
I have completed the IOF options discussed below. Specifically, I
have added the following:
* a new "timestamp-output" option that timestamp's each line of output
* a new "output-filename" option that redi
On Sat, Jan 31, 2009 at 6:27 PM, Reuti wrote:
> On 31.01.2009 at 08:49, Sangamesh B wrote:
>
>> On Fri, Jan 30, 2009 at 10:20 PM, Reuti
>> wrote:
>>>
>>> On 30.01.2009 at 15:02, Sangamesh B wrote:
>>>
Dear Open MPI,
Do you have a solution for the following problem with Open MPI (1.
Thanx for the info. It turned out to be a problem with the software, and not
an open-mpi issue.
Ted
--- On Sun, 2/1/09, Jeff Squyres wrote:
From: Jeff Squyres
Subject: Re: [OMPI users] Question about compatibility issues
To: ted...@wag.caltech.edu, "Open MPI Users"
On 01.02.2009 at 16:00, Sangamesh B wrote:
On Sat, Jan 31, 2009 at 6:27 PM, Reuti
wrote:
On 31.01.2009 at 08:49, Sangamesh B wrote:
On Fri, Jan 30, 2009 at 10:20 PM, Reuti
wrote:
On 30.01.2009 at 15:02, Sangamesh B wrote:
Dear Open MPI,
Do you have a solution for the following prob
I'm afraid we discovered a bug in optimized builds with r20392. Please
use any tarball with r20394 or above.
Sorry for the confusion
Ralph
On Feb 1, 2009, at 5:27 AM, Jeff Squyres wrote:
On Jan 31, 2009, at 11:39 AM, Ralph Castain wrote:
For anyone following this thread:
I have completed
On Sun, Feb 1, 2009 at 10:37 PM, Reuti wrote:
> On 01.02.2009 at 16:00, Sangamesh B wrote:
>
>> On Sat, Jan 31, 2009 at 6:27 PM, Reuti wrote:
>>>
>>> On 31.01.2009 at 08:49, Sangamesh B wrote:
>>>
On Fri, Jan 30, 2009 at 10:20 PM, Reuti
wrote:
>
> On 30.01.2009 at 15:02,