Hmmm...it -should- work, but I've never tried it on Windows. I will verify it
under Linux, but will have to defer to Shiqing to see if there is something
particular about the Windows environment.
On Nov 13, 2011, at 8:13 PM, Naor Movshovitz wrote:
> I have open-mpi v1.5.4, installed from the b
I just found out that there were missing updates for Windows in the
singleton module (they are in the trunk but not in the 1.5 branch). I'll make a CMR for this.
On 2011-11-14 1:45 PM, Ralph Castain wrote:
Hmmm...it -should- work, but I've never tried it on Windows. I will verify it
under Linux, but will have to
Hello,
I have a problem using OpenMPI 1.4.3 with PGI 11.8. A simple hello-world test
program segfaults, and ompi_info sometimes segfaults too. Using a
debugger, the problem seems to arise from libnuma:
http://imageshack.us/photo/my-images/822/stacktracesegfaultpgi11.png/
I tried to
Hi,
The problem I'm facing now is how to print information on computing nodes.
E.g. I've got 10 real computers wired into one cluster with pelicanhpc.
I need each one of them to print results independently on their
screens. How To?
It may be an easy task, but I'm new to this and didn't find proper
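For illustration, here is a minimal sketch (not part of the original message) of each rank identifying itself. Note that by default mpirun forwards the stdout of every rank back to the terminal where mpirun was launched, rather than to each node's own screen:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);

    // Each rank writes to its own stdout; mpirun normally collects this
    // output and shows it on the machine where mpirun was launched,
    // not on each compute node's local screen.
    std::cout << "rank " << rank << " on " << host << std::endl;

    MPI_Finalize();
    return 0;
}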
Hi,
Am 14.11.2011 um 19:54 schrieb Radomir Szewczyk:
> The problem I'm facing now is how to print information on computing nodes.
> E.g. I've got 10 real computers wired into one cluster with pelicanhpc.
> I need each one of them to print results independently on their
> screens. How To?
the std
So there is no solution? E.g. my 2 computers that are computing nodes
are placed in different rooms on different floors, and the target
user wants to monitor the progress of the computation independently,
which has to be printed on their LCD monitors.
2011/11/14 Reuti :
> Hi,
>
> Am 14.11.2011 um 1
On Nov 14, 2011, at 12:18 PM, Radomir Szewczyk wrote:
> So there is no solution? E.g. my 2 computers that are computing nodes
> are placed in different rooms on different floors, and the target
> user wants to monitor the progress of the computation independently,
> which has to be printed on thei
Let's say computing node no. 2 is dual-core and uses 2 processes; it
prints out only the solution for, let's say, processes no. 2 and 3, kind of like
if (id == 2 || id == 3) cout << "HW"; the rest ignores this
information. That's what I'm talking about. Thanks for your response.
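As a self-contained illustration of that snippet (assuming a mapping in which ranks 2 and 3 are the two processes placed on node no. 2):

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int id = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    // Only ranks 2 and 3 (the two processes assumed to run on node no. 2)
    // print the result; every other rank stays silent.
    if (id == 2 || id == 3) {
        std::cout << "HW from rank " << id << std::endl;
    }

    MPI_Finalize();
    return 0;
}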
2011/11/14 Ralph Castain :
>
>
On Nov 14, 2011, at 12:28 PM, Radomir Szewczyk wrote:
> Let's say computing node no. 2 is dual-core and uses 2 processes; it
> prints out only the solution for, let's say, processes no. 2 and 3, kind of like
> if (id == 2 || id == 3) cout << "HW"; the rest ignores this
> information. That's what I'm talking a
Am 14.11.2011 um 20:37 schrieb Ralph Castain:
>
> On Nov 14, 2011, at 12:28 PM, Radomir Szewczyk wrote:
>
>> Let's say computing node no. 2 is dual-core and uses 2 processes; it
>> prints out only the solution for, let's say, processes no. 2 and 3, kind of like
>> if (id == 2 || id == 3) cout << "HW"; the r
Hello:
A colleague and I have been running a large F90 application that does an
enormous number of mpi_bcast calls during execution. I deny any
responsibility for the design of the code and why it needs these calls,
but it is what we have inherited and have to work with.
Recently we ported the c
I'm trying to establish communication between two MPI processes using
MPI_Open_port / MPI_Publish_name / MPI_Comm_accept
in a server and
MPI_Lookup_name / MPI_Comm_connect
in a client.
The source code is in Fortran, and the client fails with some sort of
"malloc error".
It seems that the different
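For reference, a rough C++ sketch of the accept/connect pattern described above (the original code is Fortran; the service name "my_service" and the single-integer exchange are purely illustrative):

#include <mpi.h>
#include <cstring>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    const char* service = "my_service";   // hypothetical service name
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;

    bool is_server = (argc > 1 && std::strcmp(argv[1], "server") == 0);

    if (is_server) {
        MPI_Open_port(MPI_INFO_NULL, port);              // obtain a port string
        MPI_Publish_name(service, MPI_INFO_NULL, port);  // make it findable by name
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

        int answer = 42;
        MPI_Send(&answer, 1, MPI_INT, 0, 0, inter);

        MPI_Unpublish_name(service, MPI_INFO_NULL, port);
        MPI_Close_port(port);
    } else {
        MPI_Lookup_name(service, MPI_INFO_NULL, port);   // find the server's port
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

        int answer = 0;
        MPI_Recv(&answer, 1, MPI_INT, 0, 0, inter, MPI_STATUS_IGNORE);
        std::cout << "client received " << answer << std::endl;
    }

    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}

Note that for a client launched by a separate mpirun to find the published name, Open MPI generally requires a common name server such as ompi-server; alternatively, the port string returned by MPI_Open_port can be passed to the client out of band (file, command line) and MPI_Lookup_name skipped.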
Yes, this is well documented; it may be on the FAQ, but it has certainly come up on the
user list multiple times.
The problem is that one process falls behind, which causes it to begin
accumulating "unexpected messages" in its queue. This causes the matching logic
to run a little slower, thus making th
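One mitigation that is often suggested for this pattern (an illustration here, not necessarily what the rest of this reply recommends) is to resynchronize periodically so that a slow rank cannot accumulate an arbitrarily long queue of unexpected messages, e.g.:

#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    std::vector<double> buf(1024, 0.0);
    const int iterations = 100000;   // illustrative loop count

    for (int i = 0; i < iterations; ++i) {
        MPI_Bcast(&buf[0], static_cast<int>(buf.size()), MPI_DOUBLE, 0, MPI_COMM_WORLD);

        // Occasional barrier: keeps the ranks roughly in step so a slow
        // process cannot fall far behind and build up a long
        // unexpected-message queue, at the cost of some idle time.
        if (i % 1000 == 0) {
            MPI_Barrier(MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}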