Or just compiling with -g or -traceback (depending on the compiler) will
give you more information about the point of failure
in the error message.
On 04/15/2014 04:25 PM, Ralph Castain wrote:
Have you tried using a debugger to look at the resulting core file? It will
probably point you right at the problem. Most likely a case of overrunning
some array when #temps > 5
On Tue, Apr 15, 2014 at 10:46 AM, Oscar Mojica wrote:
> Hello everybody
>
> I implemented a parallel simulated annealing algorithm in Fortran.
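A minimal sketch (not Oscar's Fortran code) of the kind of out-of-bounds write Ralph suspects, together with the compile/debug steps suggested above. The file name, array name, and the limit of 5 are made up for illustration; a small stack overrun may not crash immediately, which is why AddressSanitizer is also mentioned.

/*
 * overrun_demo.c -- illustrative only, not the original program.
 * A buffer sized for 5 temperatures is overrun when more are requested,
 * the kind of bug that -g plus a core file (or -g -fsanitize=address on
 * GCC/Clang) pins to an exact source line.
 *
 * Hypothetical build/debug steps:
 *   mpicc -g overrun_demo.c -o overrun_demo
 *   ulimit -c unlimited         # allow core dumps
 *   mpirun -np 2 ./overrun_demo 8
 *   gdb ./overrun_demo core     # then "bt" to see where it died
 */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define MAX_TEMPS 5                                /* assumed hard limit */

int main(int argc, char *argv[])
{
    double temps[MAX_TEMPS];
    int ntemps = (argc > 1) ? atoi(argv[1]) : 8;   /* 8 > 5 triggers the bug */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* BUG: the loop bound comes from the input, not from MAX_TEMPS,
     * so temps[5..ntemps-1] are written past the end of the array.   */
    for (int i = 0; i < ntemps; i++)
        temps[i] = 300.0 - 10.0 * i;

    printf("rank %d filled %d temperatures, last = %g\n",
           rank, ntemps, temps[ntemps - 1]);
    MPI_Finalize();
    return 0;
}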
Using the instructions that you gave me, I actually managed to set up the one
that was already installed. I followed the commands you sent me and made it
work. It is an MPICH MPI. I feel somewhat bad about not installing Open
MPI, but I think I will in a couple of weeks. I have to finish some
simulations
Hi Djordje
That is great news.
Congrats on making it work!
Just out of curiosity: What did the trick?
Did you install Open MPI from source, or did you sort out
the various MPI flavors which were previously installed on your system?
Now the challenge is to add OpenMP and run WRF
in hybrid mode
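For reference, a minimal sketch of what hybrid mode means here: each MPI rank runs a team of OpenMP threads. This is a standalone toy example, not WRF code; the file name and the thread/rank counts in the comment are arbitrary.

/*
 * hybrid_hello.c -- minimal MPI + OpenMP hybrid example.
 *
 * Hypothetical build/run (Open MPI + GCC):
 *   mpicc -fopenmp hybrid_hello.c -o hybrid_hello
 *   OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid_hello
 */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int provided, rank, nranks;

    /* Ask for FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}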
Hi,
It is working now. It shows:
starting wrf task 0 of 4
starting wrf task 1 of 4
starting wrf task 2 of 4
starting wrf task 3 of 4
Hello everybody
I implemented a parallel simulated annealing algorithm in Fortran. The
algorithm is described as follows:
1. The MPI program initially generates P processes that have rank 0,1,...,P-1.
2. The MPI program generates a starting point and sends it to all processes, which set T=T0.
3. At the
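Oscar's program is in Fortran and the description above is cut off, but the pattern described so far (P ranks, a starting point broadcast to all of them, T set to T0, then independent annealing) looks roughly like the following C sketch. objective(), the cooling schedule, and all constants are placeholders, not taken from his code.

/*
 * sa_sketch.c -- illustrative skeleton only, not Oscar's code.
 * Rank 0 generates a starting point and broadcasts it; every rank then
 * sets T = T0 and runs its own annealing loop; the best cost is reduced
 * to rank 0 at the end.
 *
 * Hypothetical build/run:  mpicc sa_sketch.c -o sa_sketch -lm
 *                          mpirun -np 4 ./sa_sketch
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>

#define NDIM  10        /* assumed size of the model vector   */
#define T0    100.0     /* assumed initial temperature        */
#define TMIN  1e-3      /* assumed stopping temperature       */
#define ALPHA 0.95      /* assumed geometric cooling factor   */

static double uniform(void) { return (double)rand() / RAND_MAX; }

static double objective(const double *x)             /* placeholder cost */
{
    double s = 0.0;
    for (int i = 0; i < NDIM; i++) s += x[i] * x[i];
    return s;
}

int main(int argc, char *argv[])
{
    double x[NDIM], y[NDIM], T = T0;
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Step 2: rank 0 generates a starting point and sends it to everyone. */
    if (rank == 0) {
        srand(12345);
        for (int i = 0; i < NDIM; i++) x[i] = 2.0 * uniform() - 1.0;
    }
    MPI_Bcast(x, NDIM, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    srand(12345 + rank);               /* a different random walk per rank */
    double fx = objective(x), fbest = fx;

    while (T > TMIN) {                            /* independent annealing */
        for (int i = 0; i < NDIM; i++)
            y[i] = x[i] + 0.1 * (2.0 * uniform() - 1.0);    /* perturb */
        double fy = objective(y);
        if (fy < fx || uniform() < exp(-(fy - fx) / T)) {   /* Metropolis */
            for (int i = 0; i < NDIM; i++) x[i] = y[i];
            fx = fy;
            if (fx < fbest) fbest = fx;
        }
        T *= ALPHA;                                         /* cool down */
    }

    double global_best;
    MPI_Reduce(&fbest, &global_best, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("best cost over %d ranks: %g\n", nprocs, global_best);

    MPI_Finalize();
    return 0;
}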
Hi Djordje
"locate mpirun" shows items labled "intel", "mpich", and "openmpi",
maybe more.
Is it Ubuntu or Debian?
Anyway, if you got this mess from somebody else, instead of sorting it out,
it may save you time and headaches to install Open MPI from source.
Since it is a single machine, there
Have you tried a typical benchmark (e.g., NetPipe or OMB) to ensure the problem
isn't in your program? Outside of that, you might want to explicitly tell it to
--bind-to core just to be sure it does so - it's supposed to do that by
default, but might as well be sure. You can check by adding --report-bindings.
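As a complement to --report-bindings, here is a small sketch (Linux-specific, using glibc's sched_getcpu()) that prints where each rank actually ends up running; the file name and the exact mpirun invocation in the comment are illustrative.

/*
 * where_am_i.c -- cross-check of the binding reported by
 * "mpirun --bind-to core --report-bindings".
 *
 * Hypothetical usage:
 *   mpicc where_am_i.c -o where_am_i
 *   mpirun --bind-to core --report-bindings -np 4 ./where_am_i
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    char host[256];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));

    printf("rank %d on %s, cpu %d\n", rank, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}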
Hi Rob,
The applications of the two users in question are different; I haven't
looked through much of either code. I can respond to your highlighted
situations in sequence:
>- everywhere in NFS. If you have a Lustre file system exported to some
>clients as NFS, you'll get NFS (er, that might no
On Apr 15, 2014, at 8:35 AM, Marco Atzeri wrote:
> on 64bit 1.7.5,
> as Symantec Endpoint Protection just decided
> that a portion of 32bit MPI is a Trojan...
It's the infamous MPI trojan. We take over your computer and use it to help
cure cancer.
:p
--
Jeff Squyres
jsquy...@cisco.com
This is the simple MPI program (test.c) I was talking about:
#include <stdio.h>
#include <mpi.h>
int main(int argc, char* argv[]) {
    int my_rank; /* rank of process */
    int p;       /* number of processes */
    /* start up MPI */
    MPI_Init(&argc, &argv);
    /* find out process rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    /* the archived excerpt is cut off here; the rest is a standard reconstruction */
    MPI_Comm_size(MPI_COMM_WORLD, &p);  /* find out number of processes */
    printf("Hello from process %d of %d\n", my_rank, p);
    MPI_Finalize();                     /* shut down MPI */
    return 0;
}
Hi,
I am trying to benchmark Open MPI performance on a 10G Ethernet network
between two hosts. The benchmark numbers are lower than expected: the
maximum bandwidth achieved by OMPI-C is 5678 Mbps, and I was expecting
around 9000+ Mbps. Moreover, the latency is also quite a bit higher than
expected
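For a quick sanity check independent of any benchmark suite, a minimal ping-pong between two ranks can be run across the two hosts. This is not the benchmark the poster used; the message size, repetition count, and host names are arbitrary assumptions.

/*
 * pingpong.c -- rough bandwidth check between rank 0 and rank 1.
 *
 * Hypothetical usage across two hosts:
 *   mpicc pingpong.c -o pingpong
 *   mpirun -np 2 -host hostA,hostB ./pingpong
 */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const int nbytes = 4 * 1024 * 1024;   /* 4 MB messages */
    const int reps = 100;
    char *buf = malloc(nbytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* two messages of nbytes cross the wire per iteration */
        double gbits = 2.0 * reps * (double)nbytes * 8.0 / 1e9;
        printf("~%.2f Gbit/s (%d round trips of %d bytes)\n",
               gbits / (t1 - t0), reps, nbytes);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}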