I'll look at the configure script and
see what's going wrong.
On Sep 24, 2015, at 4:51 PM, Fabrice Roy wrote:
Hi,
I have made some other tests. I don't know if it can help you but here is what
I observed.
Using the array constructor [] solves the problem for a scalar, as someone wrote
about the mpi_f08 bindings and the Intel 2016 compilers.
It looks like configure is choosing to generate a different pragma for Intel
2016 vs. Intel 2015 compilers, and that's causing a problem.
Let me look into this a little more...
On Sep 24, 2015, at 11:09 AM, Fabrice Roy wrote:
Hello,
Note that your program compiles and runs
just fine if you use the mpi_f08 module (!)
Cheers,
Gilles
On 9/24/2015 1:00 AM, Fabrice Roy wrote:
program testmpi
use mpi
implicit none
integer :: pid
integer :: ierr
integer :: tok
call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world, pid, ierr)
compilers and both
versions of the test code (with tok declared as an integer or as an
integer, dimension(1)) compile and execute.
Open MPI was configured with the same options with both compilers.
Do you have any idea how I could solve this problem?
Thanks,
Fabrice Roy
--
Fabrice Roy
I managed to hit on this solution by guesswork, but it's quite a
relief to know that its correctness is actually mandated by the MPI
standard and not just my dumb luck.
Thanks again,
---
Roy Stogner
In this case I'm seeing failures with both MPICH2 and
Open MPI, and so I've got to assume my own code is at fault. Any help
would be appreciated. If there's anything I can do to make the issue
easier to replicate please let me know.
Thanks,
---
Roy Stogner

#include
#include
#include
   Segmentation fault. */
//sleep(2); //un-comment this line to have the sleep, and avoid the core dumps.
/* shut down MPI */
MPI_Finalize();
}
return 0;
}
Can anyone come to the rescue?
Thank you,
Roy Avidor
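For context, here is a minimal, self-contained sketch of the shape of program the
fragment above appears to come from. The headers and the body of the original were
cut off by the archive, so everything other than the commented-out sleep() and the
MPI_Finalize()/return tail is an assumption on my part:

    #include <stdio.h>
    #include <unistd.h>   /* for sleep() */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* ... application work would go here ... */
        printf("rank %d of %d\n", rank, size);

        /* The poster reports that un-commenting this sleep avoids the
           segmentation fault / core dump seen at shutdown. */
        /* sleep(2); */

        /* shut down MPI */
        MPI_Finalize();
        return 0;
    }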
in build option over the last
couple of years.
I've attached the logs for my "configure" and "make all" steps. Our email
filter will not allow me to send zipped files, so I've attached the two log
files. I'd appreciate any advice.
Thank you,
Roy
Hi all,
I'm trying to compile Open MPI 1.4.2 under Cygwin 1.7.5-1.
After ./configure I run make, and after some time I always get this
error. I've tried "make clean" and "make" again, but that doesn't
help. It looks to me like I have all the requirements of the
README.Windows file (Cygwin and libtool
The Computer Center, University of Tromsø, N-9037 TROMSØ Norway.
phone:+47 77 64 41 07, fax:+47 77 64 41 00
Roy Dragseth, Team Leader, High Performance Computing
Direct call: +47 77 64 62 56. email: roy.drags...@uit.no
g issue though...
Regards,
r.
--
The Computer Center, University of Tromsø, N-9037 TROMSØ Norway.
phone:+47 77 64 41 07, fax:+47 77 64 41 00
Roy Dragseth, Team Leader, High Performance Computing
Direct call: +47 77 64 62 56. email: roy.drags...@uit.no
Open MPI automatically picks up the PE_NODEFILE if it detects that it is launched
within an SGE job. Would it be possible to have the same functionality for
Torque? The code looks a bit too complex at first sight for me to fix this
myself.
Best regards,
Roy.
--
The Computer Center, University of Tromsø, N-9037 TROMSØ Norway.
> only has those 2 devices. (all of the above assume that all your eth0's are
> on one subnet, all your eth1's are on another subnet, ...etc.)
>
> Does that work for you?
>
>
>
> On Aug 25, 2009, at 7:14 PM, Jayanta Roy wrote:
>
> Hi,
>>
>> I am using
Hi,
I am using Open MPI (version 1.2.2) for MPI data transfer using non-blocking
MPI calls like MPI_Isend, MPI_Irecv, etc. I am using "--mca
btl_tcp_if_include eth0,eth1" to use both eth links for data transfer
among the 48 nodes. Now I have added eth2 and eth3 links on the 32 compute
nodes. My aim
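For reference, a minimal sketch of the kind of non-blocking exchange described
above: two ranks trade a buffer with MPI_Isend/MPI_Irecv and complete both
requests with MPI_Waitall. The message size, tag, and rank pairing are arbitrary
choices of mine, not the poster's code; with Open MPI's TCP BTL, large messages
like this can be striped across the interfaces listed in btl_tcp_if_include.

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const int N = 1 << 20;          /* arbitrary message size: 1 Mi ints */
        int rank;
        int *sendbuf, *recvbuf;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        sendbuf = calloc(N, sizeof(int));
        recvbuf = calloc(N, sizeof(int));

        /* Pair up ranks (0<->1, 2<->3, ...); assumes an even number of ranks.
           Post the receive first, then the send, then wait for both. */
        int peer = rank ^ 1;
        MPI_Irecv(recvbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }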
resolve to something in the Open MPI libraries and the
example runs without crashing.
Regards,
r.
--
The Computer Center, University of Tromsø, N-9037 TROMSØ Norway.
phone:+47 77 64 41 07, fax:+47 77 64 41 00
Roy Dragseth, Team Leader, High Performance Computing
Direct call: +47 77 64 62 56. email: roy.drags...@uit.no
Hi,
I was trying to install openmpi-1.2.2 under a 2.4.32 kernel.
./configure --prefix=/mnt/shared/jroy/openmpi-1.2.2/ CC=icc CXX=icpc
F77=ifort FC=ifort
make all install
It installed successfully, but during mpirun I got...
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile test_nodes
.
Dear Rainer and Adrian,
Thank you lot for the help. It works. I was trying this for long time but
didn't notice the mistakes. I can't understand how can I overlooked that!
Regards,
Jayanta
On 5/15/07, Adrian Knoth wrote:
On Mon, May 14, 2007 at 11:59:18PM +0530, Jayanta Roy wrote:
Hi,
In my 4-node cluster I want to run two MPI_Reduce operations on two communicators
(one using Node1 and Node2, the other using Node3 and Node4).
To create the communicators I used ...
MPI_Comm MPI_COMM_G1, MPI_COMM_G2;
MPI_Group g0, g1, g2;
MPI_Comm_group(MPI_COMM_WORLD,&g0);
MPI_Group_incl(g0,g_size,&r_array[0
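The excerpt is cut off above, so here is a hedged sketch of the complete pattern it
appears to be building: two groups carved out of MPI_COMM_WORLD, each turned into a
communicator, and an independent MPI_Reduce on each. The communicator and group
names follow the excerpt; the rank lists, the MPI_Comm_create calls, and the
reductions are my additions and assume a 4-rank job with ranks 0-1 on Node1/Node2
and ranks 2-3 on Node3/Node4.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm MPI_COMM_G1 = MPI_COMM_NULL, MPI_COMM_G2 = MPI_COMM_NULL;
        MPI_Group g0, g1, g2;
        int ranks1[2] = {0, 1};     /* assumed: ranks on Node1, Node2 */
        int ranks2[2] = {2, 3};     /* assumed: ranks on Node3, Node4 */
        int rank, grank, val, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Build two groups from MPI_COMM_WORLD and turn each into a communicator.
           MPI_Comm_create is collective over MPI_COMM_WORLD, so every rank calls
           it for both groups; ranks outside a group get MPI_COMM_NULL back. */
        MPI_Comm_group(MPI_COMM_WORLD, &g0);
        MPI_Group_incl(g0, 2, ranks1, &g1);
        MPI_Group_incl(g0, 2, ranks2, &g2);
        MPI_Comm_create(MPI_COMM_WORLD, g1, &MPI_COMM_G1);
        MPI_Comm_create(MPI_COMM_WORLD, g2, &MPI_COMM_G2);

        /* Each pair of nodes now reduces independently on its own communicator. */
        val = rank + 1;
        if (MPI_COMM_G1 != MPI_COMM_NULL) {
            MPI_Reduce(&val, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_G1);
            MPI_Comm_rank(MPI_COMM_G1, &grank);
            if (grank == 0) printf("group 1 sum = %d\n", sum);
        }
        if (MPI_COMM_G2 != MPI_COMM_NULL) {
            MPI_Reduce(&val, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_G2);
            MPI_Comm_rank(MPI_COMM_G2, &grank);
            if (grank == 0) printf("group 2 sum = %d\n", sum);
        }

        if (MPI_COMM_G1 != MPI_COMM_NULL) MPI_Comm_free(&MPI_COMM_G1);
        if (MPI_COMM_G2 != MPI_COMM_NULL) MPI_Comm_free(&MPI_COMM_G2);
        MPI_Group_free(&g1);
        MPI_Group_free(&g2);
        MPI_Group_free(&g0);
        MPI_Finalize();
        return 0;
    }

An alternative with less bookkeeping is MPI_Comm_split(MPI_COMM_WORLD, rank / 2,
rank, &newcomm), which produces the same two communicators directly.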
Hi,
To optimize our network throughput we set jumbo frames = 8000. The transfers
go smoothly, but after a few minutes we see a drastic drop in network
throughput; it looks like some kind of deadlock in the network transfer
(a slowdown by a factor of 100!). This situation does not happen if
On Oct 23, 2006, at 4:56 AM, Jayanta Roy wrote:
Hi,
Some time ago I posted doubts about making full use of the dual gigabit support.
See, I get a ~140 MB/s full-duplex transfer rate in each of the following
runs.
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
w
routines in ipv4 processing and recompile the Kernel, if you are familiar
with Kernel building and your OS is Linux.
On 10/23/06, Jayanta Roy wrote:
Hi,
Some time ago I posted doubts about making full use of the dual gigabit support.
See, I get a ~140 MB/s full-duplex transfer rate in each of the following
runs.
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
Hi,
I was running mpirun on the Linux cluster we have.
mpirun -n 5 -bynode -hostfile test_nodes a.out
Occasionally, just after MPI initialization, I get the following error:
rank: 1 of: 5
rank: 4 of: 5
rank: 3 of: 5
rank: 0 of: 5
rank: 2
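For what it's worth, output like the above is what a minimal program along these
lines would print; this is a reconstruction of mine, not the poster's actual a.out:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank reports itself; the lines arrive in arbitrary order,
           just as in the output quoted above. */
        printf("rank: %d of: %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }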
you are going to be
limited; memory bandwidth could also be the bottleneck.
Thanks,
Galen
Jayanta Roy wrote:
Hi,
In between two nodes I have dual Gigabit ethernet full duplex links. I was
doing benchmarking using non-blocking MPI send and receive. But I am
getting only speed corresponds
the ports, then why am I not getting full throughput from the dual Gigabit
Ethernet ports? Can anyone please help me with this?
Regards,
Jayanta
Jayanta Roy
National Centre for Radio Astrophysics | Phone : +91-20-25697107
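For anyone who wants to reproduce the measurement, here is a rough sketch of how a
per-direction transfer rate like the ~140 MB/s figure mentioned earlier can be
obtained: two ranks exchange a large buffer repeatedly with non-blocking calls and
the loop is timed with MPI_Wtime. The message size, repetition count, and MB/s
arithmetic are my choices, not the original benchmark, and the sketch assumes
exactly two ranks.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const int N = 1 << 22;               /* 4 Mi ints = 16 MiB per direction */
        const int REPS = 50;
        int rank, i;
        int *sendbuf, *recvbuf;
        MPI_Request reqs[2];
        double t0, t1, mbytes, rate;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        sendbuf = calloc(N, sizeof(int));
        recvbuf = calloc(N, sizeof(int));

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            /* full-duplex exchange between rank 0 and rank 1 */
            MPI_Irecv(recvbuf, N, MPI_INT, 1 - rank, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(sendbuf, N, MPI_INT, 1 - rank, 0, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        }
        t1 = MPI_Wtime();

        mbytes = (double)REPS * N * sizeof(int) / 1.0e6;   /* MB sent per rank */
        rate = mbytes / (t1 - t0);
        if (rank == 0)
            printf("%.1f MB/s per direction over %d exchanges\n", rate, REPS);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }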