Jeff,
I tried building 1.7.1 on my Ubuntu system. The default gfortran is
v4.6.3, so configure won't enable the mpi_f08 module build. I also
tried a three-week-old snapshot of the gfortran 4.9 trunk. This has
Tobias's new TYPE(*) in it, but not his latest !GCC$ attributes
NO_ARG_CHECK stuff
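For reference, a minimal Fortran sketch of the two gfortran features mentioned
above (the subroutine name and arguments are hypothetical, not code from the
Open MPI tree): TYPE(*) declares an assumed-type dummy argument, and the
!GCC$ ATTRIBUTES NO_ARG_CHECK directive tells gfortran to skip type/kind/rank
checking on it, which is what the mpi_f08 choice-buffer interfaces need.

  ! Hypothetical sketch of an mpi_f08-style choice buffer.
  subroutine example_send(buf, count)
    implicit none
    !GCC$ ATTRIBUTES NO_ARG_CHECK :: buf
    type(*), dimension(*) :: buf   ! assumed-type, assumed-size "choice" buffer
    integer, intent(in)   :: count
    ! ...a real binding would hand buf through to the underlying C MPI routine...
  end subroutine example_send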
On Apr 25, 2013, at 5:33 PM, Vladimir Yamshchikov wrote:
> $NSLOTS is what is requested by -pe openmpi in the script; my
> understanding is that by default it is threads.
No - it is the number of processing elements (typically cores) that are
assigned to your job.
> $NSLOTS processes each spinning
$NSLOTS is what is requested by -pe openmpi in the script; my
understanding is that by default it is threads. $NSLOTS processes each
spinning -t threads is not what is wanted, as each process could spin
off more threads than there are physical or logical cores per node, thus
degrading performance or even
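To make the processes-times-threads arithmetic concrete, here is a minimal
hybrid MPI+OpenMP Fortran sketch (hypothetical, not from this thread; it
assumes the mpi and omp_lib modules and an OpenMP compile flag such as
-fopenmp). Each rank reports how many threads it would spin off; if
ranks * threads exceeds the slots (cores) SGE granted, the nodes are
oversubscribed.

  program hybrid_check
    use mpi
    use omp_lib
    implicit none
    integer :: rank, nprocs, nthreads, ierror

    call MPI_Init(ierror)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)

    ! Threads this process would use (e.g. whatever a "-t" option sets,
    ! typically via OMP_NUM_THREADS).
    nthreads = omp_get_max_threads()
    print '(a,i0,a,i0,a,i0,a)', 'rank ', rank, ' of ', nprocs, &
          ' will use ', nthreads, ' threads'

    if (rank == 0) print '(a,i0)', 'total threads across the job: ', nprocs * nthreads

    call MPI_Finalize(ierror)
  end program hybrid_check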
To follow up for the web archives...
We fixed this bug off-list. It will be included in 1.6.5 and (likely) 1.7.2.
On Apr 5, 2013, at 3:18 PM, Eric Chamberland
wrote:
> Hi again,
>
> I have attached a very small example which raise the assertion.
>
> The problem is arising from a process wh
Depends on what NSLOTS is and what your program's "-t" option does :-)
Assuming your "-t" tells your program the number of threads to start, then the
command you show will execute NSLOTS processes, each of which will
spin off the indicated number of threads.
On Apr 25, 2013, at 11:39
I'm guessing you're the alter ego of
http://www.open-mpi.org/community/lists/devel/2013/04/12309.php? :-)
My first suggestion to you is to upgrade your version of Open MPI -- 1.4.0 is
ancient. Can you upgrade to 1.6.4?
On Apr 25, 2013, at 2:08 PM, Padma Pavani wrote:
> Hi Team,
>
> I am f
Hi all,
The FAQ has excellent entries on how to schedule non-MPI jobs on an SGE
cluster, yet only simple jobs are exemplified. But what about jobs that can be
run in multithreaded mode, say by specifying an option -t number_of_threads? In
other words, consider a command in an example qsub script:
..
Hi Team,
I am facing some problem while running HPL benchmark.
I am using Intel MPI 4.0.1 with Qlogic-OFED-1.5.4.1 to run the benchmark, and
I also tried openmpi-1.4.0 but I get the same error.
Error File :
[compute-0-1.local:06936] [[14544,1],25] ORTE_ERROR_LOG: A message is
attempting to be
"Elken, Tom" writes:
>> > Intel has acquired the InfiniBand assets of QLogic
>> > about a year ago. These SDR HCAs are no longer supported, but should
>> > still work.
> [Tom]
> I guess the more important part of what I wrote is that " These SDR HCAs are
> no longer supported" :)
Sure, though
On Apr 25, 2013, at 9:11 AM, Dave Love wrote:
> Ralph Castain writes:
>
>> On Apr 24, 2013, at 8:58 AM, Dave Love wrote:
>>
>>> "Elken, Tom" writes:
>>>
> I have seen it recommended to use psm instead of openib for QLogic cards.
[Tom]
Yes. PSM will perform better and be more
Ralph Castain writes:
> On Apr 24, 2013, at 8:58 AM, Dave Love wrote:
>
>> "Elken, Tom" writes:
>>
I have seen it recommended to use psm instead of openib for QLogic cards.
>>> [Tom]
>>> Yes. PSM will perform better and be more stable when running OpenMPI
>>> than using verbs.
>>
>> Bu
Hi Jeff,
I just downloaded 1.7.1. The new files in the use-mpi-f08 directory look great!
However, the use-mpi-tkr and use-mpi-ignore-tkr directories don't fare so
well. Literally all the interfaces are still 'ierr'.
While I realize that both the F90 mpi module and interface checking
were optional pr
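For context, one reading of the 'ierr' complaint (an assumption, not text from
the original mail): MPI names the error dummy argument ierror, so code that
passes it by keyword only compiles against interfaces that use that name. A
minimal sketch:

  program keyword_ierror
    use mpi
    implicit none
    integer :: rc
    ! These keyword-argument calls compile only if the mpi module declares
    ! the dummy as "ierror"; interfaces that still say "ierr" reject them.
    call MPI_Init(ierror=rc)
    call MPI_Finalize(ierror=rc)
  end program keyword_ierror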
we apologize if you receive multiple copies of this message
===
CALL FOR PAPERS
2013 Workshop on
Middleware for HPC and Big Data Systems
MHPC '13
as part of Euro-Par 2013, Aachen, Germany
=