Thanks for your answer.
Yes, I mistakenly printed the return value of the function rather than
atomicity.
My real problem is that I want to access the fields from the MPI_File
structure other than the ones provided by the API e.g. the fd_sys.
Atomicity was just one example I used to explain my problem.
not off the top of my head. However, as noted earlier, there is absolutely no
advantage to a singleton vs. an mpirun start - all the singleton does is
immediately fork/exec "mpirun" to support the rest of the job. In both cases,
you have a daemon running the job - the only difference is in the number of
On Thu, Aug 30, 2012 at 5:12 AM, Jeff Squyres wrote:
> On Aug 29, 2012, at 2:25 PM, Yong Qin wrote:
>
>> This issue has been observed on OMPI 1.6 and 1.6.1 with openib btl but
>> not on 1.4.5 (tcp btl is always fine). The application is VASP and
>> only one specific dataset is identified during the testing, and the OS
>> is SL 6.2 with kernel 2.6.32-220.23.
Thanks a lot!
Z Koza
2012/8/30 Gus Correa
> Hi Zbigniew
>
> Besides the OpenMPI processor affinity capability that Jeff mentioned,
> if your Curie cluster has a resource manager [Torque, SGE, etc],
> your job submission script to the resource manager/queue system
> should specifically request a single node, for the test that you have in
> mind.
Hi Zbigniew
Besides the OpenMPI processor affinity capability that Jeff mentioned,
if your Curie cluster has a resource manager [Torque, SGE, etc],
your job submission script to the resource manager/queue system
should specifically request a single node, for the test that you have in
mind.
On Aug 30, 2012, at 11:26 AM, Tom Rosmond wrote:
> I just built Openmpi 1.6.1 with the '--with-libnuma=(dir)' and got a
> 'WARNING: unrecognized options' message. I am running on a NUMA
> architecture and have needed this feature with earlier Openmpi releases.
> Is the support now native in the 1.6 versions? If not, what should I
> do?
Hi all -
I'm writing a program which will start in a single process. This
program will call init (THREAD_MULTIPLE), and finalize. In between,
it will call spawn an unknown number of times (think of the program as
a daemon that launches jobs over and over again).
I'm running a simple example rig
Hi,
Modern Fortran has a feature called ISO_C_BINDING. It essentially
allows you to declare a binding for an external function so that it can
be called from a Fortran program. You only need to provide a
corresponding interface. The ISO_C_BINDING module contains C-like
extensions to the type system, but you don't need them, as yo
In the event that I need to get this up-and-running soon (I do need
something working within 2 weeks), can you recommend an older version
where this is expected to work?
Thanks,
Brian
On Tue, Aug 28, 2012 at 4:58 PM, Brian Budge wrote:
> Thanks!
>
> On Tue, Aug 28, 2012 at 4:57 PM, Ralph Castain wrote:
I just built Openmpi 1.6.1 with the '--with-libnuma=(dir)' and got a
'WARNING: unrecognized options' message. I am running on a NUMA
architecture and have needed this feature with earlier Openmpi releases.
Is the support now native in the 1.6 versions? If not, what should I
do?
T. Rosmond
Hi Shiqing,
> Could you please send the output of the ompi_info command under your
> 64-bit env? And could you please also check if you have CCP or HPC Pack
> installed? An incorrect configuration there might cause Open MPI to hang.
I haven't installed Microsoft's Compute Cluster Pack or High Performance Computing (HPC) Pack.
On Aug 29, 2012, at 2:25 PM, Yong Qin wrote:
> This issue has been observed on OMPI 1.6 and 1.6.1 with openib btl but
> not on 1.4.5 (tcp btl is always fine). The application is VASP and
> only one specific dataset is identified during the testing, and the OS
> is SL 6.2 with kernel 2.6.32-220.23.
In the OMPI v1.6 series, you can use the processor affinity options. And you
can use --report-bindings to show exactly where processes were bound. For
example:
-
% mpirun -np 4 --bind-to-core --report-bindings -bycore uptime
[svbu-mpi056:18904] MCW rank 0 bound to socket 0[core 0]: [B . .
Good question. You might want to ask in some Fortran-based user forums. :-)
This list is for support of Open MPI, not necessarily any direct language
support.
But in that light, I'll warn you that fork and friends are not directly
supported in MPI applications (e.g., it can cause problems if
On Aug 30, 2012, at 5:05 AM, Ammar Ahmad Awan wrote:
> int atomicity;
>
> // method 1
> printf("atomicity : %d", MPI_File_get_atomicity(fh,&atomicity));
I think you want:
int atomicity;
MPI_File_get_atomicity(fh, &atomicity);
printf("atomicity: %d\n", atomicity);
MPI_File is an opaque structure
Dear users,
How can one use fork()- and vfork()-type functions in Fortran programming?
Thanking you in advance
--
Sudhir Kumar Sahoo
Ph.D Scholar
Dept. Of Chemistry
IIT Kanpur-208016
Hi,
consider this specification:
"Curie fat consists in 360 nodes which contains 4 eight cores CPU
Nehalem-EX clocked at 2.27 GHz, let 32 cores / node and 11520 cores for
the full fat configuration"
Suppose I would like to run some performance tests just on a single
processor rather than 4
Hi Siegmar,
Could you please send the output of the ompi_info command under your
64-bit env? And could you please also check if you have CCP or HPC Pack
installed? An incorrect configuration there might cause Open MPI to hang.
Regards,
Shiqing
On 2012-08-29 12:33 PM, Siegmar Gross wrote:
Hi
Dear All,
I am using a simple program to access MPI_File attributes. I know that the
API provides functions such as MPI_File_get_atomicity() but is there a way
to access them directly through code?
Example:
int atomicity;
// method 1
printf("atomicity : %d", MPI_File_get_atomicity(fh,&atomicit
Paul, I tried NetPipeMPI (belatedly, because their site was down for a
couple of days).
The results show a max of 7.4 Gb/s at 8388605 bytes which seems fine.
But my program still runs slowly and stalls occasionally.
I'm using one buffer per process - I assume this is OK.
Is it of any signific