d all future versions. It can always be
> resurrected from SVN history if someone wants to pick up this effort again
> in the future.
>
>
> On Dec 6, 2012, at 11:07 AM, Damien wrote:
>
> > So far, I count three people interested in OpenMPI on Windows. That's
>
All
Since I did not see any Microsoft/other 'official' folks pick up the ball,
let me step up. I have been lurking in this list for quite a while and I am
a generic scientific programmer (i.e. I use many frameworks such as
OpenCL/OpenMP etc., not just MPI).
Although I am primarily a Linux user, I do
Dear OpenMPI developers
I'd like to add my 2 cents that this would be a very desirable feature
enhancement for me as well (and perhaps others).
Best regards
Durga
On Tue, Aug 14, 2012 at 4:29 PM, Zbigniew Koza wrote:
> Hi,
>
> I've just found this information on nVidia's plans regarding enha
This is just a thought:
according to the system() man page, SIGCHLD is blocked during the
execution of the command. Since you are executing your command as a
daemon in the background, it will be permanently blocked.
Does the OpenMPI daemon depend on SIGCHLD in any way? That is about the
only differ
I think this is a *great* topic for discussion, so let me throw some
fuel to the fire: the mechanism described in the blog (that makes
perfect sense) is fine for (N)UMA shared memory architectures. But
will it work for asymmetric architectures such as the Cell BE or
discrete GPUs where the data bet
Since /tmp is mounted across a network and /dev/shm is (always) local,
/dev/shm seems to be the right place for shared memory transactions.
If you create temporary files using mktemp, are they created in
/dev/shm or in /tmp?
On Thu, Nov 3, 2011 at 11:50 AM, Bogdan Costescu wrote:
> On Thu, Nov 3,
Any particular reason these calls don't nest? In some other HPC-like
paradigms (e.g. VSIPL) such calls are allowed to nest (i.e. only the
finalize() that matches the first init() will destroy allocated
resources.)
Just a curiosity question, doesn't really concern me in any particular way.
Best re
Is there any provision/future plans to add OpenCL support as well?
CUDA is an Nvidia-only technology, so it might be a bit limiting in
some cases.
Best regards
Durga
On Thu, Oct 27, 2011 at 2:45 PM, Rolf vandeVaart wrote:
> Actually, that is not quite right. From the FAQ:
>
>
>
> “This feature
If the mmap() pages are created with MAP_SHARED, then they should be
sharable with other processes on the same node, shouldn't they? MPI
processes are just like any other process, aren't they? Would one of
the MPI gurus please comment?
Regards
Durga
On Mon, Oct 17, 2011 at 9:45 AM, Gabriele Fatigati w
Is anything done at the kernel level portable (e.g. to Windows)? It
*can* be, in principle at least (by putting appropriate #ifdef's in
the code), but I am wondering if it is in reality.
Also, in 2005 there was an attempt to implement SSI (Single System
Image) functionality to the then-current 2.6
A follow-up question (and pardon if this sounds stupid) is this:
If I want to make my process multithreaded, BUT only one thread has
anything to do with MPI (for example, using OpenMP inside MPI), then
the results will be correct EVEN IF #1 or #2 of Eugene holds true. Is
this correct?
Thanks
Durg
I'd like to add to this question the following:
If I compile with the --enable-heterogeneous flag for different
*architectures* (I have a mix of old 32-bit x86, newer x86_64 and some
Cell BE based boxes (PS3)), would I be able to form an MPD ring between
all these different machines?
Best regards
Durga
I think the 'middle ground' approach can be simplified even further if
the data file is in a shared device (e.g. NFS/Samba mount) that can be
mounted at the same location of the file system tree on all nodes. I
have never tried it, though, and mmap()'ing a non-POSIX-compliant file
system such as Sam
Is the data coming from a read-only file? In that case, a better way
might be to memory map that file in the root process and share the map
pointer in all the slave threads. This, like shared memory, will work
only for processes within a node, of course.
On Fri, Sep 24, 2010 at 3:46 AM, Andrei Fo
Hi Brad/others
Sorry for waking this very stale thread, but I am researching the
prospects of CellBE based supercomputing and I found this old email a
promising lead.
My question is: what was the reason for choosing to mix x86-based
AMD cores with PPC 970-based Cell? Was the Cell based computer
This would be a very welcome new feature for me as well. My two
thumbs up when it happens.
Best regards
Durga
On Tue, Apr 13, 2010 at 10:28 AM, Ralph Castain wrote:
> Not right now, but coming later this year...
>
> On Apr 13, 2010, at 7:21 AM, Jürgen Kaiser wrote:
>
>> Hi,
>>
>> Can I force
e.
>
>
> On Nov 15, 2009, at 14:39 , Durga Choudhury wrote:
>
>> I apologize for dragging in this conversation in a different
>> direction, but I'd be very interested to know why the behavior with
>> the Playstation is different from other architectures. The PS3 b
I apologize for dragging in this conversation in a different
direction, but I'd be very interested to know why the behavior with
the Playstation is different from other architectures. The PS3 box has
a single gigabit ethernet and no expansion ports, so I'd assume its
behavior would be no differen
>> Can you please tell how to write such programs in Open MPI.
>>
>> Thanks in advance.
>>
>> Regards,
>> On Thu, Jul 9, 2009 at 8:30 PM, Durga Choudhury wrote:
>>>
>>> Although I have perhaps the least experience on the topic in this
>>> list,
The 'system' command will fork a separate process to run. If I
remember correctly, forking within MPI can lead to undefined behavior.
Can someone in OpenMPI development team clarify?
What I don't understand is: why is your TCP network so unstable that
you are worried about reachability? For MPI to
Although I have perhaps the least experience on the topic in this
list, I will take a shot; more experienced people, please correct me:
The MPI standard specifies communication mechanisms, not fault tolerance
at any level. You may achieve network fault tolerance at the IP level by
implementing 'equal cost mul
Josh
This actually is a concern addressed to all the authors/OpenMPI
contributors. The links to IEEE Xplore or the ACM digital library require
a subscription which, unfortunately, not all the list subscribers have.
Would it be a copyright violation to post the actual paper/article to
the list instead of just a link?
You could use a separate namespace (if you are using C++) and define
your functions there...
Durga
On Wed, May 13, 2009 at 1:20 PM, Le Duy Khanh wrote:
> Dear,
>
> I intend to override some MPI functions such as MPI_Init, MPI_Recv... but I
> don't want to dig into OpenMPI source code. Therefore,
Jeff
I would perhaps remember your statement as part of a religious scripture!
Request to you and everyone else: if you know of a good book and/or
online tutorial on 'how to write large parallel scientific programs',
I am sure it would be of immense use to everyone in this list.
Best regards
Automatically striping large messages across multiple NICs is certainly a
very nice feature; I was not aware that OpenMPI does this transparently. (I
wonder if other MPI implementations do this or not). However, I have the
following concern: Since the communication over an ethernet NIC is most
like
Did you export your variables? Otherwise the child shell that forks the MPI
process will not inherit them.
On 8/14/07, Rodrigo Faccioli wrote:
>
> Thanks, Tim Prins for your email.
>
> However It did't resolve my problem.
>
> I set the enviroment variable on my Kubuntu Linux:
>
> faccioli@facciol
Even simpler, you could just write a macro wrapper around MPI_Send (as
opposed to C code). However, if your calls are happening inside a
precompiled library (and you don't have source code for it or don't want to
recompile it) then this won't work and you'd want a real profiler. However,
I don't t
potential of the resources
(i.e., memory) ending up associated with a different processor than the
one the process gets pinned to. That isn't a big deal on Intel
machines, but is a major issue for AMD processors.
Just my $0.02, anyway.
Brian
On Nov 28, 2006, at 6:09 PM, Durga Choudhury wrote:
&
Jeff (and everybody else)
First of all, pardon me if this is a stupid comment; I am learning the
nuts-and-bolts of parallel programming; but my comment is as follows:
Why can't this be done *outside* openMPI, by calling Linux's processor
affinity APIs directly? I work with a blade server kind of
Chev
Interesting question; I too would like to hear about it from the experts in
this forum. However, off the top of my head, I have the following advice for
you.
Yes, you could share the memory between processes using the shm_xxx system
calls of unix. However, it would be a lot easier if you us
Calin
Your questions don't belong in this forum. You either need to be computer
literate (your questions are basic OS related) or delegate this task to
someone more experienced.
Good luck
Durga
On 11/3/06, calin pal wrote:
/*please read the mail and ans my query*/
sir,
in f
Calin
You do not need to be root to do this. Just add the following line to your
.bashrc file, located in your home directory. (or an equivalent file if you
are not a bash user):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
(if you ran ./configure with a --prefix option, then use $pref
As an alternate suggestion (although George's is better, since this will
affect your entire network connectivity), you could override the default TCP
timeout values with the "sysctl -w" command.
The following three OIDs affect TCP timeout behavior under Linux:
net.ipv4.tcp_keepalive_intvl = 75 <-
Very interesting, indeed! Message passing running over raw Ethernet using
cheap COTS PCs is indeed the need of the hour for people like me who have
very shallow pockets. Great work! What would make this effort *really* cool
is to have a one-to-one mapping of APIs from the MPI domain to the GAMMA domain,
down to ~130MB/s and after a long run finally comes to ~54MB/s. Why is
this kind of network slowdown happening over time?
Regards,
Jayanta
On Mon, 23 Oct 2006, Durga Choudhury wrote:
> Did you try channel bonding? If your OS is Linux, there are plenty of
> "howto" on the internet wh
Did you try channel bonding? If your OS is Linux, there are plenty of
"howto" guides on the internet that will tell you how to do it.
However, your CPU might be the bottleneck in this case. How much CPU
horsepower is left at 140MB/s?
If the CPU *is* the bottleneck, changing your network driver
George
I knew that was the answer to Calin's question, but I still would like to
understand the issue:
by default, the openMPI installer installs the libraries in /usr/local/lib,
which is a standard location for the runtime linker to search for libraries.
So *why* do I need to explicitly specify this
My opinion would be to use pthreads, for a couple of reasons:
1. You don't need an OMP aware compiler; any old compiler would do.
2. The pthread library is more widely adopted and hence might be better
optimized than the code emitted by an OMP compiler.
If your operating system is Linux, you may u
.
Thanks
Durga
On 8/28/06, Miguel Figueiredo Mascarenhas Sousa Filipe <
miguel.fil...@gmail.com> wrote:
Hi there,
On 8/27/06, Durga Choudhury wrote:
>
> Hi all
>
> I am getting an error (details follow) in the simplest of the possible
> test scenarios:
>
> Two
Hi all
I am getting an error (details follow) in the simplest of the possible test
scenarios:
Two identical regular Dell PCs connected back-to-back via an ethernet switch
on the 10/100 ethernet. Both run Fedora Core 4. Identical version (1.1) of
Open MPI is compiled and installed on both of them
I am also guessing you might be actually using only one of the gigabit links
even though you have two available. I assume you have configured the
equal-cost-multi-path (ECMP) IP routes between the two hosts correctly; even
then, ECMP, as implemented in most IP stacks (not sure if there is an RFC
f
Hi All
We have been using the Argonne MPICH (over TCP/IP) on our in-house designed
embedded multicomputer for the last several months with satisfactory results.
Our network technology is custom built and is *not* InfiniBand (or any
published standard, such as Myrinet) based. This is due to the na
Do you want to use MPI to chain a bunch of such laptops together (e.g. via
ethernet) or just for the cores to talk to each other? If the latter, you do
not need MPI. Your SMP operating system (e.g. Linux) will automatically
utilize both cores. The Linux 2.6 kernel also supports processor affinity
Thanks, Jonathan
This patch would be particularly useful for me.
Best regards
Durga
On 5/11/06, Jonathan Day wrote:
Hi,
As I've said before, I've been working on MIPS support
for OpenMPI, as the current implementation is
Irix-specific in places. Well, it is finally done and
I present to y
> On Feb 28, 2006, at 7:45 PM, Durga Choudhury wrote:
>
> > When I downloaded openMPI and tried to compile it for our MIPS64
> > platform, it broke at 3 places.
>
> I'm guessing since you call it MIPS64 that you aren't running IRIX,
> since most SGI users just call
Hi All
I am a total novice to the MPI world, so please forgive me if any of my
questions/comments sound stupid.
First, a few *possible* bugfixes:
When I downloaded openMPI and tried to compile it for our MIPS64 platform,
it broke at 3 places.
1. The configure script in the root directory did no