Hi,
I think, on the contrary, that he did notice the AMD/ARM issue. I suppose
you haven't read the text (and I like the fact that there are different
opinions on this issue).
Matthieu
2018-01-05 8:23 GMT+01:00 Gilles Gouaillardet :
> John,
>
>
> The technical assessment so to speak is linked in
I don't think there is anything OpenMPI can do for you here. The issue
clearly lies in how you are compiling your application.
To start, you can try to compile without the --march=generic flag and use
something as generic as possible (i.e. only SSE2). Then if this doesn't
work for your app, do the same fo
If you don't need to know whether the data was transferred or not, then why do
you transfer it in the first place? The scheme seems rather strange, as
you have no way of knowing that the data was actually transferred. Actually,
without Wait and Test, you can pretty much assume you don't transfer
anything.
Hi,
I think you have to call either Wait or Test to make the communications
move forward in the general case. Some hardware may have a hardware thread
that progresses the communication, but usually you have to make it "advance"
yourself by calling either Wait or Test.
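As a rough sketch (buffer, count, dest and do_some_computation are just
placeholders, not from your code), the usual pattern looks like this:

/* Post a non-blocking send, then keep poking the MPI library so the
 * transfer can progress while we compute. */
MPI_Request req;
int done = 0;
MPI_Isend(buffer, count, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &req);
while (!done) {
    do_some_computation();                     /* hypothetical work */
    MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* lets MPI advance  */
}
/* or simply block until completion: MPI_Wait(&req, MPI_STATUS_IGNORE); */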
Cheers,
Matthieu
2015-04-03 5:4
just
> set the request to MPI_SUCCESS for ranks which I will send zero buffer to
> and have no receive call?
> Is there any other MPI routine that can do MPI_Scatterv on selected ranks,
> without creating a new communicator?
>
>
>
>
> On Wed, Jul 16, 2014 at 3:42 PM, Matthieu B
what if I cannot bypass the send. For
> example if I have MPI_Iscatter and for some ranks the send buffer has zero
> size. At those ranks it will skip the MPI_Iscatter routine, which means I
> have some zero-size sends and no receives.
>
>
>
>
> On Wed, Jul 16, 2014
Hi,
The easiest would be to bypass the Isend as well! The standard is
clear: you need a matching Isend/Irecv pair.
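A minimal sketch of what I mean (buffer names and counts are placeholders):
guard both sides, so every Isend that is actually posted has a matching Irecv,
and zero-size transfers are skipped entirely:

/* Only post the pair when there is really data to move. */
MPI_Request sreq, rreq;
if (send_count > 0) {
    MPI_Isend(send_buf, send_count, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &sreq);
    MPI_Irecv(recv_buf, recv_count, MPI_DOUBLE, src,  tag, MPI_COMM_WORLD, &rreq);
    MPI_Wait(&sreq, MPI_STATUS_IGNORE);
    MPI_Wait(&rreq, MPI_STATUS_IGNORE);
}
/* If send_count == 0, neither the Isend nor the Irecv is posted. */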
Cheers,
2014-07-16 14:27 GMT+01:00 Ziv Aginsky :
> I have a loop in which I will do some MPI_Isend. According to the MPI
> standard, for every send you need a recv
>
> If one or sev
A simple test would be to run it with valgrind, so that out-of-bounds
reads and writes become obvious.
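For instance (assuming your binary is called ./my_app; the name and process
count are just examples):

mpirun -np 4 valgrind --log-file=valgrind.%p.log ./my_app

Each rank then writes its own valgrind report, and invalid reads/writes show
up with a stack trace.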
Cheers,
Matthieu
2014-05-08 21:16 GMT+02:00 Spenser Gilliland :
> George & Matthieu,
>
>> The Alltoall should only return when all data is sent and received on
>> the current rank, so there sho
The Alltoall should only return once all data has been sent and received on
the current rank, so there shouldn't be any race condition.
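Just to illustrate the semantics (buffer names and counts are placeholders),
the blocking call looks like this and only returns once this rank's part of
the exchange is finished:

/* Every rank sends `count` elements to each rank and receives `count`
 * elements from each rank. */
MPI_Alltoall(send_buf, count, MPI_DOUBLE,
             recv_buf, count, MPI_DOUBLE, MPI_COMM_WORLD);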
Cheers,
Matthieu
2014-05-08 15:53 GMT+02:00 Spenser Gilliland :
> George & other list members,
>
> I think I may have a race condition in this example that is masked
he results are
> the same. It seems the communication didn't overlap with computation.
>
> Regards,
> Zehan
>
> On 4/5/14, Matthieu Brucher wrote:
>> Hi,
>>
>> Try waiting on all gathers at the same time, not one by one (this is
>> what non blo
Hi,
Try waiting on all the gathers at the same time, not one by one (this is
what non-blocking collectives are made for!)
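Roughly what I have in mind (NUM_GATHERS, buffers and counts are placeholders):
start all the non-blocking allgathers first, then complete them with a single
MPI_Waitall:

/* Overlap several non-blocking allgathers and wait for all of them at once. */
MPI_Request reqs[NUM_GATHERS];
for (int i = 0; i < NUM_GATHERS; i++) {
    MPI_Iallgather(send_buf[i], count, MPI_DOUBLE,
                   recv_buf[i], count, MPI_DOUBLE,
                   MPI_COMM_WORLD, &reqs[i]);
}
/* ... independent computation can go here to overlap ... */
MPI_Waitall(NUM_GATHERS, reqs, MPI_STATUSES_IGNORE);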
Cheers,
Matthieu
2014-04-05 10:35 GMT+01:00 Zehan Cui :
> Hi,
>
> I'm testing the non-blocking collective of OpenMPI-1.8.
>
> I have two nodes with Infiniband to perform allgath
The reason the error is not caught is that opal_argv_join
doesn't get argc as one of its parameters, so it can't check the
value. It just assumes the standard was respected.
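That is also why argc is not needed: a standard-conforming argv can simply be
walked until the NULL sentinel. A minimal illustration (not the actual
opal_argv_join code):

/* Count the entries of a NULL-terminated argv-style array. */
static int count_args(char **argv)
{
    int n = 0;
    while (argv[n] != NULL)
        n++;
    return n;
}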
Matthieu
2013/11/12 Ralph Castain :
>
> On Nov 12, 2013, at 8:56 AM, Matthieu Brucher
> wrote:
>
> I
It seems that argv[argc] should always be NULL according to the
standard. So the OMPI failure is not actually a bug!
Cheers,
2013/11/12 Matthieu Brucher :
> Interestingly enough, in ompi_mpi_init, opal_argv_join is called
> without the array length, so I suppose that in the usual argc/argv
>
fault occurred at MPI_Init. The code works fine if I use
> MPI_Init(NULL,NULL) instead. The same code also compiles and runs without a
> problem on my laptop with mpich2-1.4.
>
> Best,
> Yu-Hang
>
>
>
> On Tue, Nov 12, 2013 at 11:18 AM, Matthieu Brucher
> wrote:
>>
Hi,
Are you sure this is the correct code? This seems strange and not a good idea:
MPI_Init(&argc,&argv);
// do something...
for( int i = 0 ; i < argc ; i++ ) delete [] argv[i];
delete [] argv;
Did you mean argc_new and argv_new instead?
Do you have the same error without CUDA?
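If the intent was to free a copy of the arguments allocated by the program
itself, here is a sketch of what I had in mind (argc_new/argv_new are
hypothetical names; needs <cstring> for strlen/strcpy):

// Build your own copy and only delete what you allocated with new[].
char **argv_new = new char*[argc + 1];
for (int i = 0; i < argc; i++) {
    argv_new[i] = new char[strlen(argv[i]) + 1];
    strcpy(argv_new[i], argv[i]);
}
argv_new[argc] = NULL;      // keep the NULL sentinel the standard requires

MPI_Init(&argc, &argv);     // leave the original argc/argv untouched
// ... use argv_new ...
for (int i = 0; i < argc; i++) delete[] argv_new[i];
delete[] argv_new;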
Hi,
I tried with the latest nightly (well now it may not be the latest
anymore), and orte-info didn't crash. So I'll try again later with my
app.
thanks,
Matthieu
2013/9/15 Matthieu Brucher :
> I can try later this week, yes.
> Thanks
>
> On Sep 15, 2013 19:09, "
.7.3 shortly and it is mostly complete at this time.
>
>
> On Sep 15, 2013, at 10:43 AM, Matthieu Brucher
> wrote:
>
> Yes, ompi_info does not crash.
> On Sep 15, 2013 18:05, "Ralph Castain" wrote:
>
>> No - out of curiosity, does ompi_info work? I'm wo
Yes, ompi_info does not crash.
On Sep 15, 2013 18:05, "Ralph Castain" wrote:
> No - out of curiosity, does ompi_info work? I'm wondering if this is
> strictly an orte-info problem.
>
> On Sep 15, 2013, at 10:03 AM, Matthieu Brucher
> wrote:
>
> Just --wi
Just --with-lsf. Perhaps because it must then be launched through LSF?
On Sep 15, 2013 18:02, "Ralph Castain" wrote:
> I'm not entirely sure - I don't see anything that would cause that problem
> in that location. How did you configure this?
>
>
> On
Hi,
I compiled OpenMPI on a RHEL6 box with LSF support, but when I run
something, it crashes. orte-info also crashes:
Package: Open MPI mbruc...@xxx.com Distribution
Open RTE: 1.7.2
Open RTE repo revision: r28673
Open RTE release date: Jun 26, 2013
Hi,
I saw a typo on the FAQ page
http://www.open-mpi.org/faq/?category=mpi-apps. It says that the
variable to change the CXX compiler is OMPI_MPIXX, but it is
OMPI_MPICXX (a C is missing).
Cheers,
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.c
Hi,
I guess you have another problem in your application, probably a memory error
somewhere else.
Cheers,
2013/6/21 Mohamad Ali Rostami
> Hi there
>
> My MPI program works completely without any problem in the interactive
> mode, i.e. before submitting to HPC. However when I submit it with "bsu
Hi,
This may be because you have an error in the parallel communication
pattern. Without more information, it is difficult to say anything
else. Try debugging your application.
Matthieu
2013/2/24, Mohammad Mohsenie :
> Dear All,
> Greetings,
>
> I have installed openmpi to make my siesta packag
Hi,
You need to use the command prompt provided by Visual Studio and it will
work.
Matthieu
2012/5/18 Ghobad Zarrinchian
> Hi. I've installed Visual Studio 2008 on my machine. But I still have the
> same problem. How can I solve it? thx
>
>
> On Fri, May 11, 2012 at 10:50 PM, Ghobad Zarrinchia
width.
Just my opinion.
Matthieu Brucher
2011/12/23 Santosh Ansumali
> Dear All,
> We are running a PDE solver which is memory bound. Due to
> cache-related issues, a smaller number of grid points per core leads to
> better performance for this code. Thus, though available mem
each
other. This is what MPI_Init is used for.
Matthieu Brucher
2011/12/14 Dmitry N. Mikushin
> Dear colleagues,
>
> For GPU Winter School powered by Moscow State University cluster
> "Lomonosov", the OpenMPI 1.7 was built to test and popularize CUDA
> capabilities of
Don't forget that MPT has some optimizations OpenMPI may not have, such as
"overriding" free(). This way, MPT can get a huge performance boost
if you're allocating and freeing memory frequently, and the same happens if you
communicate often.
Matthieu
2010/12/21 Gilbert Grosdidier :
> Hi George,
> Thanks for yo
2010/6/21 Jack Bryan :
> Hi,
> thank you very much for your help.
> What is the meaning of " must find a system so that every task can be
> serialized in the same form." What is the meaning of "serize " ?
Serialization is the process of converting an object instance into a
text/binary stream, and to c
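As a tiny sketch of the idea with plain MPI (the task struct and its fields
are made up for the example), MPI_Pack turns an object into a contiguous byte
stream that can be sent with MPI_Send, and MPI_Unpack restores it on the
receiving side:

/* Hypothetical task object. */
struct task { int id; double params[4]; };

struct task t = { 42, { 1.0, 2.0, 3.0, 4.0 } };
char buf[256];
int pos = 0;

/* Serialize: append the fields into one contiguous buffer. */
MPI_Pack(&t.id, 1, MPI_INT, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
MPI_Pack(t.params, 4, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);

/* The first `pos` bytes of buf can now be sent, e.g.
 * MPI_Send(buf, pos, MPI_PACKED, dest, tag, MPI_COMM_WORLD);
 * and decoded with MPI_Unpack on the other side. */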
2010/6/20 Jack Bryan :
> Hi, Matthieu:
> Thanks for your help.
> Most of your ideas show that what I want to do.
> My scheduler should be able to be called from any C++ program, which can
> put
> a list of tasks to the scheduler and then the scheduler distributes the
> tasks to other client nodes.
Hi Jack,
What you are seeking is the client/server pattern. Have one node act
as a server. It will create a list of tasks or even a graph of tasks
if you have dependencies, and then create clients that will connect to
the server with an RPC protocol (I've done this with a SOAP+TCP
protocol, the se
Hi,
You can try MPE (free) or Vampir (not free, but can be integrated
inside OpenMPI).
Matthieu
2009/9/29 Rahul Nabar :
> I have a code that seems to run about 40% faster when I bond together
> twin eth interfaces. The question, of course, arises: is it really
> producing so much traffic to keep
Strange that it indicates the whole path. I had the same issue, but it
only said that orted couldn't be found. In my .bashrc, I added what was
needed to get orted into my PATH, and it worked.
Matthieu
2009/8/8 Ralph Castain :
> Not that I know of - I don't think we currently have any way for you to
>
> IF boost is attached to MPI 3 (or whatever), AND it becomes part of the
> mainstream MPI implementations, THEN you can have the discussion again.
Hi,
At the moment, I think that Boost.MPI only supports MPI 1.1, and even
then, some additional work may still be needed, at least regarding the complex
datat
Thanks a lot for this.
I've just checked everything again, recompiled my code as well (I'm
using SCons so it detects that the headers and the libraries changed)
and it works without a warning.
Matthieu
2009/5/12 Jeff Squyres :
> On May 12, 2009, at 8:17 AM, Matthieu Brucher w
2009/5/12 Jeff Squyres :
> Or it could be that you installed 1.3.2 over 1.2.8 -- some of the 1.2.8
> components that no longer exist in the 1.3 series are still in the
> installation tree, but failed to open properly (unfortunately, libltdl gives
> an incorrect "file not found" error message if it
Hi,
I've managed to use 1.3.2 (still not with LSF and InfiniPath, I'm taking it
one step at a time), but I have additional warnings that didn't
show up in 1.2.8:
[host-b:09180] mca: base: component_find: unable to open
/home/brucher/lib/openmpi/mca_ras_dash_host: file not found (ignored)
[host-b:09
ssary environment variables and
> eventually calls the correct mpirun. (the option "-a openmpi" tells LSF that
> we're using OpenMPI so don't try to autodetect)
>
>
>
> Regards,
>
>
>
> Jeroen Kleijer
>
> On Tue, May 5, 2009 at 2:23 PM, Jeff Sq
2009/5/6 Jeff Squyres :
> On May 5, 2009, at 10:01 AM, Matthieu Brucher wrote:
>
>> > What Terry said is correct. It means that "mpirun" will use, under the
>> > covers, the "native" launching mechanism of LSF to launch jobs (vs.,
>> > say,
>
2009/5/5 Jeff Squyres :
> On May 5, 2009, at 6:10 AM, Matthieu Brucher wrote:
>
>> My first question is about what LSF support in OpenMPI means. When mpirun is
>> executed, is it an LSF job that is actually run? Or what does it
>> imply? I've tried to search on the open
r/use case.
My second question is about the LSF detection. lsf.h is detected, but
when lsb_launch is searched for in libbat.so, it fails because
parse_time and parse_time_ex are not found. Is there a way to add
additional LSF libraries so that the search can succeed?
Matthieu Brucher
--
Informat