Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread Matthieu Brucher
Hi,

I think, on the contrary, that he did notice the AMD/ARM issue. I suppose
you haven't read the text (and I like the fact that there are different
opinions on this issue).

Matthieu

2018-01-05 8:23 GMT+01:00 Gilles Gouaillardet :

> John,
>
>
> The technical assessment, so to speak, is linked in the article and is
> available at https://googleprojectzero.blogspot.jp/2018/01/reading-privileged-memory-with-side.html.
>
> The long rant against Intel PR blinded you, and you did not notice that AMD and
> ARM (and, though not mentioned here, Power and SPARC too) are vulnerable to
> some of these bugs.
>
>
> Full disclosure: I have no affiliation with Intel, but I am getting pissed
> off by the hysteria around this issue.
>
> Gilles
>
>
> On 1/5/2018 3:54 PM, John Chludzinski wrote:
>
>> That article gives the best technical assessment I've seen of Intel's
>> architecture bug. I noted the discussion's subject and thought I'd add some
>> clarity. Nothing more.
>>
>> For the TL;DR crowd: get an AMD chip in your computer.
>>
>> On Thursday, January 4, 2018, r...@open-mpi.org wrote:
>>
>> Yes, please - that was totally inappropriate for this mailing list.
>> Ralph
>>
>>
>> On Jan 4, 2018, at 4:33 PM, Jeff Hammond wrote:
>>>
>>> Can we restrain ourselves to talk about Open-MPI or at least
>>> technical aspects of HPC communication on this list and leave the
>>> stock market tips for Hacker News and Twitter?
>>>
>>> Thanks,
>>>
>>> Jeff
>>>
>>> On Thu, Jan 4, 2018 at 3:53 PM, John Chludzinski wrote:
>>>
>>> From https://semiaccurate.com/2018/01/04/kaiser-security-holes-will-devastate-intels-marketshare/

Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread John Chludzinski
I believe this snippet sums it up pretty well:

"Now you have a bit more context about why Intel’s response was, well, a
non-response. They blamed others, correctly, for having the same problem
but their blanket statement avoided the obvious issue of the others aren’t
crippled by the effects of the patches like Intel. Intel screwed up, badly,
and are facing a 30% performance hit going forward for it. AMD did right
and are probably breaking out the champagne at HQ about now."

On Fri, Jan 5, 2018 at 5:38 AM, Matthieu Brucher  wrote:

> Hi,
>
> I think, on the contrary, that he did notice the AMD/ARM issue. I suppose
> you haven't read the text (and I like the fact that there are different
> opinions on this issue).
>
> Matthieu

Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread Ray Sheppard

Hello All,
  Please people, just drop it.  I appreciated the initial post in
response to the valid question of how these bugs might impact OMPI
and message passing in general.  At this point, y'all are beating the 
proverbial dead horse.  If you wish to debate, please mail each other 
directly.  Thank you.

   Ray


Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread r...@open-mpi.org
That is enough, folks. This is an email forum for users to get help regarding 
Open MPI, not a place to vent your feelings about specific vendors. We ask that 
you respect that policy and refrain from engaging in such behavior.

We don’t care if you are quoting someone else - the fact that “Mikey said it” 
doesn’t justify violating the policy. So please stop this here and now.

Thank you
Ralph



Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread Jeff Hammond
An article with "market share" in the title is not a technical assessment,
but in any case, you aren't willing to respect the request to focus on
Open-MPI on the Open-MPI list, so I'll be piping mail from your address to
trash from now on.

Jeff

On Thu, Jan 4, 2018 at 10:54 PM, John Chludzinski <
john.chludzin...@gmail.com> wrote:

> That article gives the best technical assessment I've seen of Intel's
> architecture bug. I noted the discussion's subject and thought I'd add some
> clarity. Nothing more.
>
> For the TL;DR crowd: get an AMD chip in your computer.
>
> On Thursday, January 4, 2018, r...@open-mpi.org  wrote:
>
>> Yes, please - that was totally inappropriate for this mailing list.
>> Ralph
>>
>>
>> On Jan 4, 2018, at 4:33 PM, Jeff Hammond  wrote:
>>
>> Can we restrain ourselves to talk about Open-MPI or at least technical
>> aspects of HPC communication on this list and leave the stock market tips
>> for Hacker News and Twitter?
>>
>> Thanks,
>>
>> Jeff
>>
>> On Thu, Jan 4, 2018 at 3:53 PM, John Chludzinski wrote:
>>
>>> From https://semiaccurate.com/2018/01/04/kaiser-security-holes-will-devastate-intels-marketshare/
>>>
>>> Kaiser security holes will devastate Intel’s marketshare
>>> Analysis: This one tips the balance toward AMD in a big way
>>> Jan 4, 2018 by Charlie Demerjian
>>>
>>>
>>>
>>> This latest decade-long critical security hole in Intel CPUs is going to
>>> cost the company significant market share. SemiAccurate thinks it is not
>>> only consequential but will shift the balance of power away from Intel CPUs
>>> for at least the next several years.
>>>
>>> Today’s latest crop of gaping security flaws has three sets of holes
>>> across Intel, AMD, and ARM processors, along with a slew of official
>>> statements and detailed analyses. On top of that, the statements from
>>> vendors range from detailed and direct to intentionally misleading and
>>> slimy. Let’s take a look at what the problems are, whom they affect, and what
>>> the outcome will be. Those outcomes range from trivial patching to
>>> destroying the market share of Intel servers, and no, we are not joking.
>>>
>>> (*Author’s Note 1:* For the technical readers, we are simplifying a lot;
>>> sorry, we know this hurts. The full disclosure docs are linked, read them
>>> for the details.)
>>>
>>> (*Author’s Note 2:* For the financially oriented subscribers out there,
>>> the parts relevant to you are at the very end, in the section titled *Rubber
>>> Meet Road*.)
>>>
>>> *The Problem(s):*
>>>
>>> As we said earlier, there are three distinct security flaws that all fall
>>> somewhat under the same umbrella. All are ‘new’ in the sense that the class
>>> of attacks hasn’t been publicly described before, and all are very obscure
>>> CPU speculative-execution and timing-related problems. The extent to which the
>>> fixes affect differing architectures also ranges from minor to near-crippling
>>> slowdowns. Worse yet, all three flaws aren’t bugs or errors; they
>>> exploit correct CPU behavior to allow the systems to be hacked.
>>>
>>> The three problems are cleverly labeled Variant One, Variant Two, and
>>> Variant Three. Google Project Zero was the original discoverer of them and
>>> has labeled the classes as Bounds Check Bypass, Branch Target Injection,
>>> and Rogue Data Cache Load, respectively. You can read up on the
>>> extensive and gory details at
>>> https://googleprojectzero.blogspot.jp/2018/01/reading-privileged-memory-with-side.html
>>> if you wish.
>>>
>>> If you are the TLDR type the very simplified summary is that modern CPUs
>>> will speculatively execute operations ahead of the one they are currently
>>> running. Some architectures will allow these executions to start even when
>>> they violate privilege levels, but those instructions are killed or rolled
>>> back hopefully before they actually complete running.
>>>
>>> Another feature of modern CPUs is virtual memory which can allow memory
>>> from two or more processes to occupy the same physical page. This is a good
>>> thing because if you have memory from the kernel and a bit of user code in
>>> the same physical page but different virtual pages, changing from kernel to
>>> userspace execution doesn’t require a page fault. This saves massive
>>> amounts of time and overhead giving modern CPUs a huge speed boost. (For
>>> the really technical out there, I know you are cringing at this
>>> simplification, sorry).
>>>
>>> These two things together allow you to do some interesting things and
>>> along with timing attacks add new weapons to your hacking arsenal. If you
>>> have code executing on one side of a virtual memory page boundary, it can
>>> speculatively execute the next few instructions on the physical page that
>>> cross the virtual page boundary. This isn’t a big deal unless the two
>>> virtual pages are mapped to processes that are from different users or
>>> different privilege levels. Then you h

[OMPI users] Installation of openmpi-1.10.7 fails

2018-01-05 Thread Vahid Askarpour
I am attempting to install openmpi-1.10.7 on CentOS Linux (7.4.1708) using 
GCC-6.4.0. 

When compiling, I get the following error:

make[2]: Leaving directory '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ob1'
Making all in mca/pml/ucx
make[2]: Entering directory '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ucx'
  CC   pml_ucx.lo
  CC   pml_ucx_request.lo
  CC   pml_ucx_datatype.lo
  CC   pml_ucx_component.lo
  CCLD mca_pml_ucx.la
libtool:   error: require no space between '-L' and '-lrt'
make[2]: *** [Makefile:1725: mca_pml_ucx.la] Error 1
make[2]: Leaving directory '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ucx'
make[1]: *** [Makefile:3261: all-recursive] Error 1
make[1]: Leaving directory '/home/vaskarpo/bin/openmpi-1.10.7/ompi'
make: *** [Makefile:1777: all-recursive] Error 1

Thank you,

Vahid


Re: [OMPI users] Installation of openmpi-1.10.7 fails

2018-01-05 Thread Jeff Squyres (jsquyres)
I forget what the underlying issue was, but this issue just came up and was 
recently fixed:

https://github.com/open-mpi/ompi/issues/4345

However, the v1.10 series is fairly ancient -- the fix was not applied to that 
series.  The fix was applied to the v2.1.x series, and a snapshot tarball 
containing the fix is available here (generally just take the latest tarball):

https://www.open-mpi.org/nightly/v2.x/

The fix is still pending for the v3.0.x and v3.1.x series (i.e., there are 
pending pull requests that haven't been merged yet, so the nightly snapshots 
for the v3.0.x and v3.1.x branches do not yet contain this fix).
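
For anyone who has not built from a snapshot before, it is the usual tarball build; a rough sketch (the tarball name is a placeholder -- substitute whatever is newest in that directory):

    # grab the newest snapshot listed at https://www.open-mpi.org/nightly/v2.x/
    wget https://www.open-mpi.org/nightly/v2.x/openmpi-v2.x-YYYYMMDD.tar.bz2   # placeholder name
    tar xjf openmpi-v2.x-YYYYMMDD.tar.bz2
    cd openmpi-v2.x-YYYYMMDD

    # build into a private prefix so an existing MPI install is not disturbed
    ./configure --prefix=$HOME/opt/openmpi-v2.x-nightly
    make -j 4 all
    make install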





-- 
Jeff Squyres
jsquy...@cisco.com





Re: [OMPI users] Installation of openmpi-1.10.7 fails

2018-01-05 Thread Vahid Askarpour
Thank you, Jeff, for your suggestion to use the v2.1 series.

I am attempting to use openmpi with EPW. On the EPW website 
(http://epw.org.uk/Main/DownloadAndInstall), it is stated that:


Compatibility of EPW

EPW is tested and should work on the following compilers and libraries:

  *   gcc640 serial
  *   gcc640 + openmpi-1.10.7
  *   intel 12 + openmpi-1.10.7
  *   intel 17 + impi
  *   PGI 17 + mvapich2.3

EPW is known to have the following incompatibilities with:

  *   openmpi 2.0.2 (but likely all the 2.x.x versions): works, but has a memory
leak. If you open and close a file a lot of times with openmpi 2.0.2, the
memory increases linearly with the number of times the file is opened.

So I am hoping to avoid the 2.x.x series and use the 1.10.7 version suggested 
by the EPW developers. However, it appears that this is not possible.

Vahid


Re: [OMPI users] Installation of openmpi-1.10.7 fails

2018-01-05 Thread Jeff Squyres (jsquyres)
You can still give Open MPI 2.1.1 a try.  It should be source compatible with 
EPW.  Hopefully the behavior is close enough that it should work.

If not, please encourage the EPW developers to upgrade.  v3.0.x is the current 
stable series; v1.10.x is ancient.
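
If you go that route, the main pitfall is accidentally rebuilding EPW against the old MPI; a sketch of the environment setup, assuming 2.1.1 was installed under $HOME/opt/openmpi-2.1.1 (an illustrative prefix):

    # put the 2.1.1 compiler wrappers and libraries first on the search paths
    export PATH=$HOME/opt/openmpi-2.1.1/bin:$PATH
    export LD_LIBRARY_PATH=$HOME/opt/openmpi-2.1.1/lib:$LD_LIBRARY_PATH

    # sanity checks before rebuilding EPW with these wrappers
    which mpif90          # should point into $HOME/opt/openmpi-2.1.1/bin
    mpif90 --version      # should report the GCC 6.4.0 backend
    mpirun --version      # should report Open MPI 2.1.1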





-- 
Jeff Squyres
jsquy...@cisco.com




Re: [OMPI users] Installation of openmpi-1.10.7 fails

2018-01-05 Thread Gilles Gouaillardet
 Vahid,

This looks like the description of the issue reported at
https://github.com/open-mpi/ompi/issues/4336
The fix is currently available in 3.0.1rc1, and I will backport the fix to
the v2.x branch.
A workaround is to use ROMIO instead of ompio; you can achieve this with
mpirun --mca io ^ompio ...
(FWIW, the 1.10 series uses ROMIO by default, so there is no leak out of the box.)

IIRC, a possible (and ugly) workaround for the compilation issue is to
configure --with-ucx=/usr ...
That being said, you should really upgrade to a supported version of Open
MPI, as previously suggested.
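
Spelled out as commands (the trailing dots stand for whatever other configure/run options you already use, and the ompi_info check is just a convenience):

    # ugly workaround for the 1.10.7 build failure: point configure at the system UCX
    ./configure --with-ucx=/usr ...

    # workaround for the ompio leak on the 2.x/3.x series: run with ROMIO instead of ompio
    mpirun --mca io ^ompio ...

    # ompi_info lists which io components (ompio, romio) the install provides
    ompi_info | grep "MCA io"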

Cheers,

Gilles


Re: [OMPI users] Installation of openmpi-1.10.7 fails

2018-01-05 Thread Vahid Askarpour
Gilles,

I will try the 3.0.1rc1 version to see how it goes.

Thanks,

Vahid
