On Jan 4, 2018, at 11:45 PM, r...@open-mpi.org wrote:

> As more information continues to surface, it is clear that the original 
> article that spurred this thread was somewhat incomplete - probably released 
> a little too quickly, before full information was available. There is still 
> some confusion out there, but the gist from surfing the various articles (and 
> trimming away the hysteria) appears to be:
> 
> * there are two security issues, both stemming from the same root cause. The 
> “problem” has actually been around for nearly 20 years, but faster processors 
> are making it much more visible.
> 
> * one problem (Meltdown) specifically impacts at least Intel, ARM, and AMD 
> processors. This problem is the one that the kernel patches address, as it can 
> be corrected via software, albeit with some impact that varies based on 
> application. Apps that perform lots of kernel services will see larger impacts 
> than those that don’t use the kernel much (see the rough syscall-timing sketch 
> below).
> 
> * the other problem (Spectre) appears to impact _all_ processors (including, 
> by some reports, SPARC and Power). This problem lacks a software solution.
> 
> * the “problem” is only a problem if you are running on shared nodes - i.e., 
> if multiple users share a common OS instance, since it allows one user to 
> potentially access another user’s kernel information. So HPC installations 
> that allocate complete nodes to a single user might want to take a closer 
> look before installing the patches. Ditto for your desktop and laptop - 
> unless someone can gain access to the machine, it isn’t really a “problem”.
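To put a rough number on the “lots of kernel services” point, one can time a 
trivial system call in a tight loop and compare patched vs. unpatched kernels. 
A minimal sketch (Linux/glibc assumed, numbers purely illustrative):

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 1000000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        syscall(SYS_getpid);   /* raw syscall, forces a real kernel entry */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per syscall\n", ns / iters);
    return 0;
}

The difference in per-call time before and after the patches gives a ballpark 
for how much each kernel entry becomes more expensive; how much that matters 
then depends on how often an application actually enters the kernel.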

Weren't there some PowerPC processors with strict in-order execution that 
would sidestep this? I could only find a hint about an "EIEIO" instruction. 
Of course, in-order execution might slow the system down too.
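For reference, EIEIO ("Enforce In-order Execution of I/O") is a PowerPC 
memory-barrier instruction that orders loads and stores (notably to I/O and 
cache-inhibited storage); it is not a mode that makes the core execute in 
order. A minimal sketch of how it typically appears in GCC inline assembly 
(the wrapper name here is just illustrative):

#if defined(__powerpc__) || defined(__powerpc64__)
/* Ordering fence for memory accesses; a barrier instruction, not an
 * execution mode. */
static inline void eieio_barrier(void)
{
    __asm__ __volatile__("eieio" ::: "memory");
}
#endif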

-- Reuti


> 
> * containers and VMs don’t fully resolve the problem - the only solution 
> other than the patches is to limit allocations to single users on a node
> 
> HTH
> Ralph
> 
> 
>> On Jan 3, 2018, at 10:47 AM, r...@open-mpi.org wrote:
>> 
>> Well, it appears from that article that the primary impact comes from 
>> accessing kernel services. With an OS-bypass network, that shouldn’t happen 
>> all that frequently, and so I would naively expect the impact to be at the 
>> lower end of the reported scale for those environments. TCP-based systems, 
>> though, might be on the other end.
>> 
>> Probably something we’ll only really know after testing.
>> 
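One way to get a first number for the MPI part itself would be a small 
two-rank ping-pong, run on the interconnect of interest before and after 
patching. A minimal sketch (illustrative only - a proper suite such as the 
OSU micro-benchmarks would be the real test):

/* Minimal MPI ping-pong latency sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 10000;
    char buf[8] = {0};

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("small-message round trip: ~%.2f us\n",
               (t1 - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}

Running it with two ranks on one node and again across two nodes would 
separate the shared-memory path from the network path, which is where the 
OS-bypass vs. TCP difference should show up.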
>>> On Jan 3, 2018, at 10:24 AM, Noam Bernstein <noam.bernst...@nrl.navy.mil> 
>>> wrote:
>>> 
>>> Out of curiosity, have any of the Open MPI developers tested (or care to 
>>> speculate) how strongly Open MPI-based codes (just the MPI part, obviously) 
>>> will be affected by the proposed Intel CPU memory-mapping-related kernel 
>>> patches that are all the rage?
>>> 
>>> https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/
>>>
>>> Noam

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
