> On Nov 21, 2018, at 9:34 AM, Igor Stoppa <igor.sto...@gmail.com> wrote:
>
> Hi,
>
>>> On 13/11/2018 20:36, Andy Lutomirski wrote:
>>> On Tue, Nov 13, 2018 at 10:33 AM Igor Stoppa <igor.sto...@huawei.com> wrote:
>>>
>>> I forgot one sentence :-(
>>>
>>>>> On 13/11/2018 20:31, Igor Stoppa wrote:
>>>>> On 13/11/2018 19:47, Andy Lutomirski wrote:
>>>>>
>>>>> For general rare-writish stuff, I don't think we want IRQs running
>>>>> with them mapped anywhere for write. For AVC and IMA, I'm less sure.
>>>>
>>>> Why would these be less sensitive?
>>>>
>>>> But I see a big difference between my initial implementation and this one.
>>>>
>>>> In my case, since the mapping was shared and visible to all cores, freezing
>>>> the core that is performing the write would have exposed the writable
>>>> mapping to a potential attack run from another core.
>>>>
>>>> If the mapping is private to the core performing the write, then even if that
>>>> core is frozen, it's much harder to figure out from another core what it had
>>>> mapped and where.
>>>>
>>>> To access that mapping, the attack would have to be performed from the ISR, I
>>>> think.
>>>
>>> Unless the secondary mapping is also available to other cores, through
>>> the shared mm_struct?
>> I don't think this matters much. The other cores will only be able to
>> use that mapping when they're doing a rare write.
>
>
> I'm still mulling over this.
> There might be other reasons for replicating the mm_struct.
>
> If I understand correctly how text patching works, it happens
> sequentially, because of the text_mutex used by arch_jump_label_transform().
>
> Which might be fine for this specific case, but I don't think I should introduce
> a global mutex when it comes to data.
> Most likely, if two or more cores want to perform a write-rare operation,
> there is no correlation between them and they could proceed in parallel. And if
> there really is, then the user of the API should introduce its own locking for
> that specific case.
Text patching uses the same VA for different physical addresses, so it needs a
mutex to avoid conflicts. I think that, for rare writes, you should just map
each rare-writable address at a *different* VA. You’ll still need a mutex
(mmap_sem) to synchronize allocation and freeing of rare-writable ranges, but
that shouldn’t have much contention.
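
Roughly what I have in mind, just to show where the lock would sit.  Everything
below is made up for illustration -- there is no prw_* API anywhere, and
prw_map_alias() is a hypothetical helper:

struct prw_region {
	void	*ro_addr;	/* address everyone reads through */
	void	*wr_alias;	/* writable alias, mapped only in prw_mm */
	size_t	size;
};

static struct mm_struct *prw_mm;	/* dedicated mm for rare writes */

/* Only creation/destruction of a region takes the mm's lock. */
static int prw_create_alias(struct prw_region *r)
{
	void *alias;

	down_write(&prw_mm->mmap_sem);
	/* pick a VA unique to this region and map its backing pages there */
	alias = prw_map_alias(prw_mm, r->ro_addr, r->size);
	up_write(&prw_mm->mmap_sem);

	if (IS_ERR(alias))
		return PTR_ERR(alias);

	r->wr_alias = alias;
	return 0;
}

The write itself then doesn't need the lock at all: each region has its own
alias VA, so two cores writing to unrelated regions never serialize against
each other.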
>
> A somewhat unrelated question about text patching: I see that each patching
> operation is validated individually, but wouldn't it be more robust to first
> validate all of them, and to proceed with the actual modifications only after
> they are all found to be compliant?
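
For concreteness, the two-pass variant being described would be roughly the
following -- purely illustrative, the struct and helpers are made up:

/*
 * Validate the whole batch first; touch memory only once every entry
 * has passed.
 */
static int apply_batch(struct patch_entry *batch, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (!patch_entry_is_valid(&batch[i]))
			return -EINVAL;		/* nothing modified yet */

	for (i = 0; i < n; i++)
		patch_entry_apply(&batch[i]);

	return 0;
}
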
>
> And about the actual implementation of write-rare for statically
> allocated variables: am I expected to use Nadav's function,
> or to refactor the code?
I would either refactor it or create a new function to handle the write. The
main thing that Nadav is adding that I think you’ll want to use is the
infrastructure for temporarily switching mms from a non-kernel-thread context.
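
With that in place, the write path would end up looking something like the
sketch below.  The temporary-mm helper names are the ones in Nadav's current
series (they may still change before it's merged), and prw_* is again made up:

/*
 * Perform the actual write through the writable alias, with the
 * dedicated mm mapped only on this CPU and only with IRQs off.
 */
static void prw_write(struct prw_region *r, size_t off,
		      const void *src, size_t len)
{
	temporary_mm_state_t prev;
	unsigned long flags;

	local_irq_save(flags);
	prev = use_temporary_mm(prw_mm);	/* from Nadav's series */

	memcpy(r->wr_alias + off, src, len);

	unuse_temporary_mm(prev);
	local_irq_restore(flags);
}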
>
> The name, referring to text, would definitely not be ok for data.
> And I would also have to generalize it, to deal with larger amounts of data.
>
> I would find it easier, as a first cut, to replicate its behavior and refactor
> only later, once it has stabilized and possibly Nadav's patches have been
> acked.
>
Sounds reasonable. You’ll still want Nadav’s code for setting up the mm in the
first place, though.
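
For reference, the setup itself is tiny -- on the order of the sketch below.
Whether it ends up being mm_alloc() or a copy of init_mm is a detail of
Nadav's series; prw_mm is the made-up dedicated mm from the earlier sketch:

/* One dedicated mm for rare writes, created once at boot. */
static int __init prw_init_mm(void)
{
	prw_mm = mm_alloc();
	if (!prw_mm)
		return -ENOMEM;
	return 0;
}
core_initcall(prw_init_mm);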
> --
> igor