On Fri, Oct 12, 2018 at 8:02 AM Alan Cox <gno...@lxorguk.ukuu.org.uk> wrote:
>
> > My understanding is that the standard “breadcrumb” is that a cache line is
> > fetched into L1D, and that the cache line in question will go into L1D even
> > if it was previously not cached at all. So flushing L1D will cause the
> > timing from a probe to be different, but the breadcrumb is still there, and
> > the attack will still work.
>
> Flush not write back. The L1D is empty (or full of other stuff the way
> the prototype I tested did it as x86 lacked a true L1 flushing primitive)
I'm not sure I follow what you're saying. If an attacker is learning some information based on whether a given cache line is in L1D, I'm asking why the attacker can't learn exactly the same information based on whether the cache line is in L2. Or using any of the other variants that Jann is talking about.

Adding ~1600 cycles, plus the slowdown from refilling the flushed cache, to a code path that we hope isn't hot, in order to mitigate one particular means of exploiting potential bugs, seems a bit dubious to me.
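(To make that concrete, here's a rough, purely illustrative userspace sketch of the kind of probe I have in mind. The probed address, the threshold, and the latency numbers below are assumptions rather than measurements from any particular part, and a real attack would also have to cope with prefetchers and noise.)

        #include <stdint.h>
        #include <x86intrin.h>

        /* Time a single load with rdtscp.  Illustrative only. */
        static inline uint64_t timed_load(volatile uint8_t *p)
        {
                unsigned int aux;
                uint64_t start, end;

                start = __rdtscp(&aux);
                _mm_lfence();           /* keep the probe load from starting early */
                (void)*p;               /* the probe load */
                end = __rdtscp(&aux);   /* waits for the load above to complete */
                return end - start;
        }

        /*
         * Roughly: a handful of cycles for an L1D hit, a few tens of cycles
         * for an L2/LLC hit, a couple hundred for DRAM.  So even with L1D
         * flushed, whether the line is in L2 still shows up in the latency.
         */
        static int line_was_cached(volatile uint8_t *probe_addr)
        {
                return timed_load(probe_addr) < 100;    /* made-up threshold */
        }

The point of the sketch: the flush changes which latency bucket the probe lands in, but the cached-or-not signal is still there at the next level.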