Sorry, I have to resend from my normal mail client due to Gmail's stupid interface. I
am not able to find the plain-text button anywhere anymore...
>On Fri, 5 Aug 2016 at 13:49 Elena Reshetova
>wrote:
>The Checmate sample installs a policy barring new AF_INET connections
>to port 1. We install the hook, a
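The sample is only summarized above; as a hedged sketch, such a policy amounts to a socket_connect-style check of roughly this shape (Checmate's real hooks were eBPF programs, and every name below is illustrative, not the Checmate code):

/* Illustrative sketch: deny new AF_INET connections to port 1 from a
 * socket_connect-style LSM hook. */
static int sample_socket_connect(struct socket *sock,
				 struct sockaddr *address, int addrlen)
{
	struct sockaddr_in *sin = (struct sockaddr_in *)address;

	if (address->sa_family == AF_INET && ntohs(sin->sin_port) == 1)
		return -EPERM;	/* policy: port 1 is off limits */
	return 0;
}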
Sorry for the delayed reply, but I was actually reading and trying to understand
all the involved notions, so it took a while...
> On Fri, Oct 27, 2017 at 06:49:55AM +0000, Reshetova, Elena wrote:
> > Could we possibly have a bit more elaborate discussion on this?
> >
> > Or alt
> On Fri, 20 Oct 2017 10:47:48 +0300
> Elena Reshetova wrote:
>
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter reaches zero, its further increments aren't allowed...
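The properties quoted above map onto one small, fixed usage pattern; a minimal sketch, where struct foo and foo_free() are illustrative names, not from the patch series:

struct foo {
	refcount_t refs;
};

static void foo_free(struct foo *f);

static void foo_init(struct foo *f)
{
	refcount_set(&f->refs, 1);	/* counter is initialized to 1 */
}

static void foo_get(struct foo *f)
{
	refcount_inc(&f->refs);		/* will not resurrect a zero count */
}

static void foo_put(struct foo *f)
{
	if (refcount_dec_and_test(&f->refs))
		foo_free(f);		/* resource freed on the 1 -> 0 transition */
}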
> On Mon, Oct 23, 2017 at 02:09:44PM +0300, Elena Reshetova wrote:
> > Currently the arch-independent implementation of refcount_t in
> > lib/refcount.c provides weaker memory ordering guarantees
> > compared to its analogous atomic_t implementations.
> > While it should not be a problem for most of the actual...
> Note that there's work being done on better documents and updates to this one.
> One document that might be good to read (I have not in fact had time to
> read it myself yet :-():
>
> https://github.com/aparri/memory-model/blob/master/Documentation/explanation.txt
>
I have just finished reading...
> On Mon, Nov 13, 2017 at 09:09:57AM +0000, Reshetova, Elena wrote:
> >
> >
> > > Note that there's work done on better documents and updates to this one.
> > > One document that might be good to read (I have not in fact had time to
> > > read it my
> On Mon, Nov 13, 2017 at 04:01:11PM +0000, Reshetova, Elena wrote:
> >
> > > On Mon, Nov 13, 2017 at 09:09:57AM +0000, Reshetova, Elena wrote:
> > > >
> > > >
> > > > > Note that there's work done on better documents and updates to
> [I missed this followup, other stuff]
>
> On Mon, Oct 23, 2017 at 03:41:49PM +0200, Peter Zijlstra wrote:
> > On Sat, Oct 21, 2017 at 10:21:11AM +1100, Dave Chinner wrote:
> > > On Fri, Oct 20, 2017 at 02:07:53PM +0300, Elena Reshetova wrote:
> > > IMO, that makes it way too hard to review sanely...
> On Thu, Nov 02, 2017 at 11:04:53AM +0000, Reshetova, Elena wrote:
>
> > Well refcount_dec_and_test() is not the only function that has different
> > memory ordering specifics. So, the full answer then for any arbitrary case
> > according to your points above would b
> Elena Reshetova writes:
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter reaches zero, its further increments aren't allowed...
> On Fri 20-10-17 13:26:02, Elena Reshetova wrote:
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter reaches ze
> On Fri, Jan 05, 2018 at 02:57:50PM +0000, Mark Rutland wrote:
> > Note: this patch is an *example* use of the nospec API. It is understood
> > that this is incomplete, etc.
> >
> > Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
> > memory accesses under a bounds check ma
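Since the patch itself is truncated here, a hedged sketch of the pattern the nospec API addresses (array and index names are illustrative):

/* Sketch: clamp the index under speculation before the dependent load, so
 * a mis-predicted bounds check cannot be used to read out of bounds. */
if (idx < ARRAY_SIZE(table)) {
	idx = array_index_nospec(idx, ARRAY_SIZE(table));
	val = table[idx];		/* safe even under speculation */
}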
> On Wed 29-11-17 13:22:20, Elena Reshetova wrote:
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter reaches ze
==
ANNOUNCEMENT AND CALL FOR PARTICIPATION
LINUX SECURITY SUMMIT EUROPE 2018
25-26 October
Adding Theodore & Daniel since I guess they are the best positioned to comment
on the exact strengths of prandom. See my comments below.
> * Reshetova, Elena wrote:
>
> > > 4)
> > >
> > > But before you tweak the patch, a more fundamental question:
> > &
> So a couple of comments; I wasn't able to find the full context for
> this patch, and looking over the thread on kernel-hardening from late
> February still left me confused about exactly what attacks this would help
> us protect against (since this isn't my area and I didn't take the
> time to read al
> On Tue, Apr 16, 2019 at 11:10:16AM +0000, Reshetova, Elena wrote:
> > >
> > > The kernel can execute millions of syscalls per second, I'm pretty sure
> > > there's a statistical attack against:
> > >
> > > > * This is a maximally equidistributed...
Hi,
Sorry for the delay - Easter holidays + I was trying to arrange my brain around
the proposed options.
Here is what I think our options are with regards to the source of randomness:
1) rdtsc or variations based on it (David proposed some CRC-based variants for
example)
2) prandom-based options
3
> From: Reshetova, Elena
> > Sent: 24 April 2019 12:43
> >
> > Sorry for the delay - Easter holidays + I was trying to arrange my brain
> > around the proposed options.
> > Here is what I think our options are with regards to the source of randomness:
> >
>
> On Fri, Jan 18, 2019 at 02:27:25PM +0200, Elena Reshetova wrote:
> > Elena Reshetova (5):
> > sched: convert sighand_struct.count to refcount_t
> > sched: convert signal_struct.sigcnt to refcount_t
>
> These should really be seen by Oleg (bounced) and I'll await his reply.
>
> > sched: co
> On Mon, Jan 21, 2019 at 11:05:03AM -0500, Alan Stern wrote:
> > On Mon, 21 Jan 2019, Peter Zijlstra wrote:
>
> > > Any additional ordering, like the one you have above, is not strictly
> > > required for the proper functioning of the refcount. Rather, you rely on
> > > additional ordering and w
> On Mon, Mar 18, 2019 at 1:16 PM Andy Lutomirski wrote:
> > On Mon, Mar 18, 2019 at 2:41 AM Elena Reshetova
> > wrote:
> > > Performance:
> > >
> > > 1) lmbench: ./lat_syscall -N 100 null
> > > base: Simple syscall: 0.1774 microseconds
> > > random_offset (rdtsc):
> On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > index 7bc105f47d21..38ddc213a5e9 100644
> > --- a/arch/x86/entry/common.c
> > +++ b/arch/x86/entry/common.c
> > @@ -35,6 +35,12 @@
> > #define CREATE_TRACE_POI
> * Josh Poimboeuf wrote:
>
> > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > index 7bc105f47d21..38ddc213a5e9 100644
> > > --- a/arch/x86/entry/common.c
> > > +++ b/arch/x86/entry/common.c
> > > @@ -35,
> > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > > index 7bc105f47d21..38ddc213a5e9 100644
> > > > --- a/arch/x86/entry/common.c
> > > > +++ b/arch/x86/entry/common.c
> > > > @@ -35,6 +35,12 @@
> >
> * Elena Reshetova wrote:
>
> > 2) Andy's tests, misc-tests: ./timing_test_64 10M sys_enosys
> > base: 10000000 loops in 1.62224s = 162.22 nsec / loop
> > random_offset (prandom_u32() every syscall): 10000000 loops in 1.64660s = 164.66 nsec / loop
> On Mon, 11 Feb 2019 13:49:27 -0800
> Kees Cook wrote:
>
> > On Mon, Feb 11, 2019 at 12:28 PM Steven Rostedt wrote:
> > >
> > > On Mon, 11 Feb 2019 15:27:25 -0500
> > > Steven Rostedt wrote:
> > >
> > > > On Mon, 11 Feb 2019 12:21:32 -0800
> > > > Kees Cook wrote:
> > > >
> > > > > > > Looks g
Something is really weird with my Intel mail: it only now delivered
all the messages to me in one go, and I was thinking that I wasn't getting any feedback...
> > If CONFIG_RANDOMIZE_KSTACK_OFFSET is selected,
> > the kernel stack offset is randomized upon each
> > entry to a system call after the fixed location of the pt_regs struct...
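As a hedged sketch of the mechanism being described (the entropy source, mask width, and placement are assumptions, not the exact patch):

/* Sketch: after pt_regs is saved at its fixed location, consume a random
 * amount of stack so the rest of the syscall's frame starts at an
 * unpredictable offset on every entry. */
unsigned long offset = prandom_u32() & 0xff;	/* assumed source and mask */
char *sp = __builtin_alloca(offset);		/* move the stack pointer down */
asm volatile("" : "=m" (*sp));			/* keep the alloca from being optimized away */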
> On Mon, Mar 18, 2019 at 01:15:44PM -0700, Andy Lutomirski wrote:
> > On Mon, Mar 18, 2019 at 2:41 AM Elena Reshetova
> > wrote:
> > >
> > > If CONFIG_RANDOMIZE_KSTACK_OFFSET is selected,
> > > the kernel stack offset is randomized upon each
> > > entry to a system call after the fixed location of the pt_regs struct...
Ingo, Andy,
I want to summarize here the data (including the performance numbers)
and reasoning for the in-stack randomization feature. I have organized
it in a simple set of Q&A below.
Q: Why do we need in-stack per-syscall randomization when we already have
all known attack vectors covered with
> On 01/18, Elena Reshetova wrote:
> >
> > For the signal_struct.sigcnt it might make a difference
> > in following places:
> > - put_signal_struct(): decrement in refcount_dec_and_test() only
> > provides RELEASE ordering and a control dependency on success
> > vs. the fully ordered atomic counterpart...
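The put path in question has the usual shape; a minimal sketch of the pattern (matching the discussion, if not the exact kernel source):

/* Sketch: the 1 -> 0 transition decides the free, so what matters is the
 * ordering that refcount_dec_and_test() gives that transition. */
static inline void put_signal_struct(struct signal_struct *sig)
{
	if (refcount_dec_and_test(&sig->sigcnt))	/* RELEASE + control dependency */
		free_signal_struct(sig);		/* must not be reordered before the decrement */
}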
> On Tue, Jan 22, 2019 at 09:11:42AM +0000, Reshetova, Elena wrote:
> > Will you be able to take this and the other scheduler
> > patch to whatever tree/path it should normally go to get eventually
> > integrated?
>
> I've queued them up.
Thank you!
Best Regards,
Elena.
> On Mon, Jan 28, 2019 at 03:29:10PM +0100, Andrea Parri wrote:
>
> > > diff --git a/arch/x86/include/asm/refcount.h
> > > b/arch/x86/include/asm/refcount.h
> > > index dbaed55..ab8f584 100644
> > > --- a/arch/x86/include/asm/refcount.h
> > > +++ b/arch/x86/include/asm/refcount.h
> > > @@ -67,1
> On Mon, Jan 28, 2019 at 02:27:26PM +0200, Elena Reshetova wrote:
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 3cd13a3..a1e87d2 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -1171,7 +1171,7 @@ static void perf_event_ctx_deactivate(struct
> pe
> On Mon, Jan 28, 2019 at 1:10 PM Elena Reshetova
> wrote:
> >
> > This adds an smp_acquire__after_ctrl_dep() barrier on successful
> > decrease of refcounter value from 1 to 0 for refcount_dec(sub)_and_test
> > variants and therefore gives stronger memory ordering guarantees than
> > prior versions of these functions...
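A minimal sketch of the generic variant after this change, assuming the structure of lib/refcount.c (saturation checks omitted):

/* Sketch: RELEASE on the decrement, upgraded to ACQUIRE on the winning
 * 1 -> 0 branch via smp_acquire__after_ctrl_dep(). */
static inline bool refcount_dec_and_test_sketch(refcount_t *r)
{
	int old = atomic_fetch_sub_release(1, &r->refs);

	if (old == 1) {
		smp_acquire__after_ctrl_dep();	/* upgrade the control dependency */
		return true;			/* caller may now free the object */
	}
	return false;
}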
> > So, you are saying that ACQUIRE does not guarantee that "po-later stores
> > on the same CPU and all propagated stores from other CPUs
> > must propagate to all other CPUs after the acquire operation"?
> > I was reading about acquire before posting this and trying to understand,
> > and this was...
> Hi Elena,
Hi!
>
> [...]
>
> > **Important note for maintainers:
> >
> > Some functions from the refcount_t API defined in lib/refcount.c
> > have different memory ordering guarantees than their atomic
> > counterparts.
> > The full comparison can be seen in
> > https://lkml.org/lkml/2017/11/15/57
> On Fri, Jan 18, 2019 at 02:27:25PM +0200, Elena Reshetova wrote:
> > I would really love finally to merge these old patches
> > (now rebased on top of linux-next/master as of last friday),
> > since as far as I remember none has raised any more concerns
> > on them.
> >
> > refcount_t has been no
> > Just to check, has this been tested with CONFIG_REFCOUNT_FULL and
> > something poking kcov?
> >
> > Given lib/refcount.c is instrumented, the refcount_*() calls will
> > recurse back into the kcov code. It looks like that's fine, given these
> > are only manipulated in setup/teardown paths, but
> On Tue, Jan 29, 2019 at 01:55:32PM +0000, Reshetova, Elena wrote:
> > > On Mon, Jan 28, 2019 at 02:27:26PM +0200, Elena Reshetova wrote:
> > > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > > index 3cd13a3..a1e87d2 100644
> > > >
> * Elena Reshetova [2019-01-16 13:20:27]:
>
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter reaches zero,
> > I suppose we can add smp_acquire__after_ctrl_dep() on the true branch.
> > Then it really does become rel_acq.
> >
> > A wee something like so (I couldn't find an arm64 refcount, even though
> > I have distinct memories of talk about it).
>
> In the end, arm and arm64 chose to use REFCOUNT_FULL
> > rdrand (calling every 8 syscalls): Simple syscall: 0.0795 microseconds
>
> You could try something like:
> u64 rand_val = cpu_var->syscall_rand
>
> while (unlikely(rand_val == 0))
> rand_val = rdrand64();
>
> stack_offset = rand_val & 0xff;
> rand_val
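The quoted mail is cut off; a hedged completion of the buffering idea, where syscall_rand and rdrand64() are assumed names rather than a real kernel API. One RDRAND refill then serves eight syscalls, matching the "calling every 8 syscalls" measurement above:

/* Sketch: keep a per-cpu pool of random bits and hand out 8 bits per
 * syscall, refilling from RDRAND only when the pool runs dry. */
u64 rand_val = this_cpu_read(syscall_rand);

while (unlikely(rand_val == 0))
	rand_val = rdrand64();		/* refill an exhausted pool */

stack_offset = rand_val & 0xff;		/* consume 8 bits for this entry */
rand_val >>= 8;				/* save the rest for later syscalls */
this_cpu_write(syscall_rand, rand_val);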
> * Reshetova, Elena wrote:
>
> > CONFIG_PAGE_TABLE_ISOLATION=n:
> >
> > base: Simple syscall: 0.0510 microseconds
> > get_random_bytes(4096 bytes buffer): Simple syscall: 0.0597 microseconds
> >
> > So, pure speed w
> * Reshetova, Elena wrote:
>
> > > * Reshetova, Elena wrote:
> > >
> > > > CONFIG_PAGE_TABLE_ISOLATION=n:
> > > >
> > > > base: Simple syscall: 0.0510 microseconds
> > I find it ridiculous that even with 4K blocked get_random_bytes(), which
> > gives us 32k bits, which with 5 bits should amortize the RNG call to
> > something like "once per 6553 calls", we still see 17% overhead? It's
> > either a measurement artifact, or something doesn't compute.
>
> If
> On Thu, Mar 28, 2019 at 9:29 AM Andy Lutomirski wrote:
> > Doesn’t this just leak some of the canary to user code through side
> > channels?
>
> Erf, yes, good point. Let's just use prandom and be done with it.
And here I have some numbers on this. Actually prandom turned out to be pretty
fast...
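As a hedged illustration of what the prandom option amounts to per syscall (a sketch, not the posted patch; the 5-bit mask is an assumption taken from the amortization discussion earlier in the thread):

/* Sketch: derive a small per-syscall stack offset from prandom. */
unsigned long offset = prandom_u32() & 0x1f;	/* 5 bits of offset */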
>> On Fri, May 3, 2019 at 9:40 AM David Laight wrote:
> >
> > That gives you 10 system calls per rdrand instruction
> > and mostly takes the latency out of line.
>
> Do we really want to do this? What is the attack scenario?
>
> With no VLA's, and the stackleak plugin, what's the upside? Are we
> From: Reshetova, Elena
> > Sent: 03 May 2019 17:17
> ...
> > rdrand (calling every 8 syscalls): Simple syscall: 0.0795 microseconds
>
> You could try something like:
> u64 rand_val = cpu_var->syscall_rand
>
> while (unlikely(rand_val == 0))
> * Andy Lutomirski wrote:
>
> > Or we decide that calling get_random_bytes() is okay with IRQs off and
> > this all gets a bit simpler.
>
> BTW., before we go down this path any further, is the plan to bind this
> feature to a real CPU-RNG capability, i.e. to the RDRAND instruction,
> which exc
> Hi,
>
> Sorry for the delay - Easter holidays + I was trying to arrange my brain
> around the proposed options.
> Here is what I think our options are with regards to the source of randomness:
>
> 1) rdtsc or variations based on it (David proposed some CRC-based variants for
> example)
> 2) prandom
> > On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
> >
> >> On Fri, Apr 26, 2019 at 11:33:09AM +, Reshetova, Elena wrote:
> >> Adding Eric and Herbert to continue discussion for the chacha part.
> >> So, as a short summary I am trying to find
> On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> > Adding Eric and Herbert to continue discussion for the chacha part.
> > So, as a short summary I am trying to find out a fast (fast enough to be
> > used per syscall
> > invocation) source o...
> On Fri, Apr 26, 2019 at 10:01:02AM -0400, Theodore Ts'o wrote:
> > On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> > > Adding Eric and Herbert to continue discussion for the chacha part.
> > > So, as a short summary I am trying to fi
>
> > On Apr 29, 2019, at 12:46 AM, Reshetova, Elena
> wrote:
> >
> >
> >>>> On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
> >>>
> >
> >> It seems to me
> >> that we should be using the “fast-erasure” construction...
> From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > I guess this is true, so I did a quick implementation now to estimate the
> > performance hits.
> > Here are the preliminary numbers (proper ones will take a bit more time):
> >
> >
From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > +unsigned char random_get_byte(void)
> > +{
> > +	struct rnd_buffer *buffer = &get_cpu_var(stack_rand_offset);
> > +	unsigned char res;
> > +
> > +
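The diff is cut off above; a hedged sketch of how such a helper plausibly continues, based on the surrounding discussion (a per-cpu buffer refilled from get_random_bytes(); struct rnd_buffer and its fields are assumptions):

/* Sketch: hand out one random byte at a time from a per-cpu buffer,
 * refilling the whole buffer in one get_random_bytes() call. */
unsigned char random_get_byte_sketch(void)
{
	struct rnd_buffer *buffer = &get_cpu_var(stack_rand_offset);
	unsigned char res;

	if (buffer->position >= sizeof(buffer->bytes)) {
		get_random_bytes(buffer->bytes, sizeof(buffer->bytes));
		buffer->position = 0;
	}
	res = buffer->bytes[buffer->position++];
	put_cpu_var(stack_rand_offset);
	return res;
}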
> * David Laight wrote:
>
> > It has already been measured - it is far too slow.
>
> I don't think proper buffering was tested, was it? Only a per-syscall
> RDRAND overhead, which I can imagine being not too good.
>
Well, I have some numbers, but I am struggling to understand one
aspect there.
>> The in-stack randomization is really a very small change, both code-wise and
>> logic-wise.
>> It does not affect real workloads and does not require enabling other
>> features (such as GCC plugins).
>> So, I think we should really reconsider its inclusion.
>I'd agree: the code is tiny and
On Thu, Nov 17, 2016 at 07:30:29AM -0500, David Windsor wrote:
> On Thu, Nov 17, 2016 at 3:34 AM, Peter Zijlstra wrote:
> > No, it's not a statistic. Also, I'm far from convinced stats_t is an
> > actually useful thing to have.
> >
>
> Regarding this, has there been any thought given as to how s
> Even if we now find all occurrences of atomic_t used as a refcounter
> (which we cannot actually guarantee in any case unless someone
> manually reads every line) and convert them to refcount_t, we still have
> the atomic_t type present, and new usage of it as a refcount will crawl in. It
> is just a mat...
>Provide refcount_t, an atomic_t like primitive built just for refcounting.
>It provides overflow and underflow checks as well as saturation semantics such
>that when it overflows, we'll never attempt to free it again, ever.
Peter, do you have the changes to the refcount_t interface compared to the
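For reference, the saturation behavior described above can be sketched roughly like this (an illustration of the idea, not the merged implementation; the saturation constant is an assumption):

/* Sketch: on overflow, pin the counter at a saturation value and warn,
 * so the object leaks rather than being freed through a wrapped count. */
static inline void refcount_inc_sketch(refcount_t *r)
{
	if (atomic_inc_return(&r->refs) < 0) {		/* wrapped into negative range */
		atomic_set(&r->refs, INT_MIN / 2);	/* assumed saturation value */
		WARN_ONCE(1, "refcount_t: saturated; leaking memory.\n");
	}
}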
> Could you please fix your mailer to not unwrap the emails?
I wish I understood what you mean by "unwrap"...?
On Fri, Nov 18, 2016 at 10:47:40AM +0000, Reshetova, Elena wrote:
> >Provide refcount_t, an atomic_t like primitive built just for
> >refcounting. It provid
On Thu, Nov 17, 2016 at 09:53:42AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 16, 2016 at 12:08:52PM -0800, Alexei Starovoitov wrote:
>
> > I prefer to avoid 'fixing' things that are not broken.
> > Note, prog->aux->refcnt already has explicit checks for overflow.
> > locked_vm is used for resourc
> On Fri, Nov 18, 2016 at 04:58:52PM +0000, Reshetova, Elena wrote:
> > > Could you please fix your mailer to not unwrap the emails?
> >
> > I wish I understood what you mean by "unwrap"...?
>
> Where I always have lines wrapped at 78 characters, but of
> On Fri, Nov 18, 2016 at 05:33:35PM +0000, Reshetova, Elena wrote:
> > On Thu, Nov 17, 2016 at 09:53:42AM +0100, Peter Zijlstra wrote:
> > > On Wed, Nov 16, 2016 at 12:08:52PM -0800, Alexei Starovoitov wrote:
> > >
> > > > I prefer to avoid 'fixing
> By the way, there are several sites where the use of
> atomic_t/atomic_wrap_t as a counter ventures beyond the standard (inc,
> dec, add, sub, read, set) operations we're planning on implementing
> for both refcount_t and stats_t.
Speaking of non-fitting patterns. This one is quite common in networking code for refcounters:
> On Mon, Nov 21, 2016 at 04:49:15PM +0100, Peter Zijlstra wrote:
> > > Speaking of non-fitting patterns. This one is quite common in
> > > networking code for refcounters:
> > >
> > > if (atomic_cmpxchg(&cur->refcnt, 1, 0) == 1) {}
> > > This is from net/netfilter/nfnetlink_acct.c, but there are s...
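For what it's worth, this particular pattern later got a direct equivalent in the API; a hedged sketch of the mapping (the teardown call is illustrative, not the nfnetlink_acct code):

/* Sketch: refcount_dec_if_one() drops the reference only if we are the
 * last holder, mirroring atomic_cmpxchg(&refcnt, 1, 0) == 1. */
if (refcount_dec_if_one(&cur->refcnt)) {
	/* won the 1 -> 0 race: no other holders remain */
	kfree(cur);	/* illustrative teardown */
}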
> Hi all,
>
> After merging the netfilter-next tree, today's linux-next build (x86_64
> allmodconfig) produced this warning:
>
> net/netfilter/nfnetlink_acct.c: In function 'nfnl_acct_try_del':
> net/netfilter/nfnetlink_acct.c:329:15: warning: unused variable 'refcount' [-Wunused-variable]
>
> Hello!
>
> On 3/18/2017 3:58 PM, Elena Reshetova wrote:
>
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> On Fri, 2017-03-17 at 09:02 -0400, Jeff Layton wrote:
> > On Fri, 2017-03-17 at 12:50 +, Trond Myklebust wrote:
> > > On Fri, 2017-03-17 at 14:10 +0200, Elena Reshetova wrote:
> > > > refcount_t type and corresponding API should be
> > > > used instead of atomic_t when the variable is used as
> On 03/20/2017 10:37 AM, Elena Reshetova wrote:
> [...]
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index ebaeaf2..389cb8d 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -928,7 +928,7 @@ static void sk_filter_release_rcu(struct rcu_head *rcu)
> >*/
> > s
> On Wed, Mar 15, 2017 at 01:10:38PM +0200, Elena Reshetova wrote:
> > This series, for the netfilter subsystem, replaces atomic_t reference
> > counters with the new refcount_t type and API (see
> > include/linux/refcount.h).
> > By doing this we prevent intentional or accidental
> > underflows
> On Tue, 2017-03-14 at 12:29 +0000, Reshetova, Elena wrote:
> > > Elena Reshetova writes:
> > >
> > > > refcount_t type and corresponding API should be
> > > > used instead of atomic_t when the variable is used as
> > > > a reference co
> From: Kees Cook
> Date: Thu, 16 Mar 2017 11:38:25 -0600
>
> > I am, of course, biased, but I think the evidence of actual
> > refcounting attacks outweighs the theoretical performance cost of
> > these changes.
>
> This is not theoretical at all.
>
> We count the nanoseconds that every packet
> On 03/16/2017 04:28 PM, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> >
> At 03/06/2017 05:43 PM, Reshetova, Elena wrote:
> >
> >> At 03/03/2017 04:55 PM, Elena Reshetova wrote:
> >>> Now that the new refcount_t type and API are finally merged
> >>> (see include/linux/refcount.h), the following
> >>> patches co
> Hello.
>
> On 03/06/2017 05:20 PM, Elena Reshetova wrote:
>
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> On 03/06/2017 03:21 PM, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
>
> The
> Hi Elena,
>
> On Mon, Mar 06, 2017 at 04:20:59PM +0200, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free situations.
> Hi Elena,
>
> On Mon, Mar 06, 2017 at 04:21:00PM +0200, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free situations.
> Hello,
>
> On Mon, Feb 20, 2017 at 12:19:01PM +0200, Elena Reshetova wrote:
> > @@ -134,10 +135,13 @@ static inline void put_css_set(struct css_set *cset)
> > * can see it. Similar to atomic_dec_and_lock(), but for an
> > * rwlock
> > */
> > - if (atomic_add_unless(&cset->refcou
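The quoted hunk is truncated, but the pattern it replaces is the dec-and-lock one; a hedged sketch of the refcount_t shape (lock and helper names are illustrative, not the exact cgroup patch):

/* Sketch: drop a reference without the lock unless we might be the last
 * holder; only the final 1 -> 0 transition takes the lock. */
if (refcount_dec_not_one(&cset->refcount))
	return;					/* not the last reference */

spin_lock_irqsave(&css_set_lock, flags);	/* illustrative lock */
put_css_set_locked(cset);			/* illustrative locked teardown */
spin_unlock_irqrestore(&css_set_lock, flags);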
> > Signed-off-by: Elena Reshetova
> > Signed-off-by: Hans Liljestrand
> > Signed-off-by: Kees Cook
> > Signed-off-by: David Windsor
> > ---
> > fs/xfs/xfs_refcount_item.c | 4 ++--
> > fs/xfs/xfs_refcount_item.h | 4 +++-
> > 2 files changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git
> On Mon, Mar 06, 2017 at 04:20:57PM +0200, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> On Mon, Mar 06, 2017 at 04:20:55PM +0200, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> On 03/06/2017 03:21 PM, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
>
> The
> On 03/06/2017 09:21 AM, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> >
> On Wed, Mar 8, 2017 at 7:50 AM, Christoph Hellwig wrote:
> >> -	ASSERT(atomic_read(&ticket->t_ref) > 0);
> >> -	atomic_inc(&ticket->t_ref);
> >> +	ASSERT(refcount_read(&ticket->t_ref) > 0);
> >> +	refcount_inc(&ticket->t_ref);
> >
> > With strict refcount semantics refcount_inc
> On Mon, Mar 06, 2017 at 04:21:09PM +0200, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> On 03/08/2017 08:49 AM, Reshetova, Elena wrote:
> >> On 03/06/2017 09:21 AM, Elena Reshetova wrote:
> >>> refcount_t type and corresponding API should be
> >>> used instead of atomic_t when the variable is used as
> >>> a reference counter.
> On 03/09/2017 08:18 AM, Reshetova, Elena wrote:
> >> On Mon, Mar 06, 2017 at 04:21:09PM +0200, Elena Reshetova wrote:
> >>> refcount_t type and corresponding API should be
> >>> used instead of atomic_t when the variable is used as
> >>> a re
> On Fri, Mar 03, 2017 at 10:55:09AM +0200, Elena Reshetova wrote:
> > Now that the new refcount_t type and API are finally merged
> > (see include/linux/refcount.h), the following
> > patches convert various refcounters in the btrfs filesystem from atomic_t
> > to refcount_t. By doing this we prevent
> Elena Reshetova writes:
>
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> > Signed-off-by: Elen
> On Mon, 20 Feb 2017 13:29:46 +0200 Elena Reshetova
> wrote:
>
> > Now that the new refcount_t type and API are finally merged
> > (see include/linux/refcount.h), the following
> > patches convert various refcounters in the ipc subsystem from atomic_t
> > to refcount_t. By doing this we prevent inten
> On Tue, Feb 21, 2017 at 05:34:54PM +0200, Elena Reshetova wrote:
> > Now that the new refcount_t type and API are finally merged
> > (see include/linux/refcount.h), the following
> > patches convert various refcounters in the tools subsystem from atomic_t
> > to refcount_t. By doing this we prevent i
> On Tue, Feb 21, 2017 at 05:49:03PM +0200, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows us to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> On Tue, Feb 21, 2017 at 05:04:08PM +0100, Peter Zijlstra wrote:
> > On Tue, Feb 21, 2017 at 05:49:02PM +0200, Elena Reshetova wrote:
> > > @@ -1684,10 +1684,11 @@ xfs_buftarg_isolate(
> > >* zero. If the value is already zero, we need to reclaim the
> > >* buffer, otherwise it gets anothe