On Thu, 2016-09-29 at 07:49 -0700, Eric Dumazet wrote:
> On Thu, Sep 29, 2016 at 7:34 AM, Paolo Abeni wrote:
> > On Thu, 2016-09-29 at 07:13 -0700, Eric Dumazet wrote:
> >> On Thu, 2016-09-29 at 16:01 +0200, Paolo Abeni wrote:
> >>
> >> > When we reach __sk_mem_reduce_allocated() we are sure we can free the
> >> > specified amount of memory, so we only need to ensure consistent
> >> > sk_prot->memory_allocated updates.
On Thu, Sep 29, 2016 at 7:34 AM, Paolo Abeni wrote:
> On Thu, 2016-09-29 at 07:13 -0700, Eric Dumazet wrote:
>> On Thu, 2016-09-29 at 16:01 +0200, Paolo Abeni wrote:
>>
>> > When we reach __sk_mem_reduce_allocated() we are sure we can free the
>> > specified amount of memory, so we only need to ensure consistent
>> > sk_prot->memory_allocated updates.
On Thu, 2016-09-29 at 07:13 -0700, Eric Dumazet wrote:
> On Thu, 2016-09-29 at 16:01 +0200, Paolo Abeni wrote:
>
> > When we reach __sk_mem_reduce_allocated() we are sure we can free the
> > specified amount of memory, so we only need to ensure consistent
> > sk_prot->memory_allocated updates. The current atomic operation suffices
> > for this.
On Thu, 2016-09-29 at 16:01 +0200, Paolo Abeni wrote:
> When we reach __sk_mem_reduce_allocated() we are sure we can free the
> specified amount of memory, so we only need to ensure consistent
> sk_prot->memory_allocated updates. The current atomic operation suffices
> for this.
Then why are you using [...]
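
For readers following along: the point under discussion is that
__sk_mem_reduce_allocated() only has to keep the protocol-wide counter
consistent, which a single atomic subtraction already guarantees. Below is a
minimal sketch of that shape; sk_memory_allocated_sub() and the other
accessors are real kernel helpers, but the body is an approximation inferred
from this thread, not the exact patch.

	static void __sk_mem_reduce_allocated(struct sock *sk, int amount)
	{
		/* one atomic op keeps sk_prot->memory_allocated consistent;
		 * the caller already knows 'amount' pages can be freed
		 */
		sk_memory_allocated_sub(sk, amount);

		/* leave the pressure state once usage drops below the low limit */
		if (sk_under_memory_pressure(sk) &&
		    sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0))
			sk_leave_memory_pressure(sk);
	}
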
On Thu, 2016-09-29 at 06:24 -0700, Eric Dumazet wrote:
> On Thu, 2016-09-29 at 11:31 +0200, Paolo Abeni wrote:
> > On Wed, 2016-09-28 at 18:42 -0700, Eric Dumazet wrote:
> > > On Wed, 2016-09-28 at 12:52 +0200, Paolo Abeni wrote:
> > >
> > > > +static void udp_rmem_release(struct sock *sk, int partial)
On Thu, 2016-09-29 at 11:31 +0200, Paolo Abeni wrote:
> On Wed, 2016-09-28 at 18:42 -0700, Eric Dumazet wrote:
> > On Wed, 2016-09-28 at 12:52 +0200, Paolo Abeni wrote:
> >
> > > +static void udp_rmem_release(struct sock *sk, int partial)
> > > +{
> > > + struct udp_sock *up = udp_sk(sk);
> > > +
On Wed, 2016-09-28 at 18:42 -0700, Eric Dumazet wrote:
> On Wed, 2016-09-28 at 12:52 +0200, Paolo Abeni wrote:
>
> > +static void udp_rmem_release(struct sock *sk, int partial)
> > +{
> > + struct udp_sock *up = udp_sk(sk);
> > + int fwd, amt;
> > +
> > + if (partial && !udp_under_memory_pressure(sk))
Hi Eric,
On Wed, 2016-09-28 at 18:42 -0700, Eric Dumazet wrote:
> On Wed, 2016-09-28 at 12:52 +0200, Paolo Abeni wrote:
>
> > +static void udp_rmem_release(struct sock *sk, int partial)
> > +{
> > + struct udp_sock *up = udp_sk(sk);
> > + int fwd, amt;
> > +
> > + if (partial && !udp_under_memory_pressure(sk))
On Wed, 2016-09-28 at 12:52 +0200, Paolo Abeni wrote:
> +static void udp_rmem_release(struct sock *sk, int partial)
> +{
> + struct udp_sock *up = udp_sk(sk);
> + int fwd, amt;
> +
> + if (partial && !udp_under_memory_pressure(sk))
> + return;
> +
> + /* we can have concurrent [...]
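
Piecing the quoted fragments together with the changelog below, the function
under review plausibly continues as in this sketch. Only the lines quoted
above are verbatim; the reclaim arithmetic and the use of 'can_reclaim' as a
one-shot gate are assumptions based on the description of 'mem_alloced' and
'can_reclaim', not the posted patch itself.

	static void udp_rmem_release(struct sock *sk, int partial)
	{
		struct udp_sock *up = udp_sk(sk);
		int fwd, amt;

		/* a partial (per-skb) release reclaims only under pressure */
		if (partial && !udp_under_memory_pressure(sk))
			return;

		/* we can have concurrent releases; 'can_reclaim' acts as a
		 * one-permit try-lock so only one caller reclaims at a time
		 */
		if (atomic_dec_if_positive(&up->can_reclaim) < 0)
			return;

		/* forward allocation per the changelog: accounted memory
		 * minus what is still sitting in the receive queue
		 */
		fwd = atomic_read(&up->mem_alloced) -
		      atomic_read(&sk->sk_rmem_alloc);
		if (fwd < SK_MEM_QUANTUM + partial)
			goto out;

		/* release whole quanta; with 'partial' set, always keep
		 * some forward slack for the fast path
		 */
		amt = (fwd - partial) & ~(SK_MEM_QUANTUM - 1);
		atomic_sub(amt, &up->mem_alloced);
		__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
	out:
		atomic_inc(&up->can_reclaim);
	}
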
Avoid using the common memory accounting functions, since
the logic here is quite different.
To account for forward allocation, a couple of new atomic_t
members are added to udp_sock: 'mem_alloced' and 'can_reclaim'.
The current forward allocation is estimated as 'mem_alloced'
minus 'sk_rmem_alloc'.
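
A minimal sketch of that estimate, assuming a hypothetical helper name; the
arithmetic is exactly the changelog's 'mem_alloced' minus 'sk_rmem_alloc':

	/* hypothetical helper: the socket's current forward allocation is
	 * everything accounted in 'mem_alloced' that is not currently held
	 * by skbs in the receive queue (sk_rmem_alloc)
	 */
	static int udp_forward_alloc(struct sock *sk)
	{
		struct udp_sock *up = udp_sk(sk);

		return atomic_read(&up->mem_alloced) -
		       atomic_read(&sk->sk_rmem_alloc);
	}

Deriving the forward allocation from two atomics, instead of maintaining
sk_forward_alloc under the socket lock, is what would let the receive fast
path stay lockless: both counters can be read and updated atomically.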