The per-hashtable spinlock is used just for protecting the bucket's
hlist, and a per-bucket lock is just enough. This patch converts
the per-hashtable lock into a per-bucket spinlock, so that
contention can be decreased a lot.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 50 --
1 file changed, 32 insertions(+), 18 deletions(-)
diff --git a/
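The diff itself is truncated in this archive view; the following is only a rough sketch of the idea, with illustrative struct and field names rather than the actual patch: each bucket carries its own raw spinlock, so update/delete paths only contend within a single bucket.

/* Sketch only: illustrative names, not the actual kernel/bpf/hashtab.c diff. */
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/types.h>

struct bucket_sketch {
        struct hlist_head head;         /* elements hashed to this slot */
        raw_spinlock_t lock;            /* protects only this bucket's hlist */
};

struct htab_sketch {
        struct bucket_sketch *buckets;  /* n_buckets entries, one lock each */
        u32 n_buckets;                  /* assumed to be a power of two */
};

/* update/delete style path: lock only the bucket being touched */
static void htab_sketch_add(struct htab_sketch *htab, u32 hash,
                            struct hlist_node *node)
{
        struct bucket_sketch *b = &htab->buckets[hash & (htab->n_buckets - 1)];
        unsigned long flags;

        raw_spin_lock_irqsave(&b->lock, flags);
        hlist_add_head(node, &b->head);
        raw_spin_unlock_irqrestore(&b->lock, flags);
}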
mic counter
V2:
- fix checking on buckets size
V1:
- fix the wrong 3/3 patch
kernel/bpf/hashtab.c | 64 +++-
1 file changed, 38 insertions(+), 26 deletions(-)
Thanks,
Ming Lei
Preparing for removing the global per-hashtable lock, the
counter needs to be defined as atomic_t first.
Acked-by: Daniel Borkmann
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf
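The diff is truncated above; a minimal sketch of what an atomic element counter looks like, with illustrative field names (count, max_entries) rather than the actual ones:

/* Sketch only: illustrative names, not the literal patch. */
#include <linux/atomic.h>
#include <linux/types.h>

struct htab_count_sketch {
        atomic_t count;                 /* element count, no htab-wide lock needed */
        u32 max_entries;
};

/* returns false (and undoes the increment) when the map is already full */
static bool htab_sketch_account(struct htab_count_sketch *h)
{
        if (atomic_inc_return(&h->count) > h->max_entries) {
                atomic_dec(&h->count);
                return false;
        }
        return true;
}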
The spinlock is just used for protecting the per-bucket
hlist, so it isn't needed for selecting the bucket.
Acked-by: Daniel Borkmann
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kerne
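A small illustrative sketch of the idea, with assumed names: the bucket is computed before the critical section, and the lock covers nothing but the hlist operation itself.

/* Sketch only: bucket selection happens outside the locked region. */
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/types.h>

struct sel_htab_sketch {
        struct hlist_head *buckets;     /* n_buckets entries */
        u32 n_buckets;                  /* assumed to be a power of two */
        raw_spinlock_t lock;
};

static struct hlist_head *select_bucket_sketch(struct sel_htab_sketch *h, u32 hash)
{
        return &h->buckets[hash & (h->n_buckets - 1)];
}

static void sel_htab_sketch_add(struct sel_htab_sketch *h, u32 hash,
                                struct hlist_node *node)
{
        struct hlist_head *head = select_bucket_sketch(h, hash); /* no lock held */
        unsigned long flags;

        raw_spin_lock_irqsave(&h->lock, flags);
        hlist_add_head(node, head);     /* only the list update is locked */
        raw_spin_unlock_irqrestore(&h->lock, flags);
}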
Hi,
This patchset tries to optimize the eBPF hash map, and follows
this idea:
Both htab_map_update_elem() and htab_map_delete_elem()
can be called from an eBPF program, and they may be in the kernel
hot path, so it isn't efficient to use a per-hashtable lock
in these two helpers,
The spinlock is just used for protecting the per-bucket
hlist, so it isn't needed for selecting the bucket.
Acked-by: Daniel Borkmann
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kerne
Preparing for removing the global per-hashtable lock, the
counter needs to be defined as atomic_t first.
Acked-by: Daniel Borkmann
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf
The per-hashtable spinlock is used just for protecting the bucket's
hlist, and a per-bucket lock is just enough. This patch converts
the per-hashtable lock into a per-bucket spinlock, so that
contention can be decreased a lot.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 46 ++
1 file changed, 30 insertions(+), 16 deletions(-)
diff --git a/
On Mon, Dec 28, 2015 at 5:13 PM, Daniel Borkmann wrote:
> On 12/26/2015 10:31 AM, Ming Lei wrote:
>>
>> From: Ming Lei
>>
>> Both htab_map_update_elem() and htab_map_delete_elem() can be
>> called from eBPF program, and they may be in kernel hot path,
>
The spinlock is just used for protecting the per-bucket
hlist, so it isn't needed for selecting the bucket.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 2615388..d8
Hi,
This patchset tries to optimize the eBPF hash map, and follows
this idea:
Both htab_map_update_elem() and htab_map_delete_elem()
can be called from an eBPF program, and they may be in the kernel
hot path, so it isn't efficient to use a per-hashtable lock
in these two helpers,
From: Ming Lei
Both htab_map_update_elem() and htab_map_delete_elem() can be
called from an eBPF program, and they may be in the kernel hot path,
so it isn't efficient to use a per-hashtable lock in these two
helpers.
The per-hashtable spinlock is used just for protecting the bucket's
hlist, and a per-bucket lock is just enough.
Preparing for removing the global per-hashtable lock, the
counter needs to be defined as atomic_t first.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 34777b3
On Fri, Dec 18, 2015 at 2:20 PM, Alexei Starovoitov
wrote:
> On Wed, Dec 16, 2015 at 02:58:08PM +0800, Ming Lei wrote:
>> On Wed, Dec 16, 2015 at 1:01 PM, Yang Shi wrote:
>>
>> >
>> > I recalled Steven confirmed raw_spin_lock has the lockdep benefit too i
On Wed, Dec 16, 2015 at 7:10 AM, Alexei Starovoitov
wrote:
> On Tue, Dec 15, 2015 at 07:21:03PM +0800, Ming Lei wrote:
>> kmalloc() is often a bit time-consuming, also
>> one atomic counter has to be used to track the total
>> allocated elements, which is also not good.
about the lockdep benefit, :-(
But for this lock, I think lockdep isn't that important, because it is
the innermost lock, and it is used just for protecting the bucket list;
nothing else needs to be covered.
Thanks,
Ming Lei
Hi Alexei,
On Wed, Dec 16, 2015 at 6:51 AM, Alexei Starovoitov
wrote:
> On Tue, Dec 15, 2015 at 07:21:02PM +0800, Ming Lei wrote:
>> Both htab_map_update_elem() and htab_map_delete_elem() can be
>> called from eBPF program, and they may be in kernel hot path,
>> so it isn
cost can be decreased.
From my test, fio throughput improves by at least 10% in a block
I/O test when tools/biolatency of bcc (iovisor) is running.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 204 +--
1 file changed, 167 insertions(+),
The per-hashtable spinlock is used just for protecting the bucket's
hlist, and a per-bucket lock should be enough. This patch converts
the per-hashtable lock into a per-bucket bit spinlock, so that
contention can be decreased a lot and no extra memory is
consumed for these locks.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 38 ++
1 file c
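The diffstat is cut off above; as an illustration of the bit-spinlock idea (assumed names, not the actual patch), the lock can live in bit 0 of the bucket's head word, much like the existing list_bl API, so no separate lock field is needed.

/* Sketch only: the lock is bit 0 of the bucket's head word, so no extra
 * memory per bucket is needed (compare the existing <linux/list_bl.h>).
 */
#include <linux/bit_spinlock.h>

struct bl_bucket_sketch {
        unsigned long head;     /* bit 0 = lock, upper bits = first-node pointer */
};

static void bl_bucket_sketch_lock(struct bl_bucket_sketch *b)
{
        bit_spin_lock(0, &b->head);
}

static void bl_bucket_sketch_unlock(struct bl_bucket_sketch *b)
{
        bit_spin_unlock(0, &b->head);
}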
Preparing for removing the global per-hashtable lock, the
counter needs to be defined as atomic_t first.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 34777b3
Hi,
This patchset tries to optimize the eBPF hash map, and follows
these ideas:
1) Both htab_map_update_elem() and htab_map_delete_elem()
can be called from an eBPF program, and they may be in the kernel
hot path, so it isn't efficient to use a per-hashtable lock
in these two helpers; this patch converts the per-hashtable
lock into a per-bucket bit spinlock.
ose.
A bit spinlock isn't as efficient as a spinlock, but the contention
shouldn't be very high when operating on the per-bucket hlist, so a
bit spinlock should be OK.
Signed-off-by: Ming Lei
---
include/linux/rculist.h | 55 +
1 file changed, 55 insert
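The new rculist.h helpers themselves are truncated in this view; as an illustration of the same idea using the existing list_bl/rculist_bl API (the patch's own helper names are not shown here), an add can be published with RCU semantics while the head is held by its bit spinlock.

/* Sketch only: not the patch's rculist.h helpers; this reuses the existing
 * hlist_bl API to show an RCU-published add under the head's bit spinlock.
 */
#include <linux/rculist_bl.h>

static void sketch_hlist_bl_add_head_locked(struct hlist_bl_node *n,
                                            struct hlist_bl_head *h)
{
        hlist_bl_lock(h);               /* bit_spin_lock(0, ...) internally */
        hlist_bl_add_head_rcu(n, h);    /* visible to lockless RCU readers */
        hlist_bl_unlock(h);
}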
The spinlock is just used for protecting the per-bucket
hlist, so it isn't needed for selecting the bucket.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 2615388..d8
The lifetime of the hash fields and the lifetime of the kfree_rcu
fields can't overlap, so re-organize them for better
readability.
Also, one sizeof(void *) should be saved with this change,
and the cache footprint gets improved too.
Signed-off-by: Ming Lei
---
kernel/bpf/hashtab.c
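The element layout in the patch is truncated above; one illustrative way to exploit the non-overlapping lifetimes (assumed struct and field names, and the exact saving in the real patch may differ) is to let the hash linkage and the kfree_rcu bookkeeping share storage.

/* Sketch only: illustrative layout, not the literal patch. The element is
 * either linked in a bucket or queued for kfree_rcu(), never both, so the
 * two pieces of bookkeeping can share storage.
 */
#include <linux/types.h>
#include <linux/rcupdate.h>

struct htab_elem_sketch {
        union {
                struct hlist_node hash_node;    /* while live on a bucket hlist */
                struct rcu_head rcu;            /* while waiting for kfree_rcu() */
        };
        u32 hash;
        char key[];                             /* key bytes follow */
};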
Hi Guys,
When booting from UEFI/ACPI, sometimes there is a crash[1]
in the rx path, and sometimes no rx packets come in at all.
Firmware version: 2.02.10
Thanks,
[1], crash log
Call trace:
skbuff: skb_over_panic: text:ffc00048094c len:85 put:60
head: data: (null) ta
On Wed, May 20, 2015 at 4:40 PM, Oliver Neukum wrote:
> On Wed, 2015-05-20 at 08:29 +0200, Takashi Iwai wrote:
>> The data is cached in RAM. More specifically, the former loaded
>> firmware files are reloaded and saved at suspend for each device
>> object. See fw_pm_notify() in firmware_class.c.