On Tue, May 01, 2018 at 09:39:02PM -0400, Steven Rostedt wrote:
> On Wed, 2 May 2018 09:33:39 +1000
> "Tobin C. Harding" wrote:
>
> > diff --git a/drivers/char/random.c b/drivers/char/random.c
> > index 031d18b31e0f..3a66507ea60b 100644
> > --- a/drivers
ly suggested by Kees).
- Add command line option to use cryptographically insecure hashing.
If debug_early_boot is enabled use hash_long() instead of siphash
(as requested by Steve, and solves original problem for Anna-Maria).
thanks,
Tobin.
Tobin C. Harding (4):
random: Fix whitespace p
There are a couple of whitespace issues around the function
get_random_bytes_arch(). In preparation for patching this function
let's clean them up.
Signed-off-by: Tobin C. Harding
Acked-by: Theodore Ts'o
---
drivers/char/random.c | 3 +--
1 file changed, 1 insertion(+), 2 deletion
ned-off-by: Tobin C. Harding
---
Documentation/admin-guide/kernel-parameters.txt | 8
lib/vsprintf.c | 18 ++
2 files changed, 26 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt
b/Documentation/admin-gu
Currently we must wait for enough entropy to become available before
hashed pointers can be printed. We can remove this wait by using the
hw RNG if available.
Use hw RNG to get keying material by default if available.
Suggested-by: Kees Cook
Signed-off-by: Tobin C. Harding
---
lib/vsprintf.c
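The fallback idea above (take whatever the hw RNG can give, top up the keying material from elsewhere) can be sketched in plain userspace C. This is an illustrative sketch only, not the kernel code: `arch_random_bytes()`, `get_ptr_key()`, and the fill patterns are hypothetical stand-ins for `get_random_bytes_arch()` returning a byte count and for the entropy-pool fallback.

```c
#include <stddef.h>
#include <string.h>

/* Stand-in for a hw RNG that may only partially succeed; returns the
 * number of bytes it actually produced, like the patched
 * get_random_bytes_arch() described above. */
static size_t arch_random_bytes(unsigned char *buf, size_t nbytes,
				size_t hw_avail)
{
	size_t n = nbytes < hw_avail ? nbytes : hw_avail;

	memset(buf, 0xAB, n);	/* pretend these bytes came from the hw RNG */
	return n;
}

/* Fill the hashing key, topping up from a fallback source when the
 * hw RNG comes up short. */
static void get_ptr_key(unsigned char *key, size_t keylen, size_t hw_avail)
{
	size_t got = arch_random_bytes(key, keylen, hw_avail);

	if (got < keylen)	/* hw RNG short: fall back for the rest */
		memset(key + got, 0xCD, keylen - got);
}
```

The point of the sketch is the control flow: the caller no longer has to wait for entropy when the hw RNG can supply the whole key, and only the shortfall goes through the slower path.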
bytes_arch().
Only get random bytes from the hw RNG, make function return the number
of bytes retrieved from the hw RNG.
Signed-off-by: Tobin C. Harding
Acked-by: Theodore Ts'o
---
drivers/char/random.c | 16 +---
include/linux/random.h | 2 +-
2 files changed, 10 insertions(
On Wed, May 02, 2018 at 02:56:45PM -0700, Andrew Morton wrote:
> On Wed, 2 May 2018 09:33:40 +1000 "Tobin C. Harding" wrote:
>
> > Currently if an attempt is made to print a pointer before there is
> > enough entropy then '(ptrval)' is printed.
On Wed, May 02, 2018 at 09:57:57PM -0700, Kees Cook wrote:
> On Wed, May 2, 2018 at 3:50 PM, Tobin C. Harding wrote:
> > Currently printing [hashed] pointers requires either a hw RNG or enough
> > entropy to be available. Early in the boot sequence these conditions
> > may
This code was a pleasure to read, super clean.
On Wed, May 02, 2018 at 11:59:31PM -0400, Pavel Tatashin wrote:
> When system is rebooted, halted or kexeced device_shutdown() is
> called.
>
> This function shuts down every single device by calling either:
> dev->bus->shutdown(dev)
> de
On Wed, Jun 06, 2018 at 03:08:25PM +0200, Anna-Maria Gleixner wrote:
> On Tue, 5 Jun 2018, Anna-Maria Gleixner wrote:
>
> > On Thu, 31 May 2018, Steven Rostedt wrote:
> >
> > > On Mon, 28 May 2018 11:46:38 +1000
> > > "Tobin C. Harding" wrot
On Wed, Jun 06, 2018 at 03:02:20PM +0200, Thomas Gleixner wrote:
> On Mon, 28 May 2018, Tobin C. Harding wrote:
>
> > Currently printing pointers early in the boot sequence can result in a
> > dummy string '(ptrval)' being printed. While resolving this
>
On Mon, May 28, 2018 at 09:59:15AM -0400, Theodore Y. Ts'o wrote:
> On Mon, May 28, 2018 at 11:46:38AM +1000, Tobin C. Harding wrote:
> >
> > During the versions of this set I have been totally confused about which
> > patches go through which tree. This versio
Currently we must wait for enough entropy to become available before
hashed pointers can be printed. We can remove this wait by using the
hw RNG if available.
Use hw RNG to get keying material.
Suggested-by: Kees Cook
Signed-off-by: Tobin C. Harding
---
lib/vsprintf.c | 19
bytes_arch().
Only get random bytes from the hw RNG, make function return the number
of bytes retrieved from the hw RNG.
Signed-off-by: Tobin C. Harding
Acked-by: Theodore Ts'o
Signed-off-by: Tobin C. Harding
---
drivers/char/random.c | 16 +---
include/linux/random.h | 2 +-
d command line option to use cryptographically insecure hashing.
If debug_early_boot is enabled use hash_long() instead of siphash
(as requested by Steve, and solves original problem for Anna-Maria).
Tobin C. Harding (3):
random: Fix whitespace pre random-bytes work
random: Return n
There are a couple of whitespace issues around the function
get_random_bytes_arch(). In preparation for patching this function
let's clean them up.
Signed-off-by: Tobin C. Harding
Acked-by: Theodore Ts'o
---
drivers/char/random.c | 3 +--
1 file changed, 1 insertion(+), 2 deletion
ent to use cryptographically secure hashing during debugging.
This enables debugging while keeping development/production kernel
behaviour the same.
If new command line option debug_boot_weak_hash is enabled use
cryptographically insecure hashing and hash pointer value immediately.
Signed-off-by: Tobin
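The weak-hash option described above can be sketched in userspace C. This is a sketch under stated assumptions, not the vsprintf code: `weak_hash_ptr()` and `hash_ptr_early()` are hypothetical names, and the zero return stands in for the keyed-siphash path; the multiplicative constant is the 64-bit golden ratio used by the kernel's hash_long().

```c
#include <stdbool.h>
#include <stdint.h>

#define GOLDEN_RATIO_64 0x61C8864680B583EBull

static bool debug_boot_weak_hash;	/* set from the command line */

/* hash_long()-style multiplicative hash: cheap, needs no keying
 * material, cryptographically insecure. */
static uint64_t weak_hash_ptr(const void *ptr)
{
	return (uint64_t)(uintptr_t)ptr * GOLDEN_RATIO_64;
}

/* If the debug option is set, hash the pointer value immediately;
 * otherwise the real code would use keyed siphash (elided here, the
 * zero return is a placeholder for this sketch). */
static uint64_t hash_ptr_early(const void *ptr)
{
	if (debug_boot_weak_hash)
		return weak_hash_ptr(ptr);
	return 0;
}
```

This keeps development and production kernels on the same code path by default; only an explicit boot parameter switches to the insecure hash.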
On Tue, May 15, 2018 at 09:47:44AM -0400, Steven Rostedt wrote:
> On Tue, 15 May 2018 13:06:26 +1000
> "Tobin C. Harding" wrote:
>
> > Currently we must wait for enough entropy to become available before
> > hashed pointers can be printed. We can remove this
On Tue, May 15, 2018 at 09:37:05AM -0400, Steven Rostedt wrote:
> On Tue, 15 May 2018 13:06:25 +1000
> "Tobin C. Harding" wrote:
>
> > Currently the function get_random_bytes_arch() has return value 'void'.
> > If the hw RNG fails we currently fall b
On Tue, May 15, 2018 at 05:35:46PM -0400, Steven Rostedt wrote:
> On Wed, 16 May 2018 07:17:06 +1000
> "Tobin C. Harding" wrote:
>
> > > > -void get_random_bytes_arch(void *buf, int nbytes)
> > > > +int __must_check get_random_bytes_arch(void *buf, int
Currently we must wait for enough entropy to become available before
hashed pointers can be printed. We can remove this wait by using the
hw RNG if available.
Use hw RNG to get keying material.
Suggested-by: Kees Cook
Signed-off-by: Tobin C. Harding
Reviewed-by: Steven Rostedt (VMware
).
- Add command line option to use cryptographically insecure hashing.
If debug_early_boot is enabled use hash_long() instead of siphash
(as requested by Steve, and solves original problem for Anna-Maria).
- Added Acked-by tag from Ted (patch 1 and 2)
*** BLURB HERE ***
Tobin C. Harding (
bytes_arch().
Only get random bytes from the hw RNG, make function return the number
of bytes retrieved from the hw RNG.
Signed-off-by: Tobin C. Harding
Acked-by: Theodore Ts'o
Reviewed-by: Steven Rostedt (VMware)
---
drivers/char/random.c | 16 +---
include/linux/random.h |
There are a couple of whitespace issues around the function
get_random_bytes_arch(). In preparation for patching this function
let's clean them up.
Signed-off-by: Tobin C. Harding
Acked-by: Theodore Ts'o
---
drivers/char/random.c | 3 +--
1 file changed, 1 insertion(+), 2 deletion
On Wed, May 16, 2018 at 09:12:41AM -0400, Steven Rostedt wrote:
>
> Linus,
>
> The memory barrier usage in updating the random ptr hash for %p in
> vsprintf is incorrect. Instead of adding the read memory barrier
> into vsprintf() which will cause a slight degradation to a commonly
> used functio
On Tue, May 15, 2018 at 08:29:47AM -0700, Randy Dunlap wrote:
> On 05/14/2018 09:38 PM, Tobin C. Harding wrote:
>
> > Documentation/admin-guide/kernel-parameters.txt | 8
> > lib/vsprintf.c | 18 ++
> > 2 file
On Wed, May 16, 2018 at 11:13:48AM -0400, Steven Rostedt wrote:
>
> I think the series looks good, although if Linus takes my last patch,
> it may conflict badly with the third patch. I'll have to look into that.
I applied your PR patch on top of mainline (imitating Linus pulling) and
yes patch 3
5. build
6. disassemble object file `objdump -dr mm/slub.o > after.s`
7. diff before.s after.s
Use slab_list list_head instead of the lru list_head for maintaining
lists of slabs.
Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
mm/slub.c | 40 --
atthew).
- Add extra explanation to the commit logs explaining why these changes
are safe to make (suggested by Roman).
- Remove stale comment (thanks Willy).
thanks,
Tobin.
Tobin C. Harding (5):
slub: Add comments to endif pre-processor macros
slub: Use slab_list instead of lru
s
We now use the slab_list list_head instead of the lru list_head. This
comment has become stale.
Remove stale comment from page struct slab_list list_head.
Signed-off-by: Tobin C. Harding
---
include/linux/mm_types.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include
b.o > after.s
7. diff before.s after.s
Use slab_list list_head instead of the lru list_head for maintaining
lists of slabs.
Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
mm/slab.c | 49 +
1 file changed, 25 insertions(+), 24 dele
ead for maintaining
lists of slabs.
Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
mm/slob.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/slob.c b/mm/slob.c
index 307c2c9feb44..ee68ff2a2833 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -1
#ifdef CONFIG_FOO
...
#endif /* CONFIG_FOO */
Add comments to endif pre-processor macros if ifdef/endif pair is not
immediately apparent.
Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
mm/slub.c | 20 ++--
1 file changed, 10 insertions(+), 10
On Wed, Mar 13, 2019 at 07:05:02PM +, Christopher Lameter wrote:
> On Wed, 13 Mar 2019, Tobin C. Harding wrote:
>
> > @@ -297,7 +297,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int
> > align, int node)
> > continue;
> >
>
ead for maintaining
lists of slabs.
Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
mm/slob.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/slob.c b/mm/slob.c
index 39ad9217ffea..94486c32e0ff 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -1
cognitive load required to read the code.
Use list_head functions to interact with lists thereby maintaining the
abstraction provided by the list_head structure.
Signed-off-by: Tobin C. Harding
---
I verified the comment pointing to Knuth, the page number may be out of
date but with this comment I was
#ifdef CONFIG_FOO
...
#endif /* CONFIG_FOO */
Add comments to endif pre-processor macros if ifdef/endif pair is not
immediately apparent.
Reviewed-by: Roman Gushchin
Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
mm/slub.c | 20 ++--
1 file changed
efore and after the patch set is
applied (suggested by Matthew).
- Add extra explanation to the commit logs explaining why these changes
are safe to make (suggested by Roman).
- Remove stale comment (thanks Willy).
thanks,
Tobin.
Tobin C. Harding (7):
list: Add function list_rotate_to_fr
lob allocator.
Add function list_rotate_to_front() to rotate a list until the specified
item is at the front of the list.
Signed-off-by: Tobin C. Harding
---
include/linux/list.h | 18 ++
1 file changed, 18 insertions(+)
diff --git a/include/linux/list.h b/include/linux/list.h
ind
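The rotation described above can be shown in isolation with a minimal userspace re-implementation of the kernel's `struct list_head`. The helper names mirror the kernel API, but this is a sketch, not the kernel source: deleting the list head from its current position and re-inserting it just before the target entry rotates the list so that entry is at the front.

```c
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_del_entry(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* Insert @e immediately before @head. */
static void list_add_tail_(struct list_head *e, struct list_head *head)
{
	e->prev = head->prev;
	e->next = head;
	head->prev->next = e;
	head->prev = e;
}

/* Rotate so that @list becomes the first entry: move the sentinel
 * @head to sit immediately before @list. */
static void list_rotate_to_front(struct list_head *list,
				 struct list_head *head)
{
	list_del_entry(head);
	list_add_tail_(head, list);
}
```

Because only the sentinel moves, the rotation is O(1) regardless of where the target entry sits in the list.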
of
slabs.
Reviewed-by: Roman Gushchin
Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
mm/slub.c | 40
1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index b282e22885cd..d692b5e0163d 100644
--- a
of
slabs.
Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
mm/slab.c | 49 +
1 file changed, 25 insertions(+), 24 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 28652e4218e0..09cc64ef9613 100644
--- a/mm/slab.c
+++ b/mm/sla
We now use the slab_list list_head instead of the lru list_head. This
comment has become stale.
Remove stale comment from page struct slab_list list_head.
Reviewed-by: Roman Gushchin
Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
include/linux/mm_types.h | 2 +-
1 file
On Thu, Mar 14, 2019 at 06:52:25PM +, Roman Gushchin wrote:
> On Thu, Mar 14, 2019 at 04:31:31PM +1100, Tobin C. Harding wrote:
> > Currently we use the page->lru list for maintaining lists of slabs. We
> > have a list_head in the page structure (slab_list) that can be
On Fri, Mar 15, 2019 at 07:38:09AM +1100, Tobin C. Harding wrote:
> On Thu, Mar 14, 2019 at 06:52:25PM +, Roman Gushchin wrote:
> > On Thu, Mar 14, 2019 at 04:31:31PM +1100, Tobin C. Harding wrote:
> > > Currently we use the page->lru list for maintaining lists of sl
On Thu, Mar 14, 2019 at 06:52:25PM +, Roman Gushchin wrote:
> On Thu, Mar 14, 2019 at 04:31:31PM +1100, Tobin C. Harding wrote:
> > Currently we use the page->lru list for maintaining lists of slabs. We
> > have a list_head in the page structure (slab_list) that can be
On Mon, Feb 04, 2019 at 03:04:10PM -0800, Andrew Morton wrote:
> On Mon, 4 Feb 2019 11:57:10 +1100 "Tobin C. Harding"
> wrote:
>
> > Here is v2 of the comments fixes [to single SLUB header file]
>
> Thanks. I think I'll put these into a single patch.
Awesome, thank you.
bvious typos (lean towards not making changes so that
we don't introduce errors).
Edited as text files (obviously) and formatted as HTML to verify
rendering, no other formats verified.
thanks,
Tobin.
Tobin C. Harding (1):
docs: powerpc: Convert docs to RST format.
Documentation/index.
ited as text files (obviously) and formatted as HTML to verify
rendering, no other formats verified.
Convert docs to RST format, adding license.
Signed-off-by: Tobin C. Harding
---
Documentation/index.rst | 1 +
Documentation/powerpc/DAWR-POWER9.rst | 60
Doc
rnel test robot
Signed-off-by: Tobin C. Harding
---
mm/slob.c | 50 ++
1 file changed, 30 insertions(+), 20 deletions(-)
diff --git a/mm/slob.c b/mm/slob.c
index 21af3fdb457a..c543da10df45 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -213,10 +213,18 @@ st
r, as mentioned,
this method of testing did _not_ reproduce the 0day crash so if there
are better suggestions on how I should test these I'm happy to do so.
thanks,
Tobin.
Tobin C. Harding (1):
slob: Only use list functions when safe to do so
mm/slob.c | 50 ++---
On Mon, Apr 01, 2019 at 09:41:28PM -0700, Andrew Morton wrote:
> On Tue, 2 Apr 2019 14:29:57 +1100 "Tobin C. Harding"
> wrote:
>
> > Currently we call (indirectly) list_del() then we manually try to combat
> > the fact that the list may be in an undefined state
after the patch set is
applied (suggested by Matthew).
- Add extra explanation to the commit logs explaining why these changes
are safe to make (suggested by Roman).
- Remove stale comment (thanks Willy).
thanks,
Tobin.
Tobin C. Harding (7):
list: Add function list_rotate_to_front()
Use list_head API instead of reaching into the list_head structure to
check if sp is at the front of the list.
Signed-off-by: Tobin C. Harding
---
mm/slob.c | 51 +--
1 file changed, 37 insertions(+), 14 deletions(-)
diff --git a/mm/slob.c b/mm/slob.c
in
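The abstraction point above can be sketched with a `list_is_first()`-style helper (a userspace sketch; the struct and helper mirror the kernel API but are not the kernel source): the caller asks the list API the question instead of open-coding a comparison against `sp`'s internal `prev` pointer.

```c
#include <stdbool.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

/* An entry is first iff the only node before it is the list head.
 * Callers use this instead of reaching into ->prev themselves. */
static bool list_is_first(const struct list_head *entry,
			  const struct list_head *head)
{
	return entry->prev == head;
}
```

Keeping the pointer comparison behind a named helper means the check survives any future change to how list_head links are maintained.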
of
slabs.
Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
mm/slub.c | 40
1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8fbba4ff6c67..d17f117830a9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -102
ead for maintaining
lists of slabs.
Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
mm/slob.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/slob.c b/mm/slob.c
index 07356e9feaaa..84aefd9b91ee 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -1
lob allocator.
Add function list_rotate_to_front() to rotate a list until the specified
item is at the front of the list.
Signed-off-by: Tobin C. Harding
---
include/linux/list.h | 18 ++
1 file changed, 18 insertions(+)
diff --git a/include/linux/list.h b/include/linux/list.h
ind
#ifdef CONFIG_FOO
...
#endif /* CONFIG_FOO */
Add comments to endif pre-processor macros if ifdef/endif pair is not
immediately apparent.
Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
mm/slub.c | 20 ++--
1 file changed, 10 insertions(+), 10
of
slabs.
Signed-off-by: Tobin C. Harding
---
mm/slab.c | 49 +
1 file changed, 25 insertions(+), 24 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 329bfe67f2ca..09e2a0131338 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1710,8 +1710,8 @@ sta
We now use the slab_list list_head instead of the lru list_head. This
comment has become stale.
Remove stale comment from page struct slab_list list_head.
Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
include/linux/mm_types.h | 2 +-
1 file changed, 1 insertion(+), 1
On Tue, Apr 02, 2019 at 03:26:10PM -0700, Kees Cook wrote:
> This adjusts kselftest_module.sh to take an option "args" argument for
> modprobe arguments, removes bash-isms (since some system's /bin/sh may
> not be bash), and refactors the lib/ scripts into a shorter calling
> convention.
>
> Signe
On Tue, Apr 02, 2019 at 02:37:57PM -0700, Kees Cook wrote:
> On Wed, Mar 6, 2019 at 1:43 PM Tobin C. Harding wrote:
> > This set makes an attempt at adding a framework to kselftest for writing
> > kernel test modules. It also adds a script for use in creating script
>
Implement functions to migrate objects. This is based on
initial code by Matthew Wilcox and was modified to work with
slab object migration.
Cc: Matthew Wilcox
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
lib/radix-tree.c | 13 +
lib/xarray.c | 46
remaining partial slab.
Signed-off-by: Tobin C. Harding
---
tools/testing/slab/Makefile | 2 +-
tools/testing/slab/slub_defrag_xarray.c | 211
2 files changed, 212 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/slab/slub_defrag_xarray.c
di
g : Off Lpadd: 352
We can run the stress tests (with the default number of objects):
# cd /sys/kernel/debug/smo
# echo 'test' > callfn
[3.576617] smo: test using nr_objs: 1000 keep: 10
[3.580169] smo: Module tests completed successfully
Signed-off-by: Tobin
unused dentries.
Implement isolate and migrate functions for the dentry slab cache.
Signed-off-by: Tobin C. Harding
---
fs/dcache.c | 87 +
1 file changed, 87 insertions(+)
diff --git a/fs/dcache.c b/fs/dcache.c
index 606844ad5171..4387715b7ebb
In order to support object migration on the dentry cache we need to have
a determined object state at all times. Without a constructor the object
would have a random state after allocation.
Provide a dentry constructor.
Signed-off-by: Tobin C. Harding
---
fs/dcache.c | 37
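Why a constructor gives a determined state can be sketched as follows. This is a hypothetical illustration, not the dcache patch: `struct dentry_sketch` and `dentry_ctor()` are made-up names, and the point is only that a slab constructor runs when the slab page is populated, so every object is in a known state before (and between) allocations rather than containing whatever bytes the page held.

```c
#include <stddef.h>

struct dentry_sketch {
	int in_use;	/* 0 while the object is free */
	void *name;
};

/* Slab constructor: establish a determined state for the object so a
 * migration pass can safely inspect it at any time. */
static void dentry_ctor(void *obj)
{
	struct dentry_sketch *d = obj;

	d->in_use = 0;
	d->name = NULL;
}
```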
node (from N1 -> to N2):
echo "N1 N2" > move
This also enables shrinking slabs on a specific node:
echo "N1 N1" > move
Signed-off-by: Tobin C. Harding
---
mm/Kconfig | 7 ++
mm/slub.c | 249 +
2 file
to the
recently added function:
void kmem_cache_setup_mobility(struct kmem_cache *,
kmem_cache_isolate_func,
kmem_cache_migrate_func);
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Hard
ctly?
Thanks for taking the time to look at this.
Tobin.
Tobin C. Harding (14):
slub: Add isolate() and migrate() methods
tools/vm/slabinfo: Add support for -C and -M options
slub: Sort slab cache list
slub: Slab defrag core
tools/vm/slabinfo: Add remote node defrag ratio output
too
rs balance, no other value accepted.
This feature relies on SMO being enabled for the cache; this is done
with a call to kmem_cache_setup_mobility(s, isolate, migrate) after the
isolate/migrate functions have been defined.
Signed-off-by: Tobin C. Harding
---
mm/sl
-C lists caches that use a ctor.
-M lists caches that support object migration.
Add command line options to show caches with a constructor and caches
that are movable (i.e. have migrate function).
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
tools/vm/slabinfo.c | 40
It is advantageous to have all defragmentable slabs together at the
beginning of the list of slabs so that there is no need to scan the
complete list. Put defragmentable caches first when adding a slab cache
and others last.
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
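The ordering described above can be sketched in userspace C (an illustrative sketch, not the slub code: `struct cache_sketch`, `cache_list_add()`, and the `movable` flag are hypothetical stand-ins). Defragmentable caches are added at the head of the cache list and the rest at the tail, so a scan can stop at the first non-defragmentable entry.

```c
#include <stdbool.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

static void init_list(struct list_head *h)
{
	h->next = h->prev = h;
}

static void __list_insert(struct list_head *e, struct list_head *prev,
			  struct list_head *next)
{
	e->prev = prev;
	e->next = next;
	prev->next = e;
	next->prev = e;
}

static void list_add_(struct list_head *e, struct list_head *h)
{
	__list_insert(e, h, h->next);		/* at front */
}

static void list_add_tail_(struct list_head *e, struct list_head *h)
{
	__list_insert(e, h->prev, h);		/* at back */
}

struct cache_sketch {
	struct list_head list;
	bool movable;	/* has a migrate callback */
};

/* Defragmentable caches first, all others last. */
static void cache_list_add(struct cache_sketch *s, struct list_head *caches)
{
	if (s->movable)
		list_add_(&s->list, caches);
	else
		list_add_tail_(&s->list, caches);
}
```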
ng with 'isolate' and 'migrate'
callbacks.
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
include/linux/slab.h | 70
include/linux/slub_def.h | 3 ++
mm/slub.c| 59 ++
ling movable objects ...
verified movable slabs are shrinkable
Removing module slub_defrag ...
Signed-off-by: Tobin C. Harding
---
tools/testing/slab/slub_defrag.c | 1 +
tools/testing/slab/slub_defrag.py | 451 ++
2 files changed, 452 insertions(+)
create mode 10075
Add output line for NUMA remote node defrag ratio.
Signed-off-by: Tobin C. Harding
---
tools/vm/slabinfo.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index cbfc56c44c2f..d2c22f9ee2d8 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm
Add output for the newly added defrag_used_ratio sysfs knob.
Signed-off-by: Tobin C. Harding
---
tools/vm/slabinfo.c | 4
1 file changed, 4 insertions(+)
diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index d2c22f9ee2d8..ef4ff93df4cc 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm
0df1f1b86e1836051784b
> Author: Tobin C. Harding
> AuthorDate: Fri Mar 29 10:01:23 2019 +1100
> Commit: Stephen Rothwell
> CommitDate: Sat Mar 30 16:09:41 2019 +1100
>
> mm/slob.c: respect list_head abstraction layer
>
> Currently we reach inside the list_hea
On Wed, Apr 03, 2019 at 06:00:30PM +, Roman Gushchin wrote:
> On Wed, Apr 03, 2019 at 10:05:40AM +1100, Tobin C. Harding wrote:
> > Currently we reach inside the list_head. This is a violation of the
> > layer of abstraction provided by the list_head. It makes the code
>
On Wed, Apr 03, 2019 at 06:00:30PM +, Roman Gushchin wrote:
> On Wed, Apr 03, 2019 at 10:05:40AM +1100, Tobin C. Harding wrote:
> > Currently we reach inside the list_head. This is a violation of the
> > layer of abstraction provided by the list_head. It makes the code
>
On Wed, Apr 03, 2019 at 10:23:26AM -0700, Matthew Wilcox wrote:
> On Wed, Apr 03, 2019 at 03:21:22PM +1100, Tobin C. Harding wrote:
> > +void xa_object_migrate(struct xa_node *node, int numa_node)
> > +{
> > + struct xarray *xa = READ_ONCE(node->array);
> > + voi
On Wed, Apr 03, 2019 at 09:23:28PM +, Roman Gushchin wrote:
> On Thu, Apr 04, 2019 at 08:03:27AM +1100, Tobin C. Harding wrote:
> > On Wed, Apr 03, 2019 at 06:00:30PM +, Roman Gushchin wrote:
> > > On Wed, Apr 03, 2019 at 10:05:40AM +1100, Tobin C. Harding wrote:
> >
On Sun, Apr 07, 2019 at 07:35:34PM -1000, Linus Torvalds wrote:
> On Sat, Apr 6, 2019 at 12:59 PM Qian Cai wrote:
> >
> > The commit 510ded33e075 ("slab: implement slab_root_caches list")
> > changes the name of the list node within "struct kmem_cache" from
> > "list" to "root_caches_node", but le
On Tue, Apr 09, 2019 at 02:59:52PM +0200, Vlastimil Babka wrote:
> On 4/3/19 11:13 PM, Tobin C. Harding wrote:
>
> > According to 0day test robot this is triggering an error from
> > CHECK_DATA_CORRUPTION when the kernel is built with CONFIG_DEBUG_LIST.
>
> FWIW, that r
`make defconfig` (on x86_64 machine) followed by `make
kvmconfig`. Then do the same and manually select SLOB. Boot both
kernels in Qemu.
thanks,
Tobin.
Tobin C. Harding (1):
mm: Remove SLAB allocator
include/linux/slab.h | 26 -
kernel/cpu.c |5 -
mm/slab.c|
We have SLOB for embedded devices and SLUB for everyone else.
Signed-off-by: Tobin C. Harding
---
include/linux/slab.h | 26 -
kernel/cpu.c |5 -
mm/slab.c| 4493 --
mm/slab.h| 31 +-
mm/slab_common.c | 20
On Wed, Apr 10, 2019 at 10:02:36AM +0200, Vlastimil Babka wrote:
> On 4/10/19 4:47 AM, Tobin C. Harding wrote:
> > Recently a 2 year old bug was found in the SLAB allocator that crashes
> > the kernel. This seems to imply that not that many people are using the
> > SLAB alloc
ng with 'isolate' and 'migrate'
callbacks.
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
include/linux/slab.h | 70
include/linux/slub_def.h | 3 ++
mm/slub.c| 59 ++
to the
recently added function:
void kmem_cache_setup_mobility(struct kmem_cache *,
kmem_cache_isolate_func,
kmem_cache_migrate_func);
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Hard
c/sys/vm/drop_caches
time find / -name fname-no-exist
real    0m0.192s
user    0m0.062s
sys     0m0.126s
I am not very experienced with benchmarking, if this is grossly
incorrect please do not hesitate to yell at me. Any suggestions on
more/better benchmarking
-C lists caches that use a ctor.
-M lists caches that support object migration.
Add command line options to show caches with a constructor and caches
that are movable (i.e. have migrate function).
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
tools/vm/slabinfo.c | 40
It is advantageous to have all defragmentable slabs together at the
beginning of the list of slabs so that there is no need to scan the
complete list. Put defragmentable caches first when adding a slab cache
and others last.
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
Add output for the newly added defrag_used_ratio sysfs knob.
Signed-off-by: Tobin C. Harding
---
tools/vm/slabinfo.c | 4
1 file changed, 4 insertions(+)
diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index d2c22f9ee2d8..ef4ff93df4cc 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm
node (from N1 -> to N2):
echo "N1 N2" > move
This also enables shrinking slabs on a specific node:
echo "N1 N1" > move
Signed-off-by: Tobin C. Harding
---
mm/Kconfig | 7 ++
mm/slub.c | 249 +
2 file
ache (thanks Matthew).
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
lib/radix-tree.c | 13 +
lib/xarray.c | 49
2 files changed, 62 insertions(+)
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
i
rs balance, no other value accepted.
This feature relies on SMO being enabled for the cache; this is done
with a call to kmem_cache_setup_mobility(s, isolate, migrate) after the
isolate/migrate functions have been defined.
Signed-off-by: Tobin C. Harding
---
mm/sl
Add output line for NUMA remote node defrag ratio.
Signed-off-by: Tobin C. Harding
---
tools/vm/slabinfo.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index cbfc56c44c2f..d2c22f9ee2d8 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm
remaining partial slab.
Signed-off-by: Tobin C. Harding
---
tools/testing/slab/Makefile | 2 +-
tools/testing/slab/slub_defrag_xarray.c | 211
2 files changed, 212 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/slab/slub_defrag_xarray.c
di
g : Off Lpadd: 352
We can run the stress tests (with the default number of objects):
# cd /sys/kernel/debug/smo
# echo 'test' > callfn
[3.576617] smo: test using nr_objs: 1000 keep: 10
[3.580169] smo: Module tests completed successfully
Signed-off-by: Tobin
ling movable objects ...
verified movable slabs are shrinkable
Removing module slub_defrag ...
Signed-off-by: Tobin C. Harding
---
tools/testing/slab/slub_defrag.c | 1 +
tools/testing/slab/slub_defrag.py | 451 ++
2 files changed, 452 insertions(+)
create mode 10075
obility is enabled and the isolate/migrate functions are built in.
Add CONFIG_DCACHE_SMO to guard the partial shrinking of the dcache via
Slab Movable Objects infrastructure.
Signed-off-by: Tobin C. Harding
---
fs/dcache.c | 4
mm/Kconfig | 7 +++
2 files changed, 11 insertions(+)
diff