On Thu, Jul 20, 2023 at 1:12 PM Christoph Hellwig wrote:
> > Suggested-by: Christoph Hellwig
>
> Hmm, I'm not sure I suggested adding copy offload..
>
We meant it for the request-based design; we will remove it.
> > static inline unsigned int blk_rq_get_max_segments(struct request *rq)
> > {
> >
If kernel workqueues have higher per-item overhead for the kind of lightweight
work VDO currently does in each step, perhaps the dual of the current scheme
would let more work get done per unit of queuing overhead, and thus perform
better? VIOs could take locks on sections of structures,
and operate on
Use new APIs to dynamically allocate the thp-zero and thp-deferred_split
shrinkers.
Signed-off-by: Qi Zheng
---
mm/huge_memory.c | 69 +++-
1 file changed, 45 insertions(+), 24 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e37150
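For readers joining the series here, a minimal sketch of what this conversion
looks like is below. It assumes the shrinker_alloc()/shrinker_register()/
shrinker_free() interface introduced earlier in the series; the callback names
follow mm/huge_memory.c, and the wrapper function is illustrative rather than
the literal hunk.
```
/* Sketch only: assumes the new shrinker_alloc()/shrinker_register() API
 * from this series; error handling and exact placement may differ. */
static struct shrinker *huge_zero_page_shrinker;
static struct shrinker *deferred_split_shrinker;

static int __init thp_shrinker_init(void)
{
	huge_zero_page_shrinker = shrinker_alloc(0, "thp-zero");
	deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
						 SHRINKER_MEMCG_AWARE,
						 "thp-deferred_split");
	if (!huge_zero_page_shrinker || !deferred_split_shrinker) {
		shrinker_free(huge_zero_page_shrinker);
		shrinker_free(deferred_split_shrinker);
		return -ENOMEM;
	}

	huge_zero_page_shrinker->count_objects = shrink_huge_zero_page_count;
	huge_zero_page_shrinker->scan_objects = shrink_huge_zero_page_scan;
	shrinker_register(huge_zero_page_shrinker);

	deferred_split_shrinker->count_objects = deferred_split_count;
	deferred_split_shrinker->scan_objects = deferred_split_scan;
	shrinker_register(deferred_split_shrinker);
	return 0;
}
```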
The following functions are only used inside the mm subsystem, so it's
better to move their declarations to the mm/internal.h file.
1. shrinker_debugfs_add()
2. shrinker_debugfs_detach()
3. shrinker_debugfs_remove()
Signed-off-by: Qi Zheng
---
include/linux/shrinker.h | 19 ---
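As a rough sketch only (not the literal patch), the mm/internal.h side of the
move might look like the following; the prototypes are copied from the current
include/linux/shrinker.h, and the CONFIG_SHRINKER_DEBUG stubs are assumed to
move along with them:
```
#ifdef CONFIG_SHRINKER_DEBUG
extern int shrinker_debugfs_add(struct shrinker *shrinker);
extern struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
					      int *debugfs_id);
extern void shrinker_debugfs_remove(struct dentry *debugfs_entry,
				    int debugfs_id);
#else /* CONFIG_SHRINKER_DEBUG */
static inline int shrinker_debugfs_add(struct shrinker *shrinker)
{
	return 0;
}
static inline struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
						     int *debugfs_id)
{
	*debugfs_id = -1;
	return NULL;
}
static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
					   int debugfs_id)
{
}
#endif /* CONFIG_SHRINKER_DEBUG */
```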
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the dm-zoned-meta shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct dmz_metadata.
Signed-off-by: Qi Zheng
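The payoff is in the teardown path. A hedged sketch of the ordering this
enables (cleanup steps elided, kfree() of zmd assumed, not the literal hunk):
```
void dmz_dtr_metadata(struct dmz_metadata *zmd)
{
	/* Unregister and free the separately allocated shrinker; per this
	 * series it is released asynchronously via kfree_rcu(). */
	shrinker_free(zmd->mblk_shrinker);

	/* ... flush and drop cached mblocks, free zone descriptors ... */

	kfree(zmd);	/* immediate; no synchronize_rcu() needed here */
}
```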
On 2023/7/27 16:04, Qi Zheng wrote:
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the dm-bufio shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct dm_bufio_client.
Use new APIs to dynamically allocate the gfs2-glock shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
fs/gfs2/glock.c | 20 +++-
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 1438e7465e30..8d582ba7514f 100644
Use new APIs to dynamically allocate the android-binder shrinker.
Signed-off-by: Qi Zheng
---
drivers/android/binder_alloc.c | 31 +++
1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the xfs-inodegc shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct xfs_mount.
Signed-off-by: Qi Zheng
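The struct-side change implied by this kind of conversion is the same
everywhere: the embedded struct shrinker becomes a pointer. A diff-style
sketch, assuming the field is m_inodegc_shrinker in fs/xfs/xfs_mount.h (the
real hunk may differ):
```
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@
-	struct shrinker		m_inodegc_shrinker;
+	struct shrinker		*m_inodegc_shrinker;
```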
Use new APIs to dynamically allocate the xen-backend shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
drivers/xen/xenbus/xenbus_probe_backend.c | 18 +++---
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/xen/xenbus/xenbus_probe_backend.c b/drivers/xen/xenbus/xenbus_probe_backend.c
Use new APIs to dynamically allocate the nfsd-filecache shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
fs/nfsd/filecache.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index ee9c923192e0..8
Now there are no readers of shrinker_rwsem, so we can simply replace it
with a mutex.
Signed-off-by: Qi Zheng
---
drivers/md/dm-cache-metadata.c | 2 +-
fs/super.c | 2 +-
mm/shrinker.c | 28 ++--
mm/shrinker_debug.c |
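For reference, the conversion is mechanical once the read side is gone; a
sketch, not the literal hunks:
```
/* Sketch of the swap, assuming nothing takes the read side any more: */
static DEFINE_MUTEX(shrinker_mutex);	/* was: static DECLARE_RWSEM(shrinker_rwsem); */

/* ...and each remaining write-side critical section becomes: */
	mutex_lock(&shrinker_mutex);	/* was: down_write(&shrinker_rwsem); */
	/* add/remove a shrinker, resize per-memcg shrinker_info, etc. */
	mutex_unlock(&shrinker_mutex);	/* was: up_write(&shrinker_rwsem); */
```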
On 2023/7/27 16:04, Qi Zheng wrote:
Use new APIs to dynamically allocate the nfs-acl shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
fs/nfs/super.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/fs/nfs/super.c b/fs/nfs/super.c
ind
On 2023/7/27 18:20, Damien Le Moal wrote:
On 7/27/23 17:55, Qi Zheng wrote:
goto err;
}
+ zmd->mblk_shrinker->count_objects = dmz_mblock_shrinker_count;
+ zmd->mblk_shrinker->scan_objects = dmz_mblock_shrinker_scan;
+ zmd->mblk_shrinker->seeks = DEFAULT_SEEKS;
+
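The quoted hunk is cut off above. For orientation only, the rest of the setup
presumably continues along these lines; the private_data assignment and the
shrinker_register() call are assumptions based on the new API, not the literal
patch:
```
	zmd->mblk_shrinker->count_objects = dmz_mblock_shrinker_count;
	zmd->mblk_shrinker->scan_objects = dmz_mblock_shrinker_scan;
	zmd->mblk_shrinker->seeks = DEFAULT_SEEKS;
	zmd->mblk_shrinker->private_data = zmd;	/* assumed field name */

	shrinker_register(zmd->mblk_shrinker);
```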
Use new APIs to dynamically allocate the drm-ttm_pool shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
drivers/gpu/drm/ttm/ttm_pool.c | 23 +++
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
Use new APIs to dynamically allocate the ubifs-slab shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
fs/ubifs/super.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index b08fb28d16b5..c690782388a8 100644
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the md-raid5 shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct r5conf.
Signed-off-by: Qi Zheng
Rev
Use new APIs to dynamically allocate the mm-shadow shrinker.
Signed-off-by: Qi Zheng
---
mm/workingset.c | 27 ++-
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/mm/workingset.c b/mm/workingset.c
index da58a26d0d4d..3c53138903a7 100644
--- a/mm/workingset.c
Hi,
On 2023/7/27 16:30, Damien Le Moal wrote:
On 7/27/23 17:04, Qi Zheng wrote:
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the dm-zoned-meta shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct dmz_metadata.
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the i915_gem_mm shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct drm_i915_private.
Signed-off-by: Qi Zheng
Now no users are using the old APIs, just remove them.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
include/linux/shrinker.h | 7 --
mm/shrinker.c | 143 ---
2 files changed, 150 deletions(-)
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
Use new APIs to dynamically allocate the dquota-cache shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
fs/quota/dquot.c | 18 ++
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index e8232242dd34..8883e6992f7c 100644
Hi all,
1. Background
=============
We used to implement the lockless slab shrink with SRCU [1], but then the
kernel test robot reported a -88.8% regression in the stress-ng.ramfs.ops_per_sec
test case [2], so we reverted it [3].
This patch series aims to re-implement the lockless slab shrink using the
refcount+RCU method.
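For orientation, the reader side that the refcount+RCU approach aims for looks
roughly like this; a sketch of the idea rather than the final mm/shrinker.c
code, with shrinker_try_get()/shrinker_put() as the pin/unpin helpers:
```
	rcu_read_lock();
	list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
		/* Pin the shrinker with a refcount so it cannot be freed,
		 * then drop the RCU read lock before doing blocking work. */
		if (!shrinker_try_get(shrinker))
			continue;
		rcu_read_unlock();

		freed += do_shrink_slab(&sc, shrinker, priority);

		rcu_read_lock();
		shrinker_put(shrinker);	/* the last reference frees it via RCU */
	}
	rcu_read_unlock();
```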
For now, reparent_shrinker_deferred() is the only holder of the read lock
on shrinker_rwsem. And since it already holds the global cgroup_mutex, it
will not be called in parallel.
Therefore, in order to convert shrinker_rwsem to shrinker_mutex later, here
we change it to hold the write lock of shrinker_rwsem instead.
Currently, we maintain two linear arrays per node per memcg, namely
shrinker_info::map and shrinker_info::nr_deferred, and we need to resize
them when shrinker_nr_max is exceeded, that is, allocate a new array, copy
the old array into the new one, and finally free the old array via RCU.
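For context, the per-memcg bookkeeping being described is roughly the
following; field names are recalled from include/linux/memcontrol.h around
v6.5 and should be treated as approximate:
```
struct shrinker_info {
	struct rcu_head rcu;		/* old array is freed via RCU on resize */
	atomic_long_t *nr_deferred;	/* one deferred count per shrinker id */
	unsigned long *map;		/* one "has objects" bit per shrinker id */
	int map_nr_max;			/* current capacity; grows with shrinker_nr_max */
};
```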
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the md-bcache shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct cache_set.
Signed-off-by: Qi Zheng
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the ext4-es shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct ext4_sb_info.
Signed-off-by: Qi Zheng
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the dm-bufio shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct dm_bufio_client.
Signed-off-by: Qi Zheng
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the xfs-buf shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct xfs_buftarg.
Signed-off-by: Qi Zheng
On 7/27/23 17:55, Qi Zheng wrote:
>>> goto err;
>>> }
>>> + zmd->mblk_shrinker->count_objects = dmz_mblock_shrinker_count;
>>> + zmd->mblk_shrinker->scan_objects = dmz_mblock_shrinker_scan;
>>> + zmd->mblk_shrinker->seeks = DEFAULT_SEEKS;
>>> + zmd->mblk_shrinker->priv
On 2023/7/27 16:04, Qi Zheng wrote:
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the md-raid5 shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct r5conf.
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the xfs-qm shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct xfs_quotainfo.
Signed-off-by: Qi Zheng
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the mbcache shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct mb_cache.
Signed-off-by: Qi Zheng
Re
The shrinker_rwsem is a global read-write lock in the shrinker subsystem,
which protects most operations such as slab shrinking and the registration
and unregistration of shrinkers. This can easily cause problems in the
following cases.
1) When the memory pressure is high and there are many filesystems
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the nfsd-reply shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct nfsd_net.
Signed-off-by: Qi Zheng
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the virtio-balloon shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct virtio_balloon.
Signed-off-by: Qi Zheng
Use new APIs to dynamically allocate the f2fs-shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
fs/f2fs/super.c | 32
1 file changed, 24 insertions(+), 8 deletions(-)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index a123f1378d57..9200b67aa745 100644
Like the global slab shrink, this commit also uses the refcount+RCU method
to make the memcg slab shrink lockless.
Use the following script to do slab shrink stress test:
```
DIR="/root/shrinker/memcg/mnt"
do_create()
{
mkdir -p /sys/fs/cgroup/memory/test
echo 4G > /sys/fs/cgroup/memory/test/memory
On 2023/7/27 16:04, Qi Zheng wrote:
Use new APIs to dynamically allocate the nfsd-filecache shrinker.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
fs/nfsd/filecache.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
In preparation for implementing lockless slab shrink, use new APIs to
dynamically allocate the mm-zspool shrinker, so that it can be freed
asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
read-side critical sections when releasing the struct zs_pool.
Signed-off-by: Qi Zheng
R
Hi Linus,
The following changes since commit fdf0eaf11452d72945af31804e2a1048ee1b574c:
Linux 6.5-rc2 (2023-07-16 15:10:37 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git tags/for-6.5/dm-fixes
for you to fetch changes