> We recently encountered a kernel crash on the zswapin path in our internal kernel,
> which went undetected because of a lack of test coverage for this path.
>
> Add a selftest to verify that when memory.zswap.max = 0, no pages can go
> to the zswap pool for the cgroup.
>
> Suggested-by: Rik van Riel
> > > +{
> > > + size_t size = (size_t)arg;
> > > + char *mem = (char *)malloc(size);
> > > + int ret = 0;
> > > +
> > > + if (!mem)
> > > + return -1;
> > > + for (int i = 0; i < size; i += 4095)
> > > + mem[i] = 'a';
> >
> > cgroup_util.h defines PAGE_SIZE
memory.zswap.max = 0, no pages can go
> to the zswap pool for the cgroup.
>
> Suggested-by: Rik van Riel
> Suggested-by: Yosry Ahmed
> Signed-off-by: Nhat Pham
> ---
> tools/testing/selftests/cgroup/test_zswap.c | 97 +
> 1 file changed, 97 insertions(+)
On Tue, Jan 30, 2024 at 10:37:15AM -0800, Nhat Pham wrote:
> On Mon, Jan 29, 2024 at 5:02 PM Yosry Ahmed wrote:
> >
> > On Mon, Jan 29, 2024 at 02:45:40PM -0800, Nhat Pham wrote:
> > > Make it easier for contributors to find the zswap maintainers when they
>
On Tue, Jan 30, 2024 at 10:31:24AM -0800, Nhat Pham wrote:
> On Mon, Jan 29, 2024 at 5:24 PM Yosry Ahmed wrote:
> >
[..]
> > > -static int allocate_bytes(const char *cgroup, void *arg)
> > > +static int allocate_bytes_and_read(const char *cgroup,
On Mon, Jan 29, 2024 at 02:45:42PM -0800, Nhat Pham wrote:
> We recently encountered a kernel crash on the zswapin path in our
> internal kernel, which went undetected because of a lack of test
> coverage for this path. Add a selftest to cover this code path,
> allocating more memory than the cgr
ha...@gmail.com/
>
> Fixes: a697dc2be925 ("selftests: cgroup: update per-memcg zswap writeback
> selftest")
Looks like this should go into v6.8 too.
> Signed-off-by: Nhat Pham
Acked-by: Yosry Ahmed
On Mon, Jan 29, 2024 at 02:45:40PM -0800, Nhat Pham wrote:
> Make it easier for contributors to find the zswap maintainers when they
> update the zswap tests.
>
> Signed-off-by: Nhat Pham
I guess I had to check the zswap tests at some point :)
Acked-by: Yosry Ahmed
> ---
>
On Wed, Dec 6, 2023 at 11:47 AM Nhat Pham wrote:
>
> [...]
> >
> > Hmm so how should we proceed from here? How about this:
> >
> > a) I can send a fixlet to move the enablement check above the stats
> > flushing + use mem_cgroup_flush_stats
> > b) Then maybe, you can send a fixlet to update this n
On Tue, Dec 5, 2023 at 10:43 PM Chengming Zhou wrote:
>
> On 2023/12/6 13:59, Yosry Ahmed wrote:
> > [..]
> >>> @@ -526,6 +582,102 @@ static struct zswap_entry
> >>> *zswap_entry_find_get(struct rb_
[..]
> > @@ -526,6 +582,102 @@ static struct zswap_entry
> > *zswap_entry_find_get(struct rb_root *root,
> > return entry;
> > }
> >
> > +/*
> > +* shrinker functions
> > +**/
> > +static enum lru_status shrink_memcg_cb(struct
On Tue, Dec 5, 2023 at 11:33 AM Nhat Pham wrote:
>
> Rename ZSWP_WB to ZSWPWB to better match the existing counters naming
> scheme.
>
> Suggested-by: Johannes Weiner
> Signed-off-by: Nhat Pham
For the original patch + this fix:
Reviewed-by: Yosry Ahmed
>
[..]
> > > static void shrink_worker(struct work_struct *w)
> > > {
> > > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > > shrink_work);
> > > + struct mem_cgroup *memcg;
> > > int ret, failures = 0;
> > >
> > > +
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham wrote:
>
> From: Domenico Cerasuolo
>
> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> new stat to show the writeback count for each memcg.
>
> Suggested-by: Nhat Pham
> Signed-off-by: Domenico Cerasuolo
> Signed-off-by: Nhat
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham wrote:
>
> From: Domenico Cerasuolo
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pag
the onlineness check. In the !CONFIG_MEMCG case, it always returns
> true, analogous to mem_cgroup_tryget(). This is useful, e.g., for the
> new zswap writeback scheme, where we need to select the next online
> memcg as a candidate for the global limit reclaim.
>
> Signed-off-by: N
On Fri, Nov 17, 2023 at 8:23 AM Nhat Pham wrote:
>
> On Thu, Nov 16, 2023 at 4:57 PM Chris Li wrote:
> >
> > Hi Nhat,
> >
> > I want to share the high-level feedback we discussed here on the
> > mailing list as well.
> >
> > It is my observation that each memcg LRU list can't compare the pag
> >
> > This lock is only needed to synchronize updating pool->next_shrink,
> > right? Can we just use atomic operations instead? (e.g. cmpxchg()).
>
> I'm not entirely sure. I think in the pool destroy path, we have to also
> put the next_shrink memcg, so there's that.
We can use xchg() to replac
On Mon, Nov 6, 2023 at 10:32 AM Nhat Pham wrote:
>
> From: Domenico Cerasuolo
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> page
> > [..]
> > > +/*
> > > +* lru functions
> > > +**/
> > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry
> > > *entry)
> > > +{
> > > + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> >
On Tue, Oct 24, 2023 at 1:33 PM Nhat Pham wrote:
>
> From: Domenico Cerasuolo
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> page
On Thu, Oct 19, 2023 at 10:12 AM Andrew Morton wrote:
>
> On Tue, 17 Oct 2023 16:21:47 -0700 Nhat Pham wrote:
>
> > Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap
> > writeback
>
> We're at -rc6 and I'd prefer to drop this series from mm.git, have
> another go during
On Thu, Oct 19, 2023 at 5:47 AM Domenico Cerasuolo wrote:
>
> On Thu, Oct 19, 2023 at 3:12 AM Yosry Ahmed wrote:
> >
> > On Wed, Oct 18, 2023 at 4:47 PM Nhat Pham wrote:
> > >
> > > On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed wrote:
> > > >
[..]
> > >
> > > +/*
> > > +* lru functions
> > > +**/
> > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry
> > > *entry)
> > > +{
> > > + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
>
On Wed, Oct 18, 2023 at 4:47 PM Nhat Pham wrote:
>
> On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed wrote:
> >
> > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham wrote:
> > >
> > > From: Domenico Cerasuolo
> > >
> > > Currently, we
On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham wrote:
>
> Currently, we only shrink the zswap pool when the user-defined limit is
> hit. This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict
On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham wrote:
>
> From: Domenico Cerasuolo
>
> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> new stat to show the writeback count for each memcg.
>
> Suggested-by: Nhat Pham
> Signed-off-by: Domenico Cerasuolo
> Signed-off-by: Nhat P
On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham wrote:
>
> From: Domenico Cerasuolo
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> page
On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham wrote:
>
> The interface of list_lru is based on the assumption that objects are
> allocated on the correct node/memcg. This change introduces the ability
> to explicitly specify the NUMA node and memcg when adding and
> removing objects. Th