On Sun, Aug 9, 2020 at 10:06 PM Dmitry Monakhov
wrote:
>
> call_cond_resched_before=Y call cond_resched with resource before wait
> call_cond_resched_after=Y call cond_resched with resource after wait
> measure_cond_resched=Y measure maximum cond_resched time inside loop
Do you really nee
Copy comment from net/ipv6/tcp_ipv6.c to help future readers.
Signed-off-by: Konstantin Khlebnikov
---
net/ipv4/route.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index a01efa062f6b..303fe706cbd2 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4
On Fri, Jul 3, 2020 at 8:09 AM Alex Shi wrote:
>
> Hugh Dickins found a memcg change bug in the original version:
> If we want to change the pgdat->lru_lock to memcg's lruvec lock, we have
> to serialize mem_cgroup_move_account during pagevec_lru_move_fn. The
> possible bad scenario would look like:
>
>
Map the old corporate email address @yandex-team.ru to a stable private address.
Signed-off-by: Konstantin Khlebnikov
---
.mailmap | 1 +
1 file changed, 1 insertion(+)
diff --git a/.mailmap b/.mailmap
index c69d9c734fb5..b15c836ea7fe 100644
--- a/.mailmap
+++ b/.mailmap
@@ -146,6 +146,7 @@ Kamil
rocessor.h:398)
After:
$ echo 'xxx+0x0/0x0' | ./scripts/decode_stacktrace.sh vmlinux ""
xxx+0x0/0x0
Signed-off-by: Konstantin Khlebnikov
---
scripts/decode_stacktrace.sh | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/scripts/decode_sta
rivers/net/tap.c:502) tap
Signed-off-by: Konstantin Khlebnikov
---
scripts/decode_stacktrace.sh | 29 -
1 file changed, 24 insertions(+), 5 deletions(-)
diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh
index 7f18ac10af03..4bdcb6d8c605 10
oot/vmlinux-5.4.0-37-generic
WARNING! Modules path isn't set, but is needed to parse this symbol
tap_open+0x0/0x0 tap
After:
$ echo 'tap_open+0x0/0x0 [tap]' |
./scripts/decode_stacktrace.sh /usr/lib/debug/boot/vmlinux-5.4.0-37-generic
tap_open (drivers/net/tap.c:502) tap
Signed-off-by:
'vfs_open+0x0/0x0' | ./scripts/decode_stacktrace.sh vmlinux
vfs_open (fs/open.c:912)
Signed-off-by: Konstantin Khlebnikov
---
scripts/decode_stacktrace.sh | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh
On 04/06/2020 12.30, Karel Zak wrote:
On Mon, Jun 01, 2020 at 10:21:34PM +0300, Konstantin Khlebnikov wrote:
Timestamps in the kernel log come from a monotonic clocksource which does not
tick while the system is suspended. Suspended time easily sums into hours and
days, rendering human-readable timestamps in
On 03/06/2020 07.58, Christoph Hellwig wrote:
On Mon, Jun 01, 2020 at 03:37:09PM +0300, Konstantin Khlebnikov wrote:
Add a flag for marking bio-based queues which support REQ_NOWAIT.
Set it for all request-based (mq) devices.
A stacking device should set it after blk_set_stacking_limits() if its method
On 02/06/2020 15.10, Matthew Wilcox wrote:
On Tue, Jun 02, 2020 at 07:50:33PM +0800, Wang Hai wrote:
syzkaller reports a memory leak when kobject_init_and_add()
returns an error in the function sysfs_slab_add() [1]
When this happens, the function kobject_put() is not called for the
correspon
produces accurate timestamps for messages printed since the last resume,
which are supposed to be the most interesting.
Signed-off-by: Konstantin Khlebnikov
---
include/monotonic.h | 2 ++
lib/monotonic.c | 12
sys-utils/dmesg.1 | 2 ++
sys-utils/dmesg.c | 22
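For illustration, a minimal userspace sketch of the clock mismatch being fixed
(standard Linux clocks, not part of the patch): CLOCK_MONOTONIC stops during
suspend while CLOCK_BOOTTIME keeps counting, so their difference is the
accumulated suspend time that skews dmesg timestamps.

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec mono, boot;

	/* CLOCK_MONOTONIC excludes suspended time, CLOCK_BOOTTIME includes it */
	clock_gettime(CLOCK_MONOTONIC, &mono);
	clock_gettime(CLOCK_BOOTTIME, &boot);
	printf("time spent suspended: ~%ld s\n",
	       (long)(boot.tv_sec - mono.tv_sec));
	return 0;
}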
sure
that pages have no extra reference from per-cpu vectors.
The memory barriers around the sequence and the lock come together
to remove waiters without their drain work being abandoned.
Cc: Sebastian Andrzej Siewior
Cc: Konstantin Khlebnikov
Signed-off-by: Hillf Danton
---
This is inspired by
Set limits.nowait_requests = 1 before stacking limits.
Raid itself does not delay bios in raid0_make_request().
Signed-off-by: Konstantin Khlebnikov
---
drivers/md/raid0.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 322386ff5d22
example second/third patches add support to md-raid0 and dm-linear.
---
Konstantin Khlebnikov (3):
block: add flag 'nowait_requests' into queue limits
md/raid0: enable REQ_NOWAIT
dm: add support for REQ_NOWAIT and enable for target dm-linear
drivers/md/dm-l
Add the dm target feature flag DM_TARGET_NOWAIT, which tells that the target
has no problem with REQ_NOWAIT.
Set limits.nowait_requests if all targets and backends handle REQ_NOWAIT.
Signed-off-by: Konstantin Khlebnikov
---
drivers/md/dm-linear.c | 5 +++--
drivers/md/dm-table.c
Add a flag for marking bio-based queues which support REQ_NOWAIT.
Set it for all request-based (mq) devices.
A stacking device should set it after blk_set_stacking_limits() if its
make_request() method doesn't itself delay requests or handles REQ_NOWAIT.
Signed-off-by: Konstantin Khlebnikov
---
bloc
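As a rough sketch of how a bio-based stacking driver would opt in under this
proposal (nowait_requests is the queue-limit flag introduced by this series,
not a mainline field):

	/* sketch only: a stacking driver whose make_request() never
	 * sleeps advertises REQ_NOWAIT support after initializing the
	 * stacking limits */
	blk_set_stacking_limits(&q->limits);
	q->limits.nowait_requests = 1;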
On 31/05/2020 19.33, Konstantin Khlebnikov wrote:
Commit 7b6620d7db56 ("block: remove REQ_NOWAIT_INLINE") removed it,
but some pieces were left behind. Probably something went wrong with a git merge.
Never mind. As I see in block/for-next, Christoph has removed REQ_NOWAIT_INLINE.
But BLK_QC_T
Commit 7b6620d7db56 ("block: remove REQ_NOWAIT_INLINE") removed it,
but some pieces were left behind. Probably something went wrong with a git merge.
Signed-off-by: Konstantin Khlebnikov
---
include/linux/blk_types.h | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/inc
On 25/05/2020 19.03, Konstantin Khlebnikov wrote:
On 25/05/2020 18.33, Matthew Wilcox wrote:
On Mon, May 25, 2020 at 05:19:11PM +0300, Konstantin Khlebnikov wrote:
The 'page-types' tool can list pages mapped by a process or file cache pages,
but it shows only a limited amount of stat
On 25/05/2020 18.33, Matthew Wilcox wrote:
On Mon, May 25, 2020 at 05:19:11PM +0300, Konstantin Khlebnikov wrote:
The 'page-types' tool can list pages mapped by a process or file cache pages,
but it shows only a limited amount of state exported via procfs.
Let's employ the existing h
On 25/05/2020 18.35, Vlastimil Babka wrote:
On 5/25/20 4:19 PM, Konstantin Khlebnikov wrote:
The 'page-types' tool can list pages mapped by a process or file cache pages,
but it shows only a limited amount of state exported via procfs.
Let's employ the existing helper dump_page() to
x70/0x140
do_exit+0x33f/0xc40
do_group_exit+0x3a/0xa0
__x64_sys_exit_group+0x14/0x20
do_syscall_64+0x48/0x130
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Signed-off-by: Konstantin Khlebnikov
---
Documentation/admin-guide/mm/pagemap.rst | 3 +++
Documentation/vm/page_owner.rst
: Konstantin Khlebnikov
On 25/05/2020 14.29, Christoph Hellwig wrote:
Add two new helpers to simplify I/O accounting for bio based drivers.
Currently these drivers use the generic_start_io_acct and
generic_end_io_acct helpers which have very cumbersome calling
conventions, don't actually return the time they started acc
This race has been predicted in 2015 by Vlastimil Babka (see link below).
Signed-off-by: Konstantin Khlebnikov
Fixes: 1d148e218a0d ("mm: add VM_BUG_ON_PAGE() to page_mapcount()")
Link: https://lore.kernel.org/lkml/557710e1.6060...@suse.cz/
Link:
https://lore.kernel.org/linux-mm/158937872515.47
On 24/05/2020 04.01, Hugh Dickins wrote:
On Wed, 13 May 2020, Konstantin Khlebnikov wrote:
Function isolate_migratepages_block() runs some checks out of lru_lock
when choosing pages for migration. After checking PageLRU() it checks extra
page references by comparing page_count() and
On Sat, May 23, 2020 at 4:34 AM Andrew Morton wrote:
>
> On Wed, 13 May 2020 17:05:25 +0300 Konstantin Khlebnikov
> wrote:
>
> > Function isolate_migratepages_block() runs some checks out of lru_lock
> > when choosing pages for migration. After checking PageLRU()
On 20/05/2020 00.45, Ahmed S. Darwish wrote:
Commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
implemented an optimization mechanism to exit the to-be-started LRU
drain operation (name it A) if another drain operation *started and
finished* while (A) was blocked on the LRU dr
On 20/05/2020 01.53, Thomas Gleixner wrote:
Konstantin Khlebnikov writes:
Userspace implementations of mutexes (including glibc) in some cases
retry the operation without checking the error code from the futex syscall.
This is good for performance because most errors are impossible when the
locking code
for calling the futex syscall on an
unaligned address. This can only be a bug in user space. Let's help
and handle this gracefully without adding extra code on the fast path.
This patch sends a SIGBUS signal to slay the task and break the busy-loop.
Signed-off-by: Konstantin Khlebnikov
Reported-by: Maxim Sam
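A hedged sketch of the described behaviour (the kernel already rejects
unaligned futex addresses with -EINVAL in get_futex_key(); the actual patch
code may differ):

	/* sketch: also deliver SIGBUS so a userspace loop that never
	 * checks the syscall's return value cannot spin forever */
	if (unlikely((address % sizeof(u32)) != 0)) {
		force_sig(SIGBUS);
		return -EINVAL;
	}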
On 13/05/2020 21.32, Andrew Morton wrote:
On Wed, 13 May 2020 17:05:25 +0300 Konstantin Khlebnikov
wrote:
Function isolate_migratepages_block() runs some checks out of lru_lock
when choosing pages for migration. After checking PageLRU() it checks extra
page references by comparing page_count
when the page cannot escape from the LRU.
Also add checks for extra references for file pages and swap cache.
Signed-off-by: Konstantin Khlebnikov
Fixes: 119d6d59dcc0 ("mm, compaction: avoid isolating pinned pages")
Fixes: 1d148e218a0d ("mm: add VM_BUG_ON_PAGE() to page_mapcount()")
On 13/05/2020 06.18, Paul E. McKenney wrote:
On Wed, May 13, 2020 at 11:32:38AM +1000, Dave Chinner wrote:
On Sat, May 09, 2020 at 09:09:00AM -0700, Paul E. McKenney wrote:
On Sat, May 09, 2020 at 11:54:40AM +0300, Konstantin Khlebnikov wrote:
On 08/05/2020 17.46, Paul E. McKenney wrote
On 11/05/2020 11.39, Michal Hocko wrote:
On Fri 08-05-20 17:16:29, Konstantin Khlebnikov wrote:
Starting from the v4.19 commit 29ef680ae7c2 ("memcg, oom: move out_of_memory
back to the charge path") the cgroup oom killer is no longer invoked only from
page faults. Now it implement
This reserved space isn't committed yet but cannot be used for
allocations. For userspace it is indistinguishable from used space.
See the same fix in ext4 commit f06925c73942 ("ext4: report delalloc
reserve as non-free in statfs for project quota").
Signed-off-by: Konstantin Kh
On 08/05/2020 17.46, Paul E. McKenney wrote:
On Fri, May 08, 2020 at 12:00:28PM +0300, Konstantin Khlebnikov wrote:
On 07/05/2020 22.09, Paul E. McKenney wrote:
On Thu, May 07, 2020 at 02:31:02PM -0400, Johannes Weiner wrote:
On Thu, May 07, 2020 at 10:09:03AM -0700, Paul E. McKenney wrote
On 08/05/2020 22.05, Waiman Long wrote:
On 5/8/20 12:16 PM, Konstantin Khlebnikov wrote:
On 08/05/2020 17.49, Waiman Long wrote:
On 5/8/20 8:23 AM, Konstantin Khlebnikov wrote:
The count of buckets is required for estimating the average length of hash
chains. The size of the hash table depends on memory
On 08/05/2020 17.56, Matthew Wilcox wrote:
On Fri, May 08, 2020 at 03:23:33PM +0300, Konstantin Khlebnikov wrote:
This patch implements a heuristic which detects such scenarios and prevents
unbounded growth of completely unneeded negative dentries. It keeps up to
three latest negative dentries in
On 08/05/2020 17.49, Waiman Long wrote:
On 5/8/20 8:23 AM, Konstantin Khlebnikov wrote:
The count of buckets is required for estimating the average length of hash
chains. The size of the hash table depends on memory size and is printed once
at boot. Let's expose nr_buckets as the sixth number in sysctl fs.d
il success.
Signed-off-by: Konstantin Khlebnikov
---
Documentation/admin-guide/cgroup-v2.rst | 17 -
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst
b/Documentation/admin-guide/cgroup-v2.rst
index bcc80269bb6a..1bb9a8f6ebe1 10
The following commit has been merged into the perf/core branch of tip:
Commit-ID: bb629484d924118e3b1d8652177040115adcba01
Gitweb:
https://git.kernel.org/tip/bb629484d924118e3b1d8652177040115adcba01
Author: Konstantin Khlebnikov
AuthorDate: Wed, 29 Apr 2020 19:23:41 +03:00
The following commit has been merged into the perf/core branch of tip:
Commit-ID: 846de4371fdfddfa49481e3d04884539870dc127
Gitweb:
https://git.kernel.org/tip/846de4371fdfddfa49481e3d04884539870dc127
Author: Konstantin Khlebnikov
AuthorDate: Wed, 29 Apr 2020 19:19:47 +03:00
This lets walkers skip the remaining siblings upon seeing d_is_tail_negative().
Signed-off-by: Konstantin Khlebnikov
---
fs/dcache.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/dcache.c b/fs/dcache.c
index 743255773cc7..44c6832d21d6 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1303,12
This is preparation for the next patch.
Signed-off-by: Konstantin Khlebnikov
---
fs/dcache.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/fs/dcache.c b/fs/dcache.c
index 0fd2e02e507b..60158065891e 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -636,15
Most walkers are interested only in positive dentries.
The changes in the simple_* libfs helpers are mostly cosmetic: a filesystem
shouldn't cache negative dentries unless it uses a d_delete other than
always_delete_dentry().
Signed-off-by: Konstantin Khlebnikov
---
fs/dcache.c | 10 ++
fs/libfs.c |
67 99.9%
inotify time: 0.10 seconds
Negative dentries no longer slow down inotify ops on the parent directory.
Signed-off-by: Konstantin Khlebnikov
---
fs/notify/fsnotify.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify
The count of buckets is required for estimating the average length of hash
chains. The size of the hash table depends on memory size and is printed once
at boot.
Let's expose nr_buckets as the sixth number in sysctl fs.dentry-state.
Signed-off-by: Konstantin Khlebnikov
---
Documentation/admin-guide/sysctl/f
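For illustration, a small reader of the extended sysctl (assumes this patch is
applied, so the sixth field is nr_buckets; the first five fields follow the
existing /proc/sys/fs/dentry-state layout):

#include <stdio.h>

int main(void)
{
	long nr_dentry, nr_unused, age_limit, want_pages, nr_negative, nr_buckets;
	FILE *f = fopen("/proc/sys/fs/dentry-state", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%ld %ld %ld %ld %ld %ld", &nr_dentry, &nr_unused,
		   &age_limit, &want_pages, &nr_negative, &nr_buckets) == 6)
		/* the stated use case: average hash chain length */
		printf("avg chain length: %.2f\n",
		       (double)nr_dentry / nr_buckets);
	fclose(f);
	return 0;
}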
This tool fills the dcache with negative dentries. Between iterations it
prints statistics and measures the time of an inotify operation, which might
degrade.
Signed-off-by: Konstantin Khlebnikov
---
tools/testing/selftests/filesystems/Makefile | 1
.../testing/selftests/filesystems
o.
The reverse operation is required before instantiating a negative dentry.
Signed-off-by: Konstantin Khlebnikov
---
fs/dcache.c | 63 ++--
include/linux/dcache.h | 6 +
2 files changed, 66 insertions(+), 3 deletions(-)
diff --git a/fs/
es all of these problems:
Move negative dentries to the end of the siblings list, thus walkers can
skip them at first sight (patches 3-6).
Keep in dcache at most three unreferenced negative dentries in a row in each
hash bucket (patches 7-8).
---
Konstantin Khlebnikov (8):
dcache: show count of
e = 24600351 99.9%
This heuristic isn't bulletproof and solves only the most practical case.
It's easy to deceive: just touch the same random name twice.
Signed-off-by: Konstantin Khlebnikov
---
fs/dcache.c | 54 ++
1 file changed, 54 insertion
On 07/05/2020 22.09, Paul E. McKenney wrote:
On Thu, May 07, 2020 at 02:31:02PM -0400, Johannes Weiner wrote:
On Thu, May 07, 2020 at 10:09:03AM -0700, Paul E. McKenney wrote:
On Thu, May 07, 2020 at 01:00:06PM -0400, Johannes Weiner wrote:
On Wed, May 06, 2020 at 05:55:35PM -0700, Andrew Mort
On 06/05/2020 14.56, Vlastimil Babka wrote:
On 5/4/20 6:07 PM, Konstantin Khlebnikov wrote:
To get an exact count of free and used objects slub has to scan the list of
partial slabs. This may take a long time. Scanning holds a spinlock and
blocks allocations which move partial slabs to per-cpu lists
On 07/05/2020 06.01, Qian Cai wrote:
On May 6, 2020, at 3:06 PM, Qian Cai wrote:
On May 4, 2020, at 12:07 PM, Konstantin Khlebnikov
wrote:
To get an exact count of free and used objects slub has to scan the list of
partial slabs. This may take a long time. Scanning holds a spinlock and
On 05/05/2020 00.19, David Rientjes wrote:
On Mon, 4 May 2020, Konstantin Khlebnikov wrote:
To get an exact count of free and used objects slub has to scan the list of
partial slabs. This may take a long time. Scanning holds a spinlock and
blocks allocations which move partial slabs to per-cpu lists
On 04/05/2020 22.56, Andrew Morton wrote:
On Mon, 04 May 2020 19:07:39 +0300 Konstantin Khlebnikov
wrote:
To get an exact count of free and used objects slub has to scan the list of
partial slabs. This may take a long time. Scanning holds a spinlock and
blocks allocations which move partial slabs
On 04/05/2020 19.00, Christoph Hellwig wrote:
On Mon, May 04, 2020 at 06:54:53PM +0300, Konstantin Khlebnikov wrote:
This is required to avoid waiting in lower layers.
Signed-off-by: Konstantin Khlebnikov
This looks sensible. Did you run this through xfstests?
Nope. It seems xfstests
of partials is longer than the limit.
Nobody should notice the difference.
Signed-off-by: Konstantin Khlebnikov
---
mm/slub.c | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 9bf44955c4f1..86a366f7acb6 100644
--- a/mm/slub.c
+++ b/mm/slu
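A hedged sketch of the heuristic's shape (the cap and identifiers are
assumptions, not the patch's actual code):

/* sketch: count objects on the partial list exactly only up to a cap,
 * then extrapolate from the average, so the node's list_lock is no
 * longer held for an unbounded walk */
#define MAX_PARTIAL_SCAN	10000	/* assumed cap */

static unsigned long count_free_approx(struct kmem_cache_node *n)
{
	unsigned long scanned = 0, nr_free = 0;
	struct page *page;

	list_for_each_entry(page, &n->partial, slab_list) {
		nr_free += page->objects - page->inuse;
		if (++scanned >= MAX_PARTIAL_SCAN)
			return nr_free * n->nr_partial / scanned;
	}
	return nr_free;
}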
This is required to avoid waiting in lower layers.
Signed-off-by: Konstantin Khlebnikov
---
fs/iomap/direct-io.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 20dde5aadcdd..9b53fa7651e3 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap
For some reason NOWAIT is currently passed only for writes.
Signed-off-by: Konstantin Khlebnikov
Fixes: 03a07c92a9ed ("block: return on congested block device")
---
fs/direct-io.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/direct-io.c b/fs/direct-
FLAGS,... by all events.
Exact constants are not part of the ABI and thus can easily be added/removed.
Keep 'rwbs' for backward compatibility.
Signed-off-by: Konstantin Khlebnikov
---
include/trace/events/block.h | 178 +-
1 file changed, 156 insertions(+
Define BLK_RWBS_LEN in blktrace_api.h
Bcache events use a shorter 6-char buffer which could overflow.
Also remove the unused "bytes" argument.
Signed-off-by: Konstantin Khlebnikov
---
include/linux/blktrace_api.h | 4 +-
include/trace/events/bcache.h | 20 +-
include/tr
On 04/05/2020 17.03, Christoph Hellwig wrote:
+#define __part_stat_add(part, field, addnd)\
+ (part_stat_get(part, field) += (addnd))
Just open coding part_stat_get for the UP side would seem a little
easier to read.
If we rewrite the field definition as
#ifdef C
Also rename blk_account_io_merge() to blk_account_io_merge_request() to
distinguish it from merging a request and a bio.
Signed-off-by: Konstantin Khlebnikov
---
block/blk-merge.c |7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
Signed-off-by: Konstantin Khlebnikov
---
block/blk-core.c | 39 ++-
block/blk-exec.c | 2 +-
block/blk-mq.c | 2 +-
block/blk.h | 2 +-
4 files changed, 25 insertions(+), 20 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
ind
Most architectures have a fast path for accessing per-cpu data of the
current CPU. The required preempt_disable() is provided by part_stat_lock().
Signed-off-by: Konstantin Khlebnikov
---
include/linux/part_stat.h | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/linux
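A sketch of the fast path in question (close in spirit to the patch, though
the exact macro may differ):

	/* on the local CPU __this_cpu_add() compiles to a single
	 * per-cpu instruction on most architectures; part_stat_lock()
	 * already provides the preempt_disable() it relies on */
	#define __part_stat_add(part, field, addnd) \
		__this_cpu_add((part)->dkstats->field, addnd)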
. Previously that was provided by rcu_read_lock().
Signed-off-by: Konstantin Khlebnikov
---
block/blk-core.c | 3 +++
include/linux/part_stat.h | 7 +++
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 7f11560bfddb
or "thread_siblings_list"
and simply check for the presence of ',' or '-' in it.
Signed-off-by: Konstantin Khlebnikov
Fixes: de5077c4e38f ("perf tools: Add utility function to detect SMT status")
---
tools/perf/util/smt.c | 39 ---
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: fdc63ff0e49c54992b4b2656345a5e878b32
Gitweb:
https://git.kernel.org/tip/fdc63ff0e49c54992b4b2656345a5e878b32
Author: Konstantin Khlebnikov
AuthorDate: Wed, 08 Apr 2020 21:13:10 +03:00
On Wed, Apr 29, 2020 at 9:16 PM Arnaldo Carvalho de Melo
wrote:
>
> Em Wed, Apr 29, 2020 at 07:22:43PM +0300, Konstantin Khlebnikov escreveu:
> > Cpu bitmap is split into 32-bit words. For systems with more than 32 cores
> > threads are always in different words, thus the first word
SMT now can be disabled via "/sys/devices/system/cpu/smt/control".
The status is shown in "/sys/devices/system/cpu/smt/active" simply as "0" / "1".
If this knob isn't present then fall back to checking the topology as before.
Signed-off-by: Konstantin Khlebnik
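A sketch of the described probe order (error handling trimmed;
smt_from_topology() stands in for the existing fallback and is hypothetical):

#include <stdio.h>

static int smt_active(void)
{
	char c;
	FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");

	if (f) {
		/* the file contains a bare "0" or "1" */
		int ok = fscanf(f, "%c", &c) == 1;
		fclose(f);
		if (ok)
			return c == '1';
	}
	return smt_from_topology();	/* hypothetical fallback */
}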
or "thread_siblings_list"
and simply check for the presence of ',' or '-' in it.
Signed-off-by: Konstantin Khlebnikov
Fixes: de5077c4e38f ("perf tools: Add utility function to detect SMT status")
---
tools/perf/util/smt.c | 37 +
The check access("devices/system/cpu/cpu%d/topology/core_cpus", F_OK) fails
unless the current directory is "/sys". Simply try to read this file first.
Signed-off-by: Konstantin Khlebnikov
Fixes: 0ccdb8407a46 ("perf tools: Apply new CPU topology sysfs attributes")
---
00100,0001", cpu 79: "8000,0080,".
Instead of parsing the bitmap, read "core_cpus_list" or "thread_siblings_list"
and simply check for the presence of ',' or '-' in it.
Signed-off-by: Konstantin Khlebnikov
Fixes: de5077c4e38f ("
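For illustration, a sketch of the separator check (path building and error
handling simplified): a CPU has SMT siblings exactly when its siblings list
names more than one CPU, i.e. contains ',' or '-'.

#include <stdio.h>
#include <string.h>

static int cpu_has_siblings(int cpu)
{
	char path[128], buf[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
		 cpu);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (!fgets(buf, sizeof(buf), f))
		buf[0] = '\0';
	fclose(f);
	/* "0,64" or "0-1" means siblings; a bare "0" means none */
	return strchr(buf, ',') != NULL || strchr(buf, '-') != NULL;
}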
On 23/10/2019 15.07, Konstantin Khlebnikov wrote:
commit 1ed95e52d902035e39a715ff3a314a893a96e5b7 upstream.
Commit d96d87834d5b870402a4a5b565706a4869ebc020 in v4.4.190, which is a
backport of upstream commit 1ed95e52d902035e39a715ff3a314a893a96e5b7,
removed only HPET access from the vdso but left
could read the HPET directly and confuse the hardware.
This patch removes the mapping of the HPET page into userspace.
Fixes: d96d87834d5b ("x86/vdso: Remove direct HPET access through the vDSO") #
v4.4.190
Signed-off-by: Konstantin Khlebnikov
Link:
https://lore.kernel.org/lkml/6fd42b2b-e29a-1fd6-03d1
kernel.org
Cc: b...@alien8.de
Link:
https://lkml.kernel.org/r/20190401114045.7280-1-zhang@linux.alibaba.com
Signed-off-by: Ingo Molnar
Signed-off-by: Konstantin Khlebnikov
---
arch/x86/entry/vdso/vdso2c.c | 3 ---
arch/x86/include/asm/vdso.h | 1 -
2 files changed, 4 deletions(-)
di
On 20/05/2019 18.04, Konstantin Khlebnikov wrote:
On 18.05.2019 21:26, Thomas Gleixner wrote:
On Sat, 18 May 2019, Konstantin Khlebnikov wrote:
On 18.05.2019 18:17, Thomas Gleixner wrote:
On Wed, 15 May 2019, Konstantin Khlebnikov wrote:
Timekeeping watchdog verifies doubtful clocksources
Strings from vmstat_text[] will be used for printing memory cgroup
statistics, which exist even if CONFIG_VM_EVENT_COUNTERS=n.
This should be applied before patch "mm/memcontrol: use vmstat names
for printing statistics".
Signed-off-by: Konstantin Khlebnikov
Link:
https://lore.kernel
On Sat, Oct 19, 2019 at 8:40 AM wrote:
>
> The mm-of-the-moment snapshot 2019-10-18-22-40 has been uploaded to
>
>http://www.ozlabs.org/~akpm/mmotm/
>
> mmotm-readme.txt says
>
> README for mm-of-the-moment:
>
> http://www.ozlabs.org/~akpm/mmotm/
>
> This is a snapshot of my -mm patch queue.
On 15/10/2019 17.31, Johannes Weiner wrote:
On Tue, Oct 15, 2019 at 01:04:01PM +0200, Michal Hocko wrote:
On Tue 15-10-19 13:49:14, Konstantin Khlebnikov wrote:
On 15/10/2019 13.36, Michal Hocko wrote:
On Tue 15-10-19 11:44:22, Konstantin Khlebnikov wrote:
On 15/10/2019 11.20, Michal Hocko
On 15/10/2019 16.53, Johannes Weiner wrote:
On Tue, Oct 15, 2019 at 11:09:59AM +0300, Konstantin Khlebnikov wrote:
Mapped, dirty and writeback pages are also counted in per-lruvec stats.
These counters need updating when a page is moved between cgroups.
Fixes: 00f3ca2c2d66 ("mm: memcontrol
On 15/10/2019 13.36, Michal Hocko wrote:
On Tue 15-10-19 11:44:22, Konstantin Khlebnikov wrote:
On 15/10/2019 11.20, Michal Hocko wrote:
On Tue 15-10-19 11:09:59, Konstantin Khlebnikov wrote:
Mapped, dirty and writeback pages are also counted in per-lruvec stats.
These counters need updating
: Konstantin Khlebnikov
---
include/linux/vmstat.h | 4 ++--
mm/memcontrol.c | 52
mm/vmstat.c | 9 +---
3 files changed, 29 insertions(+), 36 deletions(-)
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
of the following items.
Also this patch reuses a piece of the node stat names for the LRU list names:
const char *lru_list_name(enum lru_list lru);
This returns common lru list names: "inactive_anon", "active_anon",
"inactive_file", "active_file", "unevic
On 15/10/2019 11.20, Michal Hocko wrote:
On Tue 15-10-19 11:09:59, Konstantin Khlebnikov wrote:
Mapped, dirty and writeback pages are also counted in per-lruvec stats.
These counters need updating when a page is moved between cgroups.
Please describe the user visible effect.
Surprisingly I
Mapped, dirty and writeback pages are also counted in per-lruvec stats.
These counters need updating when a page is moved between cgroups.
Fixes: 00f3ca2c2d66 ("mm: memcontrol: per-lruvec stats infrastructure")
Signed-off-by: Konstantin Khlebnikov
---
mm/memcontrol.c | 18 +++
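The shape of the fix, as a hedged sketch (mirroring what
mem_cgroup_move_account() already does for the per-memcg counters; the actual
hunk may differ):

	/* sketch: move the per-lruvec counters along with the page,
	 * e.g. for a mapped page */
	if (page_mapped(page)) {
		__mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
		__mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
	}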
1m53.600s
Patched:
real    2m39.933s
user    17m1.835s
sys     1m38.802s
real    2m39.321s
user    17m1.634s
sys     1m39.206s
real    2m39.575s
user    17m1.420s
sys     1m38.845s
Signed-off-by: Wei Yang
Acked-by: Konstantin Khlebnikov
---
mm/rmap.c | 13 +
1 file ch
fork.
But this isn't strictly necessary - any VMAs could share an anon_vma.
For example, all VMAs in the system could be linked with a single anon_vma.
Acked-by: Konstantin Khlebnikov
---
v4:
* check dst->anon_vma in each iteration
v3:
* use dst->anon_vma and src->anon_vma to get reuse
On 10/10/2019 16.58, Wei Yang wrote:
Before commit 7a3ef208e662 ("mm: prevent endless growth of anon_vma
hierarchy"), anon_vma_clone() doesn't change dst->anon_vma. While after
this commit, anon_vma_clone() will try to reuse an existing one on forking.
But this commit goes a little bit further for th
On 10/10/2019 06.15, Wei Yang wrote:
On Thu, Oct 10, 2019 at 10:36:01AM +0800, Wei Yang wrote:
Hi, Qian, Shakeel
Thanks for testing.
Sounds like I missed some case to handle. anon_vma_clone() now would be called
in vma_adjust, which is a different case from when it was introduced.
Well, I have to corr
On Sat, Oct 5, 2019 at 10:35 PM Andrew Morton wrote:
>
> On Fri, 04 Oct 2019 16:09:22 +0300 Konstantin Khlebnikov
> wrote:
>
> > This is a very slow operation. There is no reason to do it again if somebody
> > else already drained all per-cpu vectors while we waited for lo
change, a kernel build test reduces anon_vma allocations by 20%.
Makes sense. This might have a much bigger effect for scenarios when a task
unmaps holes in a huge VMA as red-zones between allocations and then forks.
Acked-by: Konstantin Khlebnikov
Signed-off-by: Wei Yang
---
mm/rmap.c | 11
On 04/10/2019 16.39, Michal Hocko wrote:
On Fri 04-10-19 16:32:39, Konstantin Khlebnikov wrote:
On 04/10/2019 16.12, Michal Hocko wrote:
On Fri 04-10-19 16:09:22, Konstantin Khlebnikov wrote:
This is a very slow operation. There is no reason to do it again if somebody
else already drained all
On 04/10/2019 16.12, Michal Hocko wrote:
On Fri 04-10-19 16:09:22, Konstantin Khlebnikov wrote:
This is a very slow operation. There is no reason to do it again if somebody
else already drained all per-cpu vectors while we waited for the lock.
Piggyback on a drain that started and finished while we waited
POSIX_FADV_DONTNEED retry their operations once after
draining per-cpu vectors when pages have unexpected references.
Signed-off-by: Konstantin Khlebnikov
---
mm/swap.c | 16 +++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/mm/swap.c b/mm/swap.c
index 38c3fa4308e2
On 04/10/2019 15.27, Michal Hocko wrote:
On Fri 04-10-19 05:10:17, Matthew Wilcox wrote:
On Fri, Oct 04, 2019 at 01:11:06PM +0300, Konstantin Khlebnikov wrote:
This is a very slow operation. There is no reason to do it again if somebody
else already drained all per-cpu vectors after we waited
On 04/10/2019 15.10, Matthew Wilcox wrote:
On Fri, Oct 04, 2019 at 01:11:06PM +0300, Konstantin Khlebnikov wrote:
This is a very slow operation. There is no reason to do it again if somebody
else already drained all per-cpu vectors after we waited for the lock.
+ seq
This is a very slow operation. There is no reason to do it again if somebody
else already drained all per-cpu vectors after we waited for the lock.
Signed-off-by: Konstantin Khlebnikov
---
mm/swap.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/mm/swap.c b/mm
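A hedged sketch of the piggyback idea (the merged version uses a seqcount;
names here are illustrative):

	/* sketch: a generation count around the drain lets a caller that
	 * slept on the mutex detect that a complete drain finished after
	 * it began waiting, and return without repeating the slow work */
	static unsigned int lru_drain_gen;

	void lru_add_drain_all(void)
	{
		unsigned int gen = READ_ONCE(lru_drain_gen);

		mutex_lock(&drain_mutex);
		if (READ_ONCE(lru_drain_gen) - gen >= 2)
			goto done;	/* somebody drained for us */
		lru_drain_gen++;
		drain_all_cpus();	/* illustrative: flush per-cpu pagevecs */
		lru_drain_gen++;
	done:
		mutex_unlock(&drain_mutex);
	}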