ow us the whole lifespan of this packet. But we
could also implement that with pid as these functions are executed in
process context.
Signed-off-by: Yafang Shao
---
v2 -> v3: use sock_gen_cookie in tcp_event_sk as well.
Maybe we could init sk_cookie in the stack then in ot
On Fri, Apr 20, 2018 at 11:21 PM, David Miller wrote:
>
> Why are you sending this same patch twice?
>
> Thank you.
My mistake, sorry about that.
Please use the second patch.
Thanks
Yafang
With sk_cookie we can identify a socket, which is very helpful for
tracing and statistics, e.g. the tcp tracepoints and eBPF.
So we'd better initialize it by default for inet sockets.
When using it, we just need to call atomic64_read(&sk->sk_cookie).
Signed-off-by: Yafang Shao
---
include/linux/soc
ead/recv* tracepoint, and
finally that could show us the whole lifespan of this packet. But we
could also implement that with pid as these functions are executed in
process context.
Signed-off-by: Yafang Shao
---
include/trace/events/tcp.h | 21 +++--
net/ipv4/tcp_input.c | 2 ++
On Mon, Apr 16, 2018 at 11:43 PM, Eric Dumazet wrote:
>
>
> On 04/16/2018 08:33 AM, Yafang Shao wrote:
>> tcp_rcv_space_adjust is called every time data is copied to user space,
>> introducing a tcp tracepoint for it, which could show us when the packet is
>> copied to
int with epoll/read/recv* tracepoints, and
finally that could show us the whole lifespan of this packet. But we
could also implement that with pid as these functions are executed in
process context.
Signed-off-by: Yafang Shao
---
v1 -> v2: use sk_cookie as key suggested by Eric.
---
include/t
On Wed, Apr 18, 2018 at 1:27 AM, Eric Dumazet wrote:
>
>
> On 04/17/2018 09:36 AM, Yafang Shao wrote:
>> tcp_rcv_space_adjust is called every time data is copied to user space,
>> introducing a tcp tracepoint for it, which could show us when the packet is
>> copied to
On Wed, Apr 18, 2018 at 1:38 AM, Song Liu wrote:
>
>
>> On Apr 17, 2018, at 9:36 AM, Yafang Shao wrote:
>>
>> tcp_rcv_space_adjust is called every time data is copied to user space,
>> introducing a tcp tracepoint for it, which could show us when the packet is
>>
On Wed, Apr 18, 2018 at 7:44 AM, Alexei Starovoitov
wrote:
> On Mon, Apr 16, 2018 at 08:43:31AM -0700, Eric Dumazet wrote:
>>
>>
>> On 04/16/2018 08:33 AM, Yafang Shao wrote:
>> > tcp_rcv_space_adjust is called every time data is copied to user space,
>> >
On Thu, Jun 18, 2020 at 8:37 PM Chris Down wrote:
>
> Yafang Shao writes:
> >On Thu, Jun 18, 2020 at 5:09 AM Chris Down wrote:
> >>
> >> Naresh Kamboju writes:
> >> >After this patch applied the reported issue got fixed.
> >>
> >>
From: Yafang Shao
A cgroup can have both memory protection and a memory limit to isolate
it from its siblings in both directions - for example, to prevent it
from being shrunk below 2G under high pressure from outside, but also
from growing beyond 4G under low pressure.
Commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim")
patch has effectively no overhead unless tracepoints are enabled at
> runtime. If tracepoints are enabled, there is a performance impact, but
> how much depends on exactly what e.g. the BPF program does.
>
> Signed-off-by: Axel Rasmussen
Acked-by: Yafang Shao
> ---
>
st keep it as-is.
Signed-off-by: Yafang Shao
---
include/linux/trace_events.h | 1 +
kernel/trace/trace.c | 1 +
kernel/trace/trace_events.c | 1 +
kernel/trace/trace_output.c | 2 +-
4 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/trace_events.h b/include/linu
On Tue, Oct 13, 2020 at 9:05 PM Steven Rostedt wrote:
>
> On Tue, 13 Oct 2020 13:54:54 +0800
> Yafang Shao wrote:
>
> > --- a/include/linux/trace_events.h
> > +++ b/include/linux/trace_events.h
> > @@ -67,6 +67,7 @@ struct trace_entry {
> > unsigned char
On Fri, May 29, 2020 at 12:41 AM Chris Down wrote:
>
> Naresh Kamboju writes:
> >On Thu, 28 May 2020 at 20:33, Michal Hocko wrote:
> >>
> >> On Fri 22-05-20 02:23:09, Naresh Kamboju wrote:
> >> > My apology !
> >> > As per the test results history this problem started happening from
> >> > Bad :
On Fri, May 8, 2020 at 9:38 PM Johannes Weiner wrote:
>
> On Fri, May 08, 2020 at 06:25:14AM -0700, Shakeel Butt wrote:
> > On Fri, May 8, 2020 at 3:34 AM Yafang Shao wrote:
> > >
> > > On Fri, May 8, 2020 at 4:49 AM Shakeel Butt wrote:
> > > >
>
xposing root cgroup's memory.stat? The reason is
> the benefit of having metrics exposing the activity that happens
> purely due to machine capacity rather than localized activity that
> happens due to the limits throughout the cgroup tree. Additionally
> there are userspace tools
ff-by: Chris Down
Suggested-by: Johannes Weiner
Acked-by: Johannes Weiner
Acked-by: Michal Hocko
Cc: Roman Gushchin
Signed-off-by: Yafang Shao
---
include/linux/memcontrol.h | 43 --
mm/memcontrol.c| 28 +++--
mm/vms
):
mm, memcg: Decouple e{low,min} state mutations from protection checks
Yafang Shao (1):
mm, memcg: Avoid stale protection values when cgroup is above
protection
include/linux/memcontrol.h | 85 --
mm/memcontrol.c| 36 +++-
mm
nd.
[han...@cmpxchg.org - large part of the changelog]
[mho...@suse.com - workaround explanation]
[ch...@chrisdown.name - retitle]
Fixes: 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim")
Signed-off-by: Yafang Shao
Acked-by: Michal Hocko
Acked-by: Johannes Weiner
Acked-
On Fri, May 22, 2020 at 7:01 PM Naresh Kamboju
wrote:
>
> On Tue, 5 May 2020 at 14:12, Yafang Shao wrote:
> >
> > From: Chris Down
> >
> > mem_cgroup_protected currently is both used to set effective low and min
> > and return a mem_cgroup_protection b
On Fri, May 22, 2020 at 11:52 PM Naresh Kamboju
wrote:
>
> On Fri, 22 May 2020 at 17:49, Yafang Shao wrote:
> >
> > On Fri, May 22, 2020 at 7:01 PM Naresh Kamboju
> > wrote:
> > >
> > > On Tue, 5 May 2020 at 14:12, Yafang Sha
On Sat, May 23, 2020 at 12:07 AM Chris Down wrote:
>
> Chris Down writes:
> >Yafang Shao writes:
> >>I will do it.
> >>If no one has objection to my proposal, I will send it tomorrow.
> >
> >If the fixup patch works, just send that. Otherwise, sure.
>
On Fri, Jun 19, 2020 at 6:43 AM Axel Rasmussen wrote:
>
> The goal is to be able to collect a latency histogram for contended
> mmap_lock acquisitions. This will be used to diagnose slowness observed
> in production workloads, as well as to measure the effect of upcoming
> mmap_lock optimizations
t;statistics.wait_start will be 0.
> > So it will make the (rq_of(cfs_rq)) - se->statistics.wait_start)
> > wrong. We need to avoid this scenario.
> >
> > Signed-off-by: jun qian
> > Signed-off-by: Yafang Shao
>
> This SoB chain isn't valid. Did Yafang
On Thu, Jul 9, 2020 at 2:26 PM Michal Hocko wrote:
>
> From: Michal Hocko
>
> The exported value includes oom_score_adj so the range is no [0, 1000]
> as described in the previous section but rather [0, 2000]. Mention that
> fact explicitly.
>
> Signed-off-by: Michal Hocko
> ---
> Documentation
On Thu, Jul 9, 2020 at 4:18 PM Michal Hocko wrote:
>
> On Thu 09-07-20 15:41:11, Yafang Shao wrote:
> > On Thu, Jul 9, 2020 at 2:26 PM Michal Hocko wrote:
> > >
> > > From: Michal Hocko
> > >
> > > The exported value includes oom_score_adj so the
On Thu, Jul 9, 2020 at 5:58 PM Michal Hocko wrote:
>
> On Thu 09-07-20 17:01:06, Yafang Shao wrote:
> > On Thu, Jul 9, 2020 at 4:18 PM Michal Hocko wrote:
> > >
> > > On Thu 09-07-20 15:41:11, Yafang Shao wrote:
> > > > On Thu, Jul 9, 2020 at 2:26 PM Mic
1ec96710
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=14a7700090
>
> The issue was bisected to:
>
> commit e642d9be463d02c735cd99a9a904063324ee03d6
> Author: Yafang Shao
> Date: Fri Jul 10 04:58:08 2020 +
>
> mm, oom: make the calculation of oom badness mo
On Thu, Jul 16, 2020 at 12:36 AM Shakeel Butt wrote:
>
> Hi Yafang,
>
> On Tue, Mar 31, 2020 at 3:05 AM Yafang Shao wrote:
> >
> > PSI gives us a powerful way to analyze memory pressure issues, but we can
> > make it more powerful with the help of tracepoint, kprobe
On Fri, Jul 10, 2020 at 9:07 PM Michal Hocko wrote:
>
> On Fri 10-07-20 14:58:54, Michal Hocko wrote:
> [...]
> > I will have a closer look. Is the full dmesg available somewhere?
>
> Ups, I have missed this:
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 2dd5a90f2f81..7f01835862f4 100644
>
.
[c...@lca.pw: reported an issue in the previous version]
[mho...@suse.com: fixed the issue reported by Cai]
[mho...@suse.com: add the comment in proc_oom_score()]
Signed-off-by: Yafang Shao
Acked-by: Michal Hocko
Cc: David Rientjes
Cc: Qian Cai
---
v2 -> v3:
- fix the type of variable
vmstat.
>
> Signed-off-by: Shakeel Butt
Acked-by: Yafang Shao
> ---
> mm/vmscan.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 5215840ee217..4167b0cc1784 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
On Mon, Jul 13, 2020 at 2:34 AM Naresh Kamboju
wrote:
>
> On Fri, 10 Jul 2020 at 21:28, Yafang Shao wrote:
> >
> > Recently we found an issue on our production environment that when memcg
> > oom is triggered the oom killer doesn't chose the process with largest
>
On Fri, Jul 17, 2020 at 1:04 AM Shakeel Butt wrote:
>
> On Wed, Jul 15, 2020 at 8:19 PM Yafang Shao wrote:
> >
> > On Thu, Jul 16, 2020 at 12:36 AM Shakeel Butt wrote:
> > >
> > > Hi Yafang,
> > >
> > > On Tue, Mar 31, 2020 at 3:05 AM Yafan
On Fri, Sep 18, 2020 at 2:13 AM Axel Rasmussen wrote:
>
> The goal of these tracepoints is to be able to debug lock contention
> issues. This lock is acquired on most (all?) mmap / munmap / page fault
> operations, so a multi-threaded process which does a lot of these can
> experience significant
On Mon, Sep 21, 2020 at 4:12 PM Michal Hocko wrote:
>
> On Mon 21-09-20 16:02:55, zangchun...@bytedance.com wrote:
> > From: Chunxin Zang
> >
> > In cgroup v1, we have the 'force_empty' interface. This is very
> > useful for userspace to actively release memory. But the cgroup
> > v2 does not.
>
On Mon, Sep 21, 2020 at 7:05 PM Michal Hocko wrote:
>
> On Mon 21-09-20 18:55:40, Yafang Shao wrote:
> > On Mon, Sep 21, 2020 at 4:12 PM Michal Hocko wrote:
> > >
> > > On Mon 21-09-20 16:02:55, zangchun...@bytedance.com wrote:
> > > > From: Chunxin
On Tue, Sep 22, 2020 at 12:53 AM Axel Rasmussen
wrote:
>
> On Sun, Sep 20, 2020 at 9:58 PM Yafang Shao wrote:
> >
> > On Fri, Sep 18, 2020 at 2:13 AM Axel Rasmussen
> > wrote:
> > >
> > > The goal of these tracepoints is to be able to debug lock cont
On Mon, Sep 21, 2020 at 7:36 PM Michal Hocko wrote:
>
> On Mon 21-09-20 19:23:01, Yafang Shao wrote:
> > On Mon, Sep 21, 2020 at 7:05 PM Michal Hocko wrote:
> > >
> > > On Mon 21-09-20 18:55:40, Yafang Shao wrote:
> > > > On Mon, S
On Thu, Sep 24, 2020 at 12:09 AM Steven Rostedt wrote:
>
> On Wed, 23 Sep 2020 18:04:17 +0800
> Yafang Shao wrote:
>
> > > What you can do, and what we have done is the following:
> > >
> > > (see include/linux/page_ref.h)
> > >
> > >
>
On Tue, Sep 22, 2020 at 3:27 PM Michal Hocko wrote:
>
> On Tue 22-09-20 12:20:52, Yafang Shao wrote:
> > On Mon, Sep 21, 2020 at 7:36 PM Michal Hocko wrote:
> > >
> > > On Mon 21-09-20 19:23:01, Yafang Shao wrote:
> > > > On Mon, S
On Wed, Sep 23, 2020 at 12:51 AM Steven Rostedt wrote:
>
> On Tue, 22 Sep 2020 12:09:19 +0800
> Yafang Shao wrote:
>
> > > > Are there any methods to avoid un-inlining these wrappers ?
> > > >
> > > > For example,
> >
On Wed, May 29, 2019 at 9:08 PM Tony Lu wrote:
>
> This removes '\n' from trace event class tcp_event_sk_skb to avoid
> redundant new blank line and make output compact.
>
> Signed-off-by: Tony Lu
Acked-by: Yafang Shao
> ---
> include/trace/events/tcp.h | 2 +-
will take effect in the next check interval.
For example, when the kernel is printing the hung task messages, the
user can't set it to 0 to stop the printing, but I don't think this will
happen in the real world. (If that happens, then sys_hung_task_warnings
must be protected by a lock)
Signed
On Thu, Jun 20, 2019 at 6:03 PM Tetsuo Handa
wrote:
>
> On 2019/06/20 14:55, Yafang Shao wrote:
> > When sys_hung_task_warnings reaches 0, the hang task messages will not
> > be reported any more.
>
> It is a common mistake that sys_hung_task_warnings is already 0 whe
On Thu, Jun 20, 2019 at 6:23 PM Tetsuo Handa
wrote:
>
> On 2019/06/20 19:10, Yafang Shao wrote:
> >>> With this patch, hung task warnings will be reset with
> >>> sys_hung_task_warnings setting in every check interval.
> >>
> >> Since it is uncommo
On Thu, Jun 18, 2020 at 5:09 AM Chris Down wrote:
>
> Naresh Kamboju writes:
> >After this patch applied the reported issue got fixed.
>
> Great! Thank you Naresh and Michal for helping to get to the bottom of this
> :-)
>
> I'll send out a new version tomorrow with the fixes applied and both of
On Wed, May 13, 2020 at 5:29 AM Johannes Weiner wrote:
>
> On Tue, Feb 11, 2020 at 12:55:07PM -0500, Johannes Weiner wrote:
> > The VFS inode shrinker is currently allowed to reclaim inodes with
> > populated page cache. As a result it can drop gigabytes of hot and
> > active page cache on the flo
On Fri, May 8, 2020 at 4:49 AM Shakeel Butt wrote:
>
> One way to measure the efficiency of memory reclaim is to look at the
> ratio (pgscan+pfrefill)/pgsteal. However at the moment these stats are
> not updated consistently at the system level and the ratio of these are
> not very meaningful. The
The TCPF_ macros depend on the definitions of the TCP_ macros,
so it is better to define them in terms of the TCP_ macros.
Signed-off-by: Yafang Shao
---
include/net/tcp_states.h | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/include/net/tcp_states.h b/include/net
sk is already allocated in inet_create/inet6_create, hence when
BPF_CGROUP_RUN_PROG_INET_SOCK is executed sk will never be NULL.
The logic is as below:

    sk = sk_alloc();
    if (!sk)
        goto out;

    BPF_CGROUP_RUN_PROG_INET_SOCK(sk);
Signed-off-by: Yafang Shao
On Sun, Dec 31, 2017 at 6:33 AM, Brendan Gregg
wrote:
> On Tue, Dec 19, 2017 at 7:12 PM, Yafang Shao wrote:
>> As sk_state is a common field for struct sock, so the state
>> transition tracepoint should not be a TCP specific feature.
>> Currently it traces all AF_INET s
wap is
off.
So when we mount tmpfs in a memcg, the default size should be limited by
the memcg memory.limit.
Signed-off-by: Yafang Shao
---
include/linux/memcontrol.h | 1 +
mm/memcontrol.c| 2 +-
mm/shmem.c | 20 +++-
3 files changed, 21 insert
2017-11-17 12:43 GMT+08:00 Shakeel Butt :
> On Thu, Nov 16, 2017 at 7:09 PM, Yafang Shao wrote:
>> Currently the default tmpfs size is totalram_pages / 2 if mount tmpfs
>> without "-o size=XXX".
>> When we mount tmpfs in a container(i.e. docker), it is also
>>
2017-11-17 23:55 GMT+08:00 Roman Gushchin :
> On Thu, Nov 16, 2017 at 08:43:17PM -0800, Shakeel Butt wrote:
>> On Thu, Nov 16, 2017 at 7:09 PM, Yafang Shao wrote:
>> > Currently the default tmpfs size is totalram_pages / 2 if mount tmpfs
>> > without "-o size=XXX
2017-11-18 0:45 GMT+08:00 Roman Gushchin :
> On Sat, Nov 18, 2017 at 12:20:40AM +0800, Yafang Shao wrote:
>> 2017-11-17 23:55 GMT+08:00 Roman Gushchin :
>> > On Thu, Nov 16, 2017 at 08:43:17PM -0800, Shakeel Butt wrote:
>> >> On Thu, Nov 16, 2017 at 7:09 PM, Yafang
2017-12-05 3:28 GMT+08:00 Marcelo Ricardo Leitner :
> On Sat, Dec 02, 2017 at 09:36:41AM +0000, Yafang Shao wrote:
>> The TCP/IP transition from TCP_LISTEN to TCP_SYN_RECV and some other
>> transitions are not traced with tcp_set_state tracepoint.
>>
>> In order to tr
);
When doing a TCP/IP state transition, we should use these two helpers or
tcp_set_state() rather than assigning a value to sk_state directly.
Signed-off-by: Yafang Shao
Acked-by: Song Liu
---
v2->v3: Per suggestion from Marcelo Ricardo Leitner, inverting __
to sk_state_st
With changes in inet_ files, DCCP state transitions are traced with
inet_sock_set_state tracepoint.
Signed-off-by: Yafang Shao
---
net/dccp/proto.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/dccp/proto.c b/net/dccp/proto.c
index 9d43c1f..7a75a1d 100644
--- a/net
{ 6, "TCP_TIME_WAIT" },
{ 7, "TCP_CLOSE" },
{ 8, "TCP_CLOSE_WAIT" },
{ 9, "TCP_LAST_ACK" },
{ 10, "TCP_LISTEN" },
{ 11, "TCP_CLOSING" },
{ 12, "TCP_NEW_SYN_RECV" })
Signed-off-by: Steven Rostedt (VMware)
Acked
r protocol should be traced, I will modify the
code to trace it.
I just want to keep the code simple and not output useless information.
Steven Rostedt (VMware) (1):
tcp: Export to userspace the TCP state names for the trace events
Yafang Shao (4):
net: tracepoint: replace tcp_set_state trace
: Yafang Shao
---
net/sctp/endpointola.c | 2 +-
net/sctp/sm_sideeffect.c | 4 ++--
net/sctp/socket.c| 12 ++--
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
index ee1e601..8b31468 100644
--- a/net/sctp
included in other header files,
so they are defined in sock.c.
A protocol such as SCTP may be compiled as a module (ko), hence export
inet_sk_set_state().
Signed-off-by: Yafang Shao
---
include/net/inet_sock.h | 2 +
include/trace/events/sock.h | 107
sk_state_load is only used by AF_INET/AF_INET6, so rename it to
inet_sk_state_load and move it into inet_sock.h.
sk_state_store is removed as it is not used any more.
Signed-off-by: Yafang Shao
---
include/net/inet_sock.h | 25 -
include/net/sock.h
the status of all processes, that's a little
expensive.
Hence export the nr_running.
Signed-off-by: Yafang Shao
---
kernel/sched/core.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 644fa2e..926575a 100644
--- a/kernel/sched/core.c
On Wed, Jan 3, 2018 at 3:46 AM, Brendan Gregg wrote:
> On Sat, Dec 30, 2017 at 7:06 PM, Yafang Shao wrote:
>> On Sun, Dec 31, 2017 at 6:33 AM, Brendan Gregg
>> wrote:
>>> On Tue, Dec 19, 2017 at 7:12 PM, Yafang Shao wrote:
>>>> As sk_state is a common
patch is included in this series.
Steven Rostedt:
tcp: Export to userspace the TCP state names for the trace events
Yafang Shao (3):
net: tracepoint: using sock_set_state tracepoint to trace SCTP state
transition
net: tracepoint: replace tcp_set_state tracepoint with sock_set_
With changes in inet_ files, DCCP state transitions are traced with
sock_set_state tracepoint.
Signed-off-by: Yafang Shao
---
net/dccp/proto.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/dccp/proto.c b/net/dccp/proto.c
index 9d43c1f..2874faf 100644
--- a/net/dccp
{ 6, "TCP_TIME_WAIT" },
{ 7, "TCP_CLOSE" },
{ 8, "TCP_CLOSE_WAIT" },
{ 9, "TCP_LAST_ACK" },
{ 10, "TCP_LISTEN" },
{ 11, "TCP_CLOSING" },
{ 12, "TCP_NEW_SYN_RECV" })
Signed-off-by: Steven Rostedt (VMware)
Acked
With changes in inet_ files, SCTP state transitions are traced with
the sock_set_state tracepoint.
Signed-off-by: Yafang Shao
---
net/sctp/endpointola.c | 2 +-
net/sctp/sm_sideeffect.c | 4 ++--
net/sctp/socket.c| 12 ++--
3 files changed, 9 insertions(+), 9 deletions(-)
diff
: Yafang Shao
---
include/net/sock.h | 15 +-
include/trace/events/sock.h | 106
include/trace/events/tcp.h | 91 --
net/core/sock.c | 13 +
net/ipv4/inet_connection_sock.c | 4
2017-12-16 1:43 GMT+08:00 David Miller :
>
> Your Subject line here is incomplete, "replace tcp_set_state
> tracepoint with" what?
Oh Sorry.
The subject should be
"replace tcp_set_state tracepoint with sock_set_state tracepoint"
Thanks
Yafang
{ 6, "TCP_TIME_WAIT" },
{ 7, "TCP_CLOSE" },
{ 8, "TCP_CLOSE_WAIT" },
{ 9, "TCP_LAST_ACK" },
{ 10, "TCP_LISTEN" },
{ 11, "TCP_CLOSING" },
{ 12, "TCP_NEW_SYN_RECV" })
Signed-off-by: Steven Rostedt (VMware)
Acked
n's patch is included in this series.
Steven Rostedt (1):
tcp: Export to userspace the TCP state names for the trace events
Yafang Shao (3):
net: tracepoint: replace tcp_set_state tracepoint with sock_set_state
tracepoint
net: tracepoint: using sock_set_state tracepoint to trace SC
: Yafang Shao
---
include/net/sock.h | 15 +-
include/trace/events/sock.h | 106
include/trace/events/tcp.h | 91 --
net/core/sock.c | 13 +
net/ipv4/inet_connection_sock.c | 4
With changes in inet_ files, DCCP state transitions are traced with
sock_set_state tracepoint.
Signed-off-by: Yafang Shao
---
net/dccp/proto.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/dccp/proto.c b/net/dccp/proto.c
index 9d43c1f..2874faf 100644
--- a/net/dccp
With changes in inet_ files, SCTP state transitions are traced with
the sock_set_state tracepoint.
Signed-off-by: Yafang Shao
---
net/sctp/endpointola.c | 2 +-
net/sctp/sm_sideeffect.c | 4 ++--
net/sctp/socket.c| 12 ++--
3 files changed, 9 insertions(+), 9 deletions(-)
diff
2017-12-16 6:47 GMT+08:00 Song Liu :
>
>> On Dec 15, 2017, at 9:56 AM, Yafang Shao wrote:
>>
>> As sk_state is a common field for struct sock, so the state
>> transition should not be a TCP specific feature.
>> So I rename tcp_set_state tracepoint to sock_set_st
2017-12-13 9:19 GMT+08:00 Song Liu :
>
>> On Dec 10, 2017, at 7:31 AM, Yafang Shao wrote:
>>
>> As sk_state is a common field for struct sock, so the state
>> transition should not be a TCP specific feature.
>> So I rename tcp_set_state tracepoint to sock_set_st
2017-08-10 0:42 GMT+08:00 Peter Zijlstra :
> On Wed, Aug 09, 2017 at 05:26:14PM +0800, Yafang Shao wrote:
>> 2017-08-09 17:09 GMT+08:00 Peter Zijlstra :
>> > On Wed, Aug 09, 2017 at 04:01:49PM +0800, Yafang Shao wrote:
>> >> 2017-08-09 15:43 GMT+08:00 Peter Zijlstra
tio successfully. This behavior may mislead us.
So we should do this sanity check at the beginning.
Signed-off-by: Yafang Shao
---
Documentation/sysctl/vm.txt | 5 +++
mm/page-writeback.c | 84 -
2 files changed, 81 insertions(+), 8 deletions(-)
diff --gi
2017-09-18 18:22 GMT+08:00 Jan Kara :
> On Mon 18-09-17 01:39:28, Yafang Shao wrote:
>> we can find the logic in domain_dirty_limits() that
>> when dirty bg_thresh is bigger than dirty thresh,
>> bg_thresh will be set as thresh * 1 / 2.
>> if (bg_thresh >= thre
ess successfully. This behavior may mislead us.
We'd better do this validity check at the beginning.
Signed-off-by: Yafang Shao
---
Documentation/sysctl/vm.txt | 5 +++
mm/page-writeback.c | 86 -
2 files changed, 83 insertions(+), 8 deletions(
2017-09-19 16:35 GMT+08:00 Jan Kara :
> On Tue 19-09-17 06:53:00, Yafang Shao wrote:
>> we can find the logic in domain_dirty_limits() that
>> when dirty bg_thresh is bigger than dirty thresh,
>> bg_thresh will be set as thresh * 1 / 2.
>> if (bg_thresh >= thre
ess successfully. This behavior may mislead us.
We'd better do this validity check at the beginning.
Signed-off-by: Yafang Shao
---
Documentation/sysctl/vm.txt | 6 +++
mm/page-writeback.c | 92 +
2 files changed, 90 insertions(+), 8 deletions(
2017-10-10 6:42 GMT+08:00 Andrew Morton :
> On Sat, 7 Oct 2017 06:58:04 +0800 Yafang Shao wrote:
>
>> After disable periodic writeback by writing 0 to
>> dirty_writeback_centisecs, the handler wb_workfn() will not be
>> entered again until the dirty background limit reach
2017-10-10 16:48 GMT+08:00 Jan Kara :
> On Tue 10-10-17 16:00:29, Yafang Shao wrote:
>> 2017-10-10 6:42 GMT+08:00 Andrew Morton :
>> > On Sat, 7 Oct 2017 06:58:04 +0800 Yafang Shao
>> > wrote:
>> >
>> >> After disable periodic writeback by w
2017-10-10 17:33 GMT+08:00 Jan Kara :
> On Tue 10-10-17 17:14:48, Yafang Shao wrote:
>> 2017-10-10 16:48 GMT+08:00 Jan Kara :
>> > On Tue 10-10-17 16:00:29, Yafang Shao wrote:
>> >> 2017-10-10 6:42 GMT+08:00 Andrew Morton :
>> >> > On Sat, 7 Oct 20
2017-09-26 18:25 GMT+08:00 Michal Hocko :
> On Wed 20-09-17 06:43:35, Yafang Shao wrote:
>> we can find the logic in domain_dirty_limits() that
>> when dirty bg_thresh is bigger than dirty thresh,
>> bg_thresh will be set as thresh * 1 / 2.
>>
2017-09-26 19:26 GMT+08:00 Michal Hocko :
> On Tue 26-09-17 19:06:37, Yafang Shao wrote:
>> 2017-09-26 18:25 GMT+08:00 Michal Hocko :
>> > On Wed 20-09-17 06:43:35, Yafang Shao wrote:
>> >> we can find the logic in domain_dirty_limits() that
>> >> when d
2017-09-20 23:33 GMT+08:00 Jan Kara :
> On Tue 19-09-17 19:48:00, Yafang Shao wrote:
>> 2017-09-19 16:35 GMT+08:00 Jan Kara :
>> > On Tue 19-09-17 06:53:00, Yafang Shao wrote:
>> >> + if (vm_dirty_bytes == 0 && vm_dirty_ratio == 0 &&
>
ess successfully. This behavior may mislead us.
We'd better do this validity check at the beginning.
Signed-off-by: Yafang Shao
---
Documentation/sysctl/vm.txt | 6
kernel/sysctl.c | 4 +--
mm/page-writeback.c | 80 -
3 files cha
an is triggered.
- When the tunable was set to one hour and is reset to one second, the
new setting will not take effect for up to one hour.
Kicking the flusher threads immediately fixes these issues.
Signed-off-by: Yafang Shao
---
mm/page-writeback.c | 19 +--
1 file change
usher threads immediately fixes it.
Cc: Jens Axboe
Cc: Jan Kara
Cc: Andrew Morton
Signed-off-by: Yafang Shao
---
mm/page-writeback.c | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3969e69..768fe4e 100644
--- a/mm
abled by writing a non-zero
value to dirty_writeback_centisecs.
As it can be disabled via sysctl, it should be possible to enable it
via sysctl as well.
Signed-off-by: Yafang Shao
---
mm/page-writeback.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/page-writeback.c b/mm
2017-10-09 17:56 GMT+08:00 Jan Kara :
> On Sat 07-10-17 06:58:04, Yafang Shao wrote:
>> After disable periodic writeback by writing 0 to
>> dirty_writeback_centisecs, the handler wb_workfn() will not be
>> entered again until the dirty background limit reaches or
>> syn
2017-10-09 19:03 GMT+08:00 Jan Kara :
> On Mon 09-10-17 18:44:23, Yafang Shao wrote:
>> 2017-10-09 17:56 GMT+08:00 Jan Kara :
>> > On Sat 07-10-17 06:58:04, Yafang Shao wrote:
>> >> After disable periodic writeback by writing 0 to
>> >> dirty_writeback_c