On Mon, Jun 04, 2018 at 11:08:35AM +0200, Daniel Borkmann wrote:
> On 06/04/2018 12:59 AM, Yonghong Song wrote:
> > bpf has been used extensively for tracing. For example, bcc
> > contains an almost full set of bpf-based tools to trace kernel
> > and user functions/events. Most tracing tools currently either
> > filter based on pid or operate system-wide.
> > 
> > Containers are used quite extensively in industry, and cgroups
> > are often used alongside them to provide resource isolation
> > and protection. Several processes may run inside the same
> > container. It is often desirable to get container-level tracing
> > results as well, e.g. syscall counts, function counts, I/O
> > activity, etc.
> > 
> > This patch implements a new helper, bpf_get_current_cgroup_id(),
> > which returns the id of the cgroup within which the current
> > task is running.
> > 
> > A later patch will provide an example showing that
> > userspace can get the same cgroup id, so it can
> > configure a filter or policy in the bpf program based
> > on the task's cgroup id.
> > 
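Illustrative aside, not part of the patch: the id returned here is the
kernfs node id of the task's cgroup on the default hierarchy, and userspace
can obtain the matching value for a cgroup v2 directory from the file handle
returned by name_to_handle_at() (presumably what the later patch's example
does). A minimal userspace sketch; the helper name and the cgroup path below
are made up for illustration:

#define _GNU_SOURCE
#include <fcntl.h>	/* name_to_handle_at(), MAX_HANDLE_SZ */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical helper: read the cgroup id of a cgroup v2 directory */
static uint64_t get_cgroup_id(const char *path)
{
	struct file_handle *fh;
	uint64_t cgid = 0;
	int mount_id;

	fh = calloc(1, sizeof(*fh) + MAX_HANDLE_SZ);
	if (!fh)
		return 0;
	fh->handle_bytes = MAX_HANDLE_SZ;
	if (!name_to_handle_at(AT_FDCWD, path, fh, &mount_id, 0) &&
	    fh->handle_bytes == sizeof(cgid))
		/* cgroup2 encodes the kernfs id in the file handle */
		memcpy(&cgid, fh->f_handle, sizeof(cgid));
	free(fh);
	return cgid;
}

int main(void)
{
	/* the path is an assumption; adjust to the actual cgroup v2 mount */
	printf("%llu\n", (unsigned long long)
	       get_cgroup_id("/sys/fs/cgroup/unified/my_container"));
	return 0;
}
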
> > The helper is currently implemented for tracing. It can
> > be added to other program types as well when needed.
> > 
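Also as an aside: a minimal sketch of the tracing-side usage, i.e. a
tracepoint program that only acts when the current task runs in a chosen
cgroup. The map name, the attach point and the bpf_get_current_cgroup_id()
wrapper in bpf_helpers.h are assumptions for the example; userspace would
store the target id at key 0, e.g. with a routine like the one sketched
above:

#include <linux/bpf.h>
#include "bpf_helpers.h"

/* hypothetical map: userspace stores the cgroup id to trace at key 0 */
struct bpf_map_def SEC("maps") target_cgroup = {
	.type		= BPF_MAP_TYPE_ARRAY,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(__u64),
	.max_entries	= 1,
};

SEC("tracepoint/syscalls/sys_enter_openat")
int trace_openat(void *ctx)
{
	__u32 key = 0;
	__u64 *want = bpf_map_lookup_elem(&target_cgroup, &key);

	if (!want || *want != bpf_get_current_cgroup_id())
		return 0;	/* some other cgroup, ignore */

	/* ... count the event, emit a sample, etc. ... */
	return 0;
}

char _license[] SEC("license") = "GPL";
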
> > Acked-by: Alexei Starovoitov <a...@kernel.org>
> > Signed-off-by: Yonghong Song <y...@fb.com>
> > ---
> >  include/linux/bpf.h      |  1 +
> >  include/uapi/linux/bpf.h |  8 +++++++-
> >  kernel/bpf/core.c        |  1 +
> >  kernel/bpf/helpers.c     | 15 +++++++++++++++
> >  kernel/trace/bpf_trace.c |  2 ++
> >  5 files changed, 26 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index bbe2974..995c3b1 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -746,6 +746,7 @@ extern const struct bpf_func_proto bpf_get_stackid_proto;
> >  extern const struct bpf_func_proto bpf_get_stack_proto;
> >  extern const struct bpf_func_proto bpf_sock_map_update_proto;
> >  extern const struct bpf_func_proto bpf_sock_hash_update_proto;
> > +extern const struct bpf_func_proto bpf_get_current_cgroup_id_proto;
> >  
> >  /* Shared helpers among cBPF and eBPF. */
> >  void bpf_user_rnd_init_once(void);
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index f0b6608..18712b0 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -2070,6 +2070,11 @@ union bpf_attr {
> >   *                 **CONFIG_SOCK_CGROUP_DATA** configuration option.
> >   *         Return
> >   *                 The id is returned or 0 in case the id could not be retrieved.
> > + *
> > + * u64 bpf_get_current_cgroup_id(void)
> > + *         Return
> > + *                 A 64-bit integer containing the current cgroup id based
> > + *                 on the cgroup within which the current task is running.
> >   */
> >  #define __BPF_FUNC_MAPPER(FN)              \
> >     FN(unspec),                     \
> > @@ -2151,7 +2156,8 @@ union bpf_attr {
> >     FN(lwt_seg6_action),            \
> >     FN(rc_repeat),                  \
> >     FN(rc_keydown),                 \
> > -   FN(skb_cgroup_id),
> > +   FN(skb_cgroup_id),              \
> > +   FN(get_current_cgroup_id),
> >  
> >  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> >   * function eBPF program intends to call
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 527587d..9f14937 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -1765,6 +1765,7 @@ const struct bpf_func_proto bpf_get_current_uid_gid_proto __weak;
> >  const struct bpf_func_proto bpf_get_current_comm_proto __weak;
> >  const struct bpf_func_proto bpf_sock_map_update_proto __weak;
> >  const struct bpf_func_proto bpf_sock_hash_update_proto __weak;
> > +const struct bpf_func_proto bpf_get_current_cgroup_id_proto __weak;
> >  
> >  const struct bpf_func_proto * __weak bpf_get_trace_printk_proto(void)
> >  {
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 3d24e23..73065e2 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -179,3 +179,18 @@ const struct bpf_func_proto bpf_get_current_comm_proto = {
> >     .arg1_type      = ARG_PTR_TO_UNINIT_MEM,
> >     .arg2_type      = ARG_CONST_SIZE,
> >  };
> > +
> > +#ifdef CONFIG_CGROUPS
> > +BPF_CALL_0(bpf_get_current_cgroup_id)
> > +{
> > +   struct cgroup *cgrp = task_dfl_cgroup(current);
> > +
> > +   return cgrp->kn->id.id;
> > +}
> > +
> > +const struct bpf_func_proto bpf_get_current_cgroup_id_proto = {
> > +   .func           = bpf_get_current_cgroup_id,
> > +   .gpl_only       = false,
> > +   .ret_type       = RET_INTEGER,
> > +};
> > +#endif
> 
> Nit: why not move this function directly to bpf_trace.c?

my preference would be to keep it in helpers.c as-is.
imo bpf_trace.c is only for things that depend on kernel/trace/

> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 752992c..e2ab5b7 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -564,6 +564,8 @@ tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >             return &bpf_get_prandom_u32_proto;
> >     case BPF_FUNC_probe_read_str:
> >             return &bpf_probe_read_str_proto;
> > +   case BPF_FUNC_get_current_cgroup_id:
> > +           return &bpf_get_current_cgroup_id_proto;
> 
> When you have !CONFIG_CGROUPS, this relies on the weak definition of
> bpf_get_current_cgroup_id_proto, which I would think bails out at the latest
> in fixup_bpf_calls() with 'kernel subsystem misconfigured func' due to func
> being NULL.
> 
> Can't we just do the #ifdef CONFIG_CGROUPS around the
> BPF_FUNC_get_current_cgroup_id case instead? Then we bail out normally with
> 'unknown func' when cgroups are not configured?

good idea.
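
For concreteness, the respin would then presumably touch tracing_func_proto()
along these lines (sketch only):

	case BPF_FUNC_probe_read_str:
		return &bpf_probe_read_str_proto;
#ifdef CONFIG_CGROUPS
	case BPF_FUNC_get_current_cgroup_id:
		return &bpf_get_current_cgroup_id_proto;
#endif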
