On 03/14/2018 07:11 PM, Alexei Starovoitov wrote:
> On Wed, Mar 14, 2018 at 03:37:01PM +0100, Daniel Borkmann wrote:
>>> --- a/include/uapi/linux/bpf.h
>>> +++ b/include/uapi/linux/bpf.h
>>> @@ -133,6 +133,8 @@ enum bpf_prog_type {
>>>     BPF_PROG_TYPE_SOCK_OPS,
>>>     BPF_PROG_TYPE_SK_SKB,
>>>     BPF_PROG_TYPE_CGROUP_DEVICE,
>>> +   BPF_PROG_TYPE_CGROUP_INET4_BIND,
>>> +   BPF_PROG_TYPE_CGROUP_INET6_BIND,
>>
>> Could those all be merged into BPF_PROG_TYPE_SOCK_OPS? I'm slowly getting
>> confused by the many sock_*/sk_* prog types we have. The attach hook could
>> still be something like BPF_CGROUP_BIND/BPF_CGROUP_CONNECT. Potentially
>> storing some prog-type specific void *private_data in prog's aux during
>> verification could be a way (similarly as you mention) which can later be
>> retrieved at attach time to reject with -ENOTSUPP or such.
> 
> that's exactly what I mentioned in the cover letter,
> but we need to extend the attach cmd with a verifier-like log_buf+log_size,
> since a simple ENOTSUPP will be impossible to debug.

Hmm, adding a verifier-like log_buf + log_size feels like a bit of a kludge just
for this one case, but it's the usual problem that a bare standard error code
leaves you looking into a crystal ball to figure out where it came from. I see
only a couple of alternatives: i) a distinct error code like ENAVAIL for such a
mismatch; ii) telling the verifier upfront where the prog is going to be
attached - same as we do with the dev for offloading, or as you did by splitting
the prog types, or via some notion of sub-prog types; or iii) a query API that
returns the possible/compatible attach types for a loaded prog, e.g. via
bpf_prog_get_info_by_fd(), which the loader can precheck or consult when an
error occurs. None of these are really nice, though.
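To make option i) concrete: the attach-time check could compare what the
verifier recorded at load time against the requested attach point and fail
with a distinct error code. A purely hypothetical userspace sketch - the
enum values, the expected_attach_type field and sock_addr_attach_check() are
made-up names, not actual kernel code:

```c
#include <errno.h>

/* Hypothetical attach types for the sock_addr-style hooks discussed above. */
enum sock_addr_attach_type {
	ATTACH_CGROUP_INET4_BIND,
	ATTACH_CGROUP_INET6_BIND,
	ATTACH_CGROUP_INET4_CONNECT,
};

/* Stand-in for prog->aux, where the verifier could record the intended
 * attach point at load time. */
struct prog_aux {
	enum sock_addr_attach_type expected_attach_type;
};

/* At BPF_PROG_ATTACH time, reject a mismatch with -ENAVAIL rather than
 * the generic -ENOTSUPP, so the loader can tell "prog incompatible with
 * this attach point" apart from other failures. */
static int sock_addr_attach_check(const struct prog_aux *aux,
				  enum sock_addr_attach_type requested)
{
	if (aux->expected_attach_type != requested)
		return -ENAVAIL;
	return 0;
}
```

The point of the sketch is only that a dedicated errno makes the mismatch
machine-distinguishable without any log plumbing.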

Making the verifier-like log_buf + log_size generic - that is, not tying it to
BPF_PROG_ATTACH specifically - might be yet another option: you'd have a new
BPF_GET_ERROR command that picks up the log for the last failed bpf(2) cmd.
That would either require adding a BPF-related pointer to the task struct for
this (and any future BPF feature needing it), which is maybe not really an
option; or some sort of bpf cmd to configure and obtain an fd for an error
queue/log once, from which the details could then be retrieved generically for
all cmds.
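The error-queue/log idea above can be modeled in a few lines of userspace C.
This is not a real bpf(2) interface - the err_log object and attach_prog()
are invented for illustration - it only shows the shape: a failing operation
records a human-readable reason that the caller can drain afterwards,
instead of returning a bare errno:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for the per-fd error log the hypothetical cmd would expose. */
struct err_log {
	char buf[256];
};

static void err_log_set(struct err_log *log, const char *msg)
{
	snprintf(log->buf, sizeof(log->buf), "%s", msg);
}

/* Toy attach operation: on failure, record why, then return an error. */
static int attach_prog(struct err_log *log, int prog_type, int attach_type)
{
	if (prog_type != attach_type) {
		err_log_set(log, "prog type incompatible with attach point");
		return -1;
	}
	return 0;
}
```

The caller would read log->buf only after a failed call, which is what makes
this debuggable compared to a plain error code.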

[...]
>>> +struct bpf_sock_addr {
>>> +   __u32 user_family;      /* Allows 4-byte read, but no write. */
>>> +   __u32 user_ip4;         /* Allows 1,2,4-byte read and 4-byte write.
>>> +                            * Stored in network byte order.
>>> +                            */
>>> +   __u32 user_ip6[4];      /* Allows 1,2,4-byte read and 4-byte write.
>>> +                            * Stored in network byte order.
>>> +                            */
>>> +   __u32 user_port;        /* Allows 4-byte read and write.
>>> +                            * Stored in network byte order
>>> +                            */
>>> +   __u32 family;           /* Allows 4-byte read, but no write */
>>> +   __u32 type;             /* Allows 4-byte read, but no write */
>>> +   __u32 protocol;         /* Allows 4-byte read, but no write */
>>
>> I recall bind to prefix came up from time to time in the BPF context, in the
>> sense of letting the app itself be more flexible in binding to a BPF prog. Do
>> you also see the app being able to add a BPF prog into the array itself?
> 
> I'm not following. In this case the container management framework
> will attach bpf progs to cgroups, and apps inside the cgroups will
> get their bind/connect calls overridden when necessary.
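On the byte-order comments in the quoted bpf_sock_addr struct: user_port is a
__u32 carrying a 16-bit port in network byte order, same convention as struct
sockaddr_in. A minimal plain-C userspace sketch (not a BPF program; the
helper names are made up, only the byte-order convention is from the patch):

```c
#include <arpa/inet.h>
#include <stdint.h>

/* user_port holds the port in network byte order in the low 16 bits of a
 * __u32; convert to host order when reading. */
static uint16_t read_user_port(uint32_t user_port)
{
	return ntohs((uint16_t)user_port);
}

/* Writing goes the other way: host order in, network byte order out. */
static uint32_t write_user_port(uint16_t host_port)
{
	return (uint32_t)htons(host_port);
}
```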

Was mostly just wondering whether it could also cover the use case that was
brought up from time to time, e.g.:

  https://www.mail-archive.com/netdev@vger.kernel.org/msg100914.html

Seems like that would potentially sit on top of this, plus having an option to
attach from within the app instead of from the orchestrator.
