> On Oct 28, 2020, at 5:58 PM, Andrii Nakryiko <and...@kernel.org> wrote:
> 
> Add support for deduplicating split BTF. When deduplicating split BTF, base
> BTF is considered to be immutable and can't be modified or adjusted. 99% of
> the BTF deduplication logic is left intact (modulo some type numbering
> adjustments). There are only two differences.
> 
> First, each type in base BTF gets hashed (except VAR and DATASEC, of course,
> as those are always considered self-canonical instances) and added to the
> table of canonical candidates. Hashing is a shallow, fast operation, so it
> mostly eliminates the overhead of having the entire base BTF be a part of
> BTF dedup.
> 
> The second difference is critical and subtle. While deduplicating split BTF
> types, it is possible to discover that one of the immutable base BTF
> BTF_KIND_FWD types can and should be resolved to a full STRUCT/UNION type
> from the split BTF part. This, obviously, can't happen, because we can't
> modify the base BTF types anymore. Because of that, any type in split BTF
> that directly or indirectly references that would-be-resolved FWD type can't
> be considered equivalent to the corresponding canonical type in base BTF, as
> that would result in a loss of type resolution information. In such a case,
> split BTF types will be deduplicated separately and will cause some
> duplication of type information, which is unavoidable.
> 
> With those two changes, the rest of the algorithm manages to deduplicate
> split BTF correctly, pointing all the duplicates to their canonical
> counterparts in base BTF, while also deduplicating whatever unique types
> are present in the split BTF on their own.
> 
> Also, theoretically, split BTF could end up with either an empty type
> section or an empty string section after deduplication. This is handled
> correctly by libbpf in one of the previous patches in the series.
> 
> Signed-off-by: Andrii Nakryiko <and...@kernel.org>

Acked-by: Song Liu <songliubrav...@fb.com>

With some nits:

> ---

[...]

> 
>       /* remap string offsets */
>       err = btf_for_each_str_off(d, strs_dedup_remap_str_off, d);
> @@ -3553,6 +3582,63 @@ static bool btf_compat_fnproto(struct btf_type *t1, struct btf_type *t2)
>       return true;
> }
> 

An overview comment about btf_dedup_prep() would be great.

> +static int btf_dedup_prep(struct btf_dedup *d)
> +{
> +     struct btf_type *t;
> +     int type_id;
> +     long h;
> +
> +     if (!d->btf->base_btf)
> +             return 0;
> +
> +     for (type_id = 1; type_id < d->btf->start_id; type_id++)
> +     {

Move "{" to previous line? 

> +             t = btf_type_by_id(d->btf, type_id);
> +
> +             /* all base BTF types are self-canonical by definition */
> +             d->map[type_id] = type_id;
> +
> +             switch (btf_kind(t)) {
> +             case BTF_KIND_VAR:
> +             case BTF_KIND_DATASEC:
> +                     /* VAR and DATASEC are never hash/deduplicated */
> +                     continue;

[...]

>       /* we are going to reuse hypot_map to store compaction remapping */
>       d->hypot_map[0] = 0;
> -     for (i = 1; i <= d->btf->nr_types; i++)
> -             d->hypot_map[i] = BTF_UNPROCESSED_ID;
> +     /* base BTF types are not renumbered */
> +     for (id = 1; id < d->btf->start_id; id++)
> +             d->hypot_map[id] = id;
> +     for (i = 0, id = d->btf->start_id; i < d->btf->nr_types; i++, id++)
> +             d->hypot_map[id] = BTF_UNPROCESSED_ID;

We don't really need i in the loop; shall we just do

        for (id = d->btf->start_id; id < d->btf->start_id + d->btf->nr_types; id++)

?

> 
>       p = d->btf->types_data;
> 
> -     for (i = 1; i <= d->btf->nr_types; i++) {
> -             if (d->map[i] != i)
> +     for (i = 0, id = d->btf->start_id; i < d->btf->nr_types; i++, id++) {

ditto

> +             if (d->map[id] != id)
>                       continue;
> 
[...]
