From: Eric Dumazet <eric.duma...@gmail.com>
Date: Sat, 23 Sep 2017 12:39:12 -0700

> From: Eric Dumazet <eduma...@google.com>
> 
> As measured in my prior patch ("sch_netem: faster rb tree removal"),
> rbtree_postorder_for_each_entry_safe() is nice looking but much slower
> than using rb_next() directly, except when the tree is small enough
> to fit in CPU caches (then the cost is the same).
> 
> Also note that there is no increase in text size:
> $ size net/core/skbuff.o.before net/core/skbuff.o
>    text          data     bss     dec     hex filename
>   40711          1298       0   42009    a419 net/core/skbuff.o.before
>   40711          1298       0   42009    a419 net/core/skbuff.o
> 

Applied.
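For context, the kind of conversion the quoted changelog describes, replacing rbtree_postorder_for_each_entry_safe() with an explicit rb_next() walk when purging an skb rbtree, might look like the following sketch. This is kernel-internal code (not standalone): the function and field names (skb_rbtree_purge, rbnode) are assumptions based on the kernel's rbtree API and the patch subject, not a verbatim copy of the applied patch.

```c
/* Sketch: purge all skbs from an rbtree using rb_first()/rb_next()
 * instead of rbtree_postorder_for_each_entry_safe().
 */
void skb_rbtree_purge(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);

	while (p) {
		/* Assumed: sk_buff embeds its rb_node as ->rbnode */
		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);

		/* Advance before erasing, so the cursor stays valid */
		p = rb_next(p);
		rb_erase(&skb->rbnode, root);
		kfree_skb(skb);
	}
}
```

The key point from the measurements above: rb_next() follows the tree's in-order links directly, which turns out to be faster than the postorder macro on trees too large for the CPU caches, while compiling to the same text size.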
