Saeed Mahameed <sa...@kernel.org> wrote:

>On Tue, 2021-01-12 at 16:37 +0200, Vladimir Oltean wrote:
>> On Mon, Jan 11, 2021 at 03:38:49PM -0800, Saeed Mahameed wrote:
>> > GFP_ATOMIC is a little bit aggressive especially when user daemons
>> > are periodically reading stats. This can be avoided.
>> >
>> > You can pre-allocate with GFP_KERNEL an array with an "approximate"
>> > size, then fill the array up with whatever slaves the bond has at
>> > that moment; num_of_slaves can be less, equal or more than the array
>> > you just allocated, but we shouldn't care.
>> >
>> > something like:
>> > rcu_read_lock()
>> > nslaves = bond_get_num_slaves();
>> > rcu_read_unlock()
	Can be nslaves = READ_ONCE(bond->slave_cnt), or, for just the
active slaves:

	struct bond_up_slave *slaves;

	slaves = rcu_dereference(bond->slave_arr);
	nslaves = slaves ? READ_ONCE(slaves->count) : 0;

>> > sarray = kcalloc(nslaves, sizeof(struct bonding_slave_dev),
>> > GFP_KERNEL);
>> > rcu_read_lock();
>> > bond_fill_slaves_array(bond, sarray); // also do: dev_hold()
>> > rcu_read_unlock();
>> >
>> > bond_get_slaves_array_stats(sarray);
>> >
>> > bond_put_slaves_array(sarray);
>>
>> I don't know what to say about acquiring RCU read lock twice and
>> traversing the list of interfaces three or four times.
>
>You can optimize this by tracking #num_slaves.

	I think that the set of active slaves changing between the two
calls will be a rare exception, and that the number of slaves is
generally small (more than 2 is uncommon in my experience).

>> On the other hand, what's the worst that can happen if the GFP_ATOMIC
>> memory allocation fails? It's not like there is any data loss.
>> User space will retry when there is less memory pressure.
>
>Anyway, up to you; I just don't like it when we use GFP_ATOMIC when it
>can be avoided, especially for periodic jobs like stats polling.

	And, for the common case, I suspect that an array allocation
will have lower overhead than a loop that allocates once per slave.

	-J

---
	-Jay Vosburgh, jay.vosbu...@canonical.com