On Wed, May 20, 2015 at 3:32 PM, Simon Horman <simon.hor...@netronome.com> wrote:
>
> On Wed, May 20, 2015 at 10:37:52AM -0700, Scott Feldman wrote:
> > On Wed, May 20, 2015 at 5:57 AM, Simon Horman
> > <simon.hor...@netronome.com> wrote:
> > > On Wed, May 20, 2015 at 01:17:36PM +0200, Jiri Pirko wrote:
> > >> Wed, May 20, 2015 at 10:46:26AM CEST, simon.hor...@netronome.com wrote:
> > >> >On Wed, May 20, 2015 at 05:36:06PM +0900, Toshiaki Makita wrote:
> > >> >> On 2015/05/20 16:48, Simon Horman wrote:
> > >> >> > On Wed, May 20, 2015 at 03:15:23PM +0900, Toshiaki Makita wrote:
> > >> >> >> On 2015/05/20 14:48, Simon Horman wrote:
> > >> >> ...
> > >> >> >>>  static void _rocker_neigh_add(struct rocker *rocker,
> > >> >> >>> +                              enum switchdev_trans trans,
> > >> >> >>>                                struct rocker_neigh_tbl_entry *entry)
> > >> >> >>>  {
> > >> >> >>> +       if (trans == SWITCHDEV_TRANS_PREPARE)
> > >> >> >>> +               return;
> > >> >> >>>         entry->index = rocker->neigh_tbl_next_index++;
> > >> >> >>
> > >> >> >> Isn't index needed here? It looks to be used in later function
> > >> >> >> call and logging.
> > >> >> >
> > >> >> > Thanks, that does not follow the usual model of setting values
> > >> >> > during the PREPARE (and all other) transaction phase(s).
> > >> >> >
> > >> >> >> How about setting index like this?
> > >> >> >>
> > >> >> >>         entry->index = rocker->neigh_tbl_next_index;
> > >> >> >>         if (trans == PREPARE)
> > >> >> >>                 return;
> > >> >> >>         rocker->neigh_tbl_next_index++;
> > >> >> >>         ...
> > >> >> >
> > >> >> > I am concerned that _rocker_neigh_add() may be called by some other
> > >> >> > caller while a transaction is in process and thus entry->index will
> > >> >> > be inconsistent across callers.
> > >> >> >
> > >> >> > Perhaps we can convince ourselves that all the bases are covered.
> > >> >> > So far my testing has drawn a blank. But the logic seems difficult
> > >> >> > to reason about.
> > >> >> >
> > >> >> > As we are basically allocating an index I suppose what is really
> > >> >> > needed for a correct implementation is a transaction aware index
> > >> >> > allocator, like we have for memory (rocker_port_kzalloc etc...).
> > >> >> > But that does seem like overkill.
> > >> >> >
> > >> >> > I think that we can make entry->index consistent across
> > >> >> > calls in the same transaction at the expense of breaking the
> > >> >> > rule that per-transaction data should be set during all transaction
> > >> >> > phases.
> > >> >> >
> > >> >> > Something like this:
> > >> >> >
> > >> >> >         if (trans != SWITCHDEV_TRANS_COMMIT)
> > >> >> >                 /* Avoid index being set to different values across calls
> > >> >> >                  * to this function by the same caller within the same
> > >> >> >                  * transaction.
> > >> >> >                  */
> > >> >> >                 entry->index = rocker->neigh_tbl_next_index++;
> > >> >> >         if (trans == SWITCHDEV_TRANS_PREPARE)
> > >> >> >                 return;
> > >> >>
> > >> >> As long as it is guarded by rtnl lock, no worries about this race? It
> > >> >> seems to be assumed that prepare-commit is guarded by rtnl lock,
> > >> >> according to commit c4f20321 ("rocker: support prepare-commit
> > >> >> transaction model").
> > >> >>
> > >> >> But as you are concerned, it seems that it can be called by another
> > >> >> caller, specifically, neigh_timer_handler() in interrupt context
> > >> >> without rtnl lock. IMHO, it should be fixed rather than avoiding the
> > >> >> race here.
> > >> >
> > >> > Yes, I believe that is the case I was seeing.
> > >> >
> > >> > Scott, Jiri, how would you like to resolve this?
> > >>
> > >> I believe that you can depend on rtnl being held - in
> > >> switchdev_port_obj_add there is an ASSERT_RTNL assertion at the very
> > >> beginning of the function.
> > >
> > > In the prepare-commit scenario, yes, I agree that is the case.
> > > But it does not seem to always be the case when the transaction phase
> > > is none.
> > >
> > > What I am seeing is:
> > >
> > > 1. rocker_port_ipv4_nh() is called via switchdev_port_obj_add()
> > >    with trans = SWITCHDEV_TRANS_PREPARE
> > >
> > > 2. rocker_port_ipv4_neigh() is called by rocker_neigh_update()
> > >    with trans = SWITCHDEV_TRANS_NONE.
> > >
> > >    The call chain goes up to arp_process() via neigh_update().
> > >
> > > 3. rocker_port_ipv4_nh() is called via switchdev_port_obj_add()
> > >    with trans = SWITCHDEV_TRANS_COMMIT
> > >
> > > I believe #2 is not guarded by rtnl.
> >
> > Looks like rocker->neigh_tbl_next_index was a problem even before the
> > transaction model was introduced, due to no protection for concurrent
> > processes in different contexts.
> >
> > We'll need to turn the NETEVENT_NEIGH_UPDATE into process context and
> > hold rtnl_lock, similar to what we do in
> > rocker_event_mac_vlan_seen_work(). That, plus Toshiaki's suggested
> > change for _rocker_neigh_add(), should do it.
>
> Thanks,
>
> what I suggest is that we modify this patch as per Makita-san's
> suggestion and proceed with it and the rest of the series, and then come
> back to the neigh_tbl_next_index problem.
Agreed. I'll address the neigh_tbl_next_index problem once your series
goes in.

Thanks Simon and Makita-san.
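For reference, Makita-san's suggested index handling can be modelled in userspace to check the property both sides care about: during PREPARE the entry sees the index it will eventually get, but the counter only advances on COMMIT (or NONE), so a prepare followed by a commit in the same transaction yields the same index. This is a simplified, hypothetical sketch with stand-in types, not the actual rocker driver code:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel types; the real driver's structs
 * carry much more state. */
enum switchdev_trans {
	SWITCHDEV_TRANS_NONE,
	SWITCHDEV_TRANS_PREPARE,
	SWITCHDEV_TRANS_COMMIT,
};

struct rocker {
	unsigned int neigh_tbl_next_index;
};

struct rocker_neigh_tbl_entry {
	unsigned int index;
};

/* Suggested variant: record the index the entry will be allocated, but
 * only consume it (advance the counter) once the operation actually
 * happens, i.e. not during the PREPARE phase. */
static void _rocker_neigh_add(struct rocker *rocker,
			      enum switchdev_trans trans,
			      struct rocker_neigh_tbl_entry *entry)
{
	entry->index = rocker->neigh_tbl_next_index;
	if (trans == SWITCHDEV_TRANS_PREPARE)
		return;
	rocker->neigh_tbl_next_index++;
	/* ...programming of the hardware table would follow here... */
}
```

Note this sketch deliberately ignores the locking question raised above: it is only correct if no other caller (e.g. the neigh update path running without rtnl) can touch neigh_tbl_next_index between the prepare and commit phases.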