On Wed, Apr 16, 2014 at 6:09 PM, Joe Stringer <j...@wand.net.nz> wrote:

> As we discussed offline, it would be good to see the refactoring changes
> separated from the functional changes. That makes it easier to see what's
> changed and refer back in future.
>


Definitely, I hope I can master the art of separating patches.




> On 12 April 2014 06:20, Alex Wang <al...@nicira.com> wrote:
>>
>> @@ -2450,8 +2373,40 @@ bridge_run(void)
>>          iface_stats_timer = time_msec() + IFACE_STATS_INTERVAL;
>>      }
>>
>> +    /* Check the need to update status. */
>> +    seq = seq_read(connectivity_seq_get());
>>
>
>
>>  +    if (seq != connectivity_seqno) {
>>
>
> I think that in the previous version, this if statement...
>
>
>>  +        enum ovsdb_idl_txn_status status;
>> +
>> +        if (!status_txn) {
>>
>
> And this one are in the opposite order. More to follow...
>


No, there is no "+        if (!status_txn) {" in the previous version.

This check of 'status_txn' is from the original 'instant_stats_run()'.  I
added it back to handle the case where the previous database update left
'status == TXN_INCOMPLETE' (see the consolidated sketch further down).




>
>
>> ...
>> +        status = ovsdb_idl_txn_commit(status_txn);
>> +        /* Do not destroy "status_txn" if the transaction is
>> +         * "TXN_INCOMPLETE". */
>> +        if (status != TXN_INCOMPLETE) {
>> +            ovsdb_idl_txn_destroy(status_txn);
>> +            status_txn = NULL;
>> +        }
>>
>
> The if statement order above makes this code not get run unless there was
> a connectivity change. Is this an optimisation?
>


Correct.  So even if there is a connectivity change, as long as the previous
update is incomplete, we will back off for another 500ms.
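Putting the quoted hunks together, the status-update path in bridge_run()
looks roughly like this (a sketch assembled from the diff above; 'idl', the
status-column writes, and the refresh of 'connectivity_seqno' are elided or
assumed, since they are not in the quoted fragments):

    uint64_t seq = seq_read(connectivity_seq_get());
    if (seq != connectivity_seqno) {
        enum ovsdb_idl_txn_status status;

        if (!status_txn) {
            /* Nothing pending from a previous run: open a fresh
             * transaction and write the current status into it. */
            status_txn = ovsdb_idl_txn_create(idl);
            /* ... write interface/port status columns ... */
        }

        status = ovsdb_idl_txn_commit(status_txn);
        /* Keep 'status_txn' alive while the commit is TXN_INCOMPLETE,
         * so a later run can re-check it; destroy it on any final
         * status. */
        if (status != TXN_INCOMPLETE) {
            ovsdb_idl_txn_destroy(status_txn);
            status_txn = NULL;
        }
    }

While 'status_txn' stays non-NULL, bridge_wait() (quoted below) arms the
STATUS_CHECK_AGAIN_MSEC timer instead of waiting on the sequence number.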



>
>
>> @@ -2481,8 +2436,17 @@ bridge_wait(void)
>>          poll_timer_wait_until(iface_stats_timer);
>>      }
>>
>> +    /* If the status database transaction is "TXN_INCOMPLETE" in this run,
>> +     * register a timeout in "STATUS_CHECK_AGAIN_MSEC".  Else, wait on the
>> +     * global connectivity sequence number.  Note, this also helps batch
>> +     * multiple status changes into one transaction. */
>> +    if (status_txn) {
>> +        poll_timer_wait_until(time_msec() + STATUS_CHECK_AGAIN_MSEC);
>> +    } else {
>> +        seq_wait(connectivity_seq_get(), connectivity_seqno);
>> +    }
>>
>
> So, this is like a backoff? If there's so much database update happening
> that we can't immediately transact (I equate this with TXN_INCOMPLETE),
> then don't try again for another 500ms?
> Otherwise, respond immediately to any changes to connectivity?
>


Yes, this is like a backoff.  We are doing the same backoff in the current
master code using 'INSTANT_INTERVAL_MSEC'.
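For reference, the master-branch rate limit looks roughly like this
(paraphrased from bridge.c's instant_stats_run(); treat it as a sketch, not
an exact quote):

    if (!instant_txn) {
        if (time_msec() < instant_next_txn) {
            return;    /* Rate limit: at most one txn per interval. */
        }
        instant_next_txn = time_msec() + INSTANT_INTERVAL_MSEC;
        instant_txn = ovsdb_idl_txn_create(idl);
        /* ... write the instant stats columns ... */
    }

    status = ovsdb_idl_txn_commit(instant_txn);
    if (status != TXN_INCOMPLETE) {
        ovsdb_idl_txn_destroy(instant_txn);
        instant_txn = NULL;
    }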

Upon further thought, I have decided to go back to using the 100ms backoff
interval.

The reasons are:

1. The main thread will always be woken up immediately after a connectivity
change, since ofproto_wait() always waits on it (see the toy sketch below).
So, when the connectivity seq is changing fast, the backoff is of no use and
the main thread will keep checking the 'TXN_INCOMPLETE' transaction.
2. The backoff is only useful when the previous update status is
'TXN_INCOMPLETE' and connectivity changes are now infrequent.  In that case,
we need to wake up periodically and check for completion of the previous
update.

So, the backoff interval should not make a big difference.
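To illustrate point 1, here is a toy wait loop built on the lib/seq API
(seq_read(), seq_wait(), and poll_block() are the real OVS functions; the
wrapper itself is only an illustration, not code from the patch):

    #include "seq.h"
    #include "poll-loop.h"

    /* Returns only after 'connectivity' advances past '*seqno'. */
    static void
    wait_for_connectivity_change(struct seq *connectivity, uint64_t *seqno)
    {
        for (;;) {
            uint64_t seq = seq_read(connectivity);
            if (seq != *seqno) {
                *seqno = seq;
                return;
            }
            /* Register a wakeup that fires as soon as anyone calls
             * seq_change() on 'connectivity'. */
            seq_wait(connectivity, seq);
            poll_block();
        }
    }

Since ofproto_wait() registers exactly this kind of wakeup, a fast-changing
connectivity seq wakes the main thread regardless of any backoff timer.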

My previous experiment showed a slight reduction of backlog when the
interval is 500ms (10K tunnels, flapping the forwarding flag every 0.3 sec).
I'll experiment again to confirm it.

For my next version, I'll only include the refactoring changes.