> On 26 Feb 2015, at 18:08, Jarno Rajahalme <jrajaha...@nicira.com> wrote:
> 
>> 
>> On Feb 26, 2015, at 7:07 AM, Daniele Di Proietto <diproiet...@vmware.com> wrote:
>> 
>>> 
>>> On 25 Feb 2015, at 19:08, Jarno Rajahalme <jrajaha...@nicira.com> wrote:
>>> 
>>> 
>>>> On Feb 25, 2015, at 8:43 AM, Daniele Di Proietto <diproiet...@vmware.com> wrote:
>>>> 
>>>> 
>>> 
>>> (snip)
>>> 
>>>> 
>>>> +enum pmd_info_type {
>>>> +    PMD_INFO_SHOW_STATS,  /* show how cpu cycles are spent */
>>>> +    PMD_INFO_CLEAR_STATS  /* set the cycles count to 0 */
>>>> +};
>>>> +
>>>> +static void
>>>> +dpif_netdev_pmd_info(struct unixctl_conn *conn, int argc, const char *argv[],
>>>> +                     void *aux)
>>>> +{
>>>> +    struct ds reply = DS_EMPTY_INITIALIZER;
>>>> +    struct dp_netdev_pmd_thread *pmd;
>>>> +    struct dp_netdev *dp = NULL;
>>>> +    enum pmd_info_type type = (enum pmd_info_type) aux;
>>>> +
>>>> +    ovs_mutex_lock(&dp_netdev_mutex);
>>>> +
>>>> +    if (argc == 2) {
>>>> +        dp = shash_find_data(&dp_netdevs, argv[1]);
>>>> +    } else if (shash_count(&dp_netdevs) == 1) {
>>>> +        /* There's only one datapath */
>>>> +        dp = shash_random_node(&dp_netdevs)->data;
>>>> +    }
>>>> +
>>>> +    if (!dp) {
>>>> +        ovs_mutex_unlock(&dp_netdev_mutex);
>>>> +        unixctl_command_reply_error(conn,
>>>> +                                    "please specify an existing datapath");
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    CMAP_FOR_EACH (pmd, node, &dp->poll_threads) {
>>>> +        struct dp_netdev_pmd_stats s;
>>>> +        struct dp_netdev_pmd_cycles t;
>>>> +        unsigned long long total_packets = 0;
>>>> +        uint64_t total_cycles = 0;
>>>> +        int i;
>>>> +
>>>> +        if (type == PMD_INFO_CLEAR_STATS) {
>>>> +            /* We cannot write 'stats' and 'cycles' (because they're written
>>>> +             * by other threads) and we shouldn't change 'stats' (because
>>>> +             * ther're used to count datapath stats, which must not be cleared
>>>> +             * here).
>>>> +             * Insted, we save the current values and subtract them from the
>>> 
>>> “instead”
>> 
>> Oops, I’ll fix that
>> 
>>> 
>>>> +             * values to be displayed in the future */
>>>> +            pmd->cycles_zero = pmd->cycles;
>>>> +            pmd->stats_zero = pmd->stats;
>>> 
>>> It seems to me that this access should be protected with a mechanism 
>>> similar to the one we use in cmaps, as any concurrent write by the pmd 
>>> thread may leave the “zero” versions inconsistent, a mixture of the old 
>>> and new values. Or is it the case that this does not matter? I guess that 
>>> the cycles and stats need not be consistent with each other, though.
>>> 
>> 
>> I’ve thought of that, but I didn’t want to add much overhead to the pmd 
>> threads. I concluded it was fine to “sync” on RCU quiesce only. We’re doing 
>> the same in dpif_netdev_get_stats(), after all. If you think it’s OK I 
>> would prefer to leave it as it is. I’m waiting for your feedback.
>> 
> 
> I do not know what you mean by “sync on RCU quiesce only”. From what I can 
> see, dpif_netdev_get_stats() has the same problem, but it affects only 
> 32-bit builds of OVS, as on 64-bit systems access to properly aligned 64-bit 
> integers should be atomic anyway. Maybe add the overhead for 32-bit builds 
> only?
> 
> I would also be fine with us supporting userspace datapath/DPDK only for 
> 64-bit builds.

Oh, I see what you mean now (non-atomic 64-bit access). We could add the 
mechanism we use in cmaps and make it a no-op on 64-bit builds.
Thanks for pointing that out!

I’ll work on that and fix dpif_netdev_get_stats().
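
To make sure we’re on the same page, here’s roughly the shape I have in mind.
This is only a minimal sketch, not the actual cmap code: the struct and
function names, the UINTPTR_MAX test for 32-bit builds, and the exact fence
placement are all illustrative assumptions (and, like most seqlock-style
code, it glosses over strict C11 data-race rules for the counter array):

#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

struct pmd_counters {
#if UINTPTR_MAX == 0xffffffff      /* Seqlock only on 32-bit builds. */
    atomic_uint_fast32_t seq;      /* Odd while an update is in flight. */
#endif
    uint64_t n[8];                 /* Per-PMD stats or cycle counters. */
};

/* Writer side (pmd thread): bracket each update with two increments,
 * so that 'seq' is odd while 'n[]' is being modified. */
static void
pmd_counters_add(struct pmd_counters *c, int i, uint64_t delta)
{
#if UINTPTR_MAX == 0xffffffff
    atomic_fetch_add_explicit(&c->seq, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
#endif
    c->n[i] += delta;
#if UINTPTR_MAX == 0xffffffff
    atomic_thread_fence(memory_order_release);
    atomic_fetch_add_explicit(&c->seq, 1, memory_order_relaxed);
#endif
}

/* Reader side (the thread handling the appctl command): retry until the
 * snapshot was taken while 'seq' was even and did not change. */
static void
pmd_counters_snapshot(struct pmd_counters *c, uint64_t out[8])
{
#if UINTPTR_MAX == 0xffffffff
    uint_fast32_t s1, s2;
    do {
        s1 = atomic_load_explicit(&c->seq, memory_order_acquire);
        memcpy(out, c->n, sizeof c->n);
        atomic_thread_fence(memory_order_acquire);
        s2 = atomic_load_explicit(&c->seq, memory_order_relaxed);
    } while ((s1 & 1) || s1 != s2);
#else
    /* Aligned 64-bit loads are atomic here; a plain copy suffices. */
    memcpy(out, c->n, sizeof c->n);
#endif
}

On 64-bit builds this compiles down to what we do today, so the pmd threads
pay nothing there; only 32-bit builds get the two extra increments per update
and the (rare) reader retry.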


>>>> +            continue;
>>>> +        }
>>>> +        t = pmd->cycles;
>>>> +        s = pmd->stats;
>>>> +
>>>> +        for (i = 0; i < ARRAY_SIZE(t.n); i++) {
>>>> +            t.n[i] -= pmd->cycles_zero.n[i];
>>>> +            total_cycles += t.n[i];
>>>> +        }
>>>> +        for (i = 0; i < ARRAY_SIZE(s.n); i++) {
>>>> +            s.n[i] -= pmd->stats_zero.n[i];
>>>> +            if (i != DP_STAT_LOST) {
>>>> +                total_packets += s.n[i];
>>>> +            }
>>>> +        }
>>>> +
>>> 
>>> Similar concern up here.
>> 
>> The *_zero members are read and written only by the main thread (the one 
>> that handles the appctl command). There’s even the dp_netdev_mutex to 
>> enforce that.
> 
> Oops, I missed that, thanks!
> 
>  Jarno
> 
>> 
>> Thanks,
>> 
>> Daniele

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev