Hi Xyxue,

That is to-be-deprecated functionality which still exists in the stats API.

From the stats.c file, just above vl_api_want_interface_combined_stats_t_handler:

/**********************************
 * ALL Interface Combined stats - to be deprecated
 **********************************/

Please use per-interface combined stats instead. You need the patch
https://gerrit.fd.io/r/#/c/9560/ to receive correct stats values.

Thanks,
Mohsin


________________________________
From: vpp-dev-boun...@lists.fd.io <vpp-dev-boun...@lists.fd.io> on behalf of 
薛欣颖 <xy...@fiberhome.com>
Sent: Friday, December 1, 2017 9:39 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] The Data in shared memory is tampered


Hi guys,

I'm testing the want_* commands, and there is an error in my test.
When I enabled 'want_interface_combined_stats', it corrupted the data in
shared memory. I can't get a trace of it, and I can't use VPP unless I
reboot my machine.
Is there any problem with my usage? Is there anything I should do before
configuring this command?

static void
vl_api_want_interface_combined_stats_t_handler
  (vl_api_want_interface_combined_stats_t * mp)
{
  stats_main_t *sm = &stats_main;
  vpe_client_registration_t rp;
  vl_api_want_interface_combined_stats_reply_t *rmp;
  uword *p;
  i32 retval = 0;
  unix_shared_memory_queue_t *q;
  u32 swif;

  swif = ~0;                    //Using same mechanism as _per_interface_
  rp.client_index = mp->client_index;
  rp.client_pid = mp->pid;

  handle_client_registration (&rp, IDX_PER_INTERFACE_COMBINED_COUNTERS, swif,
                              mp->enable_disable);

reply:
  q = vl_api_client_index_to_input_queue (mp->client_index);

  if (!q)
    {
      sm->enable_poller =
        clear_client_for_stat (IDX_PER_INTERFACE_COMBINED_COUNTERS, swif,
                               mp->client_index);
      return;
    }

  rmp = vl_msg_api_alloc (sizeof (*rmp));
  rmp->_vl_msg_id = ntohs (VL_API_WANT_INTERFACE_COMBINED_STATS_REPLY);
  rmp->context = mp->context;
  rmp->retval = retval;

  vl_msg_api_send_shmem (q, (u8 *) & rmp);
}

/* Per Interface Combined distribution to client */
static void
do_combined_per_interface_counters (stats_main_t * sm)
{
  vl_api_vnet_per_interface_combined_counters_t *mp = 0;
  vnet_interface_main_t *im = sm->interface_main;
  api_main_t *am = sm->api_main;
  vl_shmem_hdr_t *shmem_hdr = am->shmem_hdr;
  unix_shared_memory_queue_t *q = NULL;
  vlib_combined_counter_main_t *cm;
  /*
   * items_this_message will eventually be used to optimise the batching
   * of per client messages for each stat. For now setting this to 1 then
   * iterate. This will not affect API.
   *
   * FIXME instead of enqueueing here, this should be sent to a batch
   * storer for per-client transmission. Each "mp" sent would be a single
   * entry and if a client is listening to other sw_if_indexes for same,
   * it would be appended to that *mp
   */
  u32 items_this_message = 1;
  vnet_combined_counter_t *vp = 0;
  vlib_counter_t v;
  int i, j;
  u32 timestamp;
  vpe_client_stats_registration_t *reg;
  vpe_client_registration_t *client;
  u32 *sw_if_index = 0;

  /*
     FIXME(s):
     - capturing the timestamp of the counters "when VPP knew them" is
     important. Less so is that the timing of the delivery to the control
     plane be in the same timescale.

     i.e. As long as the control plane can delta messages from VPP and work
     out velocity etc based on the timestamp, it can do so in a more "batch
     mode".

     It would be beneficial to keep a "per-client" message queue, and then
     batch all the stat messages for a client into one message, with
     discrete timestamps.

     Given this particular API is for "per interface" one assumes that the
     scale is less than the ~0 case, which the prior API is suited for.
   */
  vnet_interface_counter_lock (im);

  timestamp = vlib_time_now (sm->vlib_main);

  vec_reset_length (sm->regs_tmp);
  pool_foreach (reg,
                sm->stats_registrations[IDX_PER_INTERFACE_COMBINED_COUNTERS],
                ({
                   vec_add1 (sm->regs_tmp, reg);
                 }));

  for (i = 0; i < vec_len (sm->regs_tmp); i++)
    {
      reg = sm->regs_tmp[i];
      if (reg->item == ~0)
        {
          vnet_interface_counter_unlock (im);
          do_combined_interface_counters (sm);  //send to main thread!
          vnet_interface_counter_lock (im);
          continue;
        }

Thanks,
Xyxue

________________________________
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev