[dpdk-dev] rte_memzone: memzone_reserve_aligned_thread_unsafe failed when running two processes at the same time.

2014-08-30 Thread zimeiw


hi,


Running the primary process succeeds.

$ sudo ./simple_mp -c 1 -n 1 --socket-mem=64  --proc-type=primary
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up memory...
EAL: Ask a virtual area of 0x2600 bytes
EAL: Virtual area found at 0x7fffd120 (size = 0x2600)
EAL: Ask a virtual area of 0x220 bytes
EAL: Virtual area found at 0x7fffcee0 (size = 0x220)
EAL: Ask a virtual area of 0x20 bytes
EAL: Virtual area found at 0x7fffcea0 (size = 0x20)
EAL: Ask a virtual area of 0x1a0 bytes
EAL: Virtual area found at 0x7fffcce0 (size = 0x1a0)
EAL: Ask a virtual area of 0x220 bytes
EAL: Virtual area found at 0x7fffcaa0 (size = 0x220)
EAL: Ask a virtual area of 0x20 bytes
EAL: Virtual area found at 0x7fffca60 (size = 0x20)
EAL: Ask a virtual area of 0x20 bytes
EAL: Virtual area found at 0x7fffca20 (size = 0x20)
EAL: Ask a virtual area of 0x20 bytes
EAL: Virtual area found at 0x7fffc9e0 (size = 0x20)
EAL: Ask a virtual area of 0x20 bytes
EAL: Virtual area found at 0x7fffc9a0 (size = 0x20)
EAL: Ask a virtual area of 0x20 bytes
EAL: Virtual area found at 0x7fffc960 (size = 0x20)
EAL: Ask a virtual area of 0x20 bytes
EAL: Virtual area found at 0x7fffc920 (size = 0x20)
EAL: Requesting 32 pages of size 2MB from socket 0
EAL: TSC frequency is ~2793683 KHz
EAL: Master core 0 is ready (tid=f7fe4800)
EAL: PCI device :02:01.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   :02:01.0 not managed by UIO driver, skipping
EAL: PCI device :02:06.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   :02:06.0 not managed by UIO driver, skipping
proc primary
APP: Finished Process Init.

simple_mp >


Running the secondary process fails because the memzone already exists.
$ sudo ./simple_mp -c 1 -n 1   --proc-type=secondary --socket-mem=1
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up memory...
EAL: Analysing 358 files
EAL: Mapped segment 0 of size 0x400
EAL: memzone_reserve_aligned_thread_unsafe(): memzone  
already exists
RING: Cannot reserve memory
EAL: TSC frequency is ~2793681 KHz
EAL: Master core 0 is ready (tid=f7fe4800)
EAL: PCI device :02:01.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL: Cannot find resource for device
EAL: PCI device :02:06.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL: Cannot find resource for device
proc secandary
APP: Finished Process Init.

simple_mp >





[dpdk-dev] DPDK and custom memory

2014-08-30 Thread Thomas Monjalon
Hello,

2014-08-29 18:40, Saygin, Artur:
> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain
> memory regions .

Does it mean Intel is making an FPGA-based NIC?

> Is there a way to make DPDK use that exact memory?

Maybe I don't understand the question well, because it doesn't seem really
different from what other PMDs do.
Assuming your NIC is PCI, you can access it via UIO (igb_uio) or VFIO.
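
On the memory side of the question, a PMD normally obtains DMA-able buffers from
the hugepage memory that EAL manages, for example through rte_memzone_reserve(),
and programs the hardware with the physical address it returns. A minimal,
illustrative sketch follows; the zone name and size are invented, and restricting
the allocation to the regions an FPGA can actually reach would need extra logic
beyond this:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#include <rte_memory.h>
#include <rte_memzone.h>

/* Reserve 1 MB of hugepage-backed, physically contiguous memory that a
 * device could DMA into; the zone name and size are arbitrary examples. */
static const struct rte_memzone *
app_reserve_dma_zone(void)
{
    const struct rte_memzone *mz;

    mz = rte_memzone_reserve("APP_FPGA_DMA_ZONE", 1 << 20, SOCKET_ID_ANY, 0);
    if (mz == NULL)
        return NULL;

    /* mz->addr is the virtual address; mz->phys_addr is the physical
     * address to program into the device's DMA registers. */
    printf("DMA zone: virt %p phys 0x%" PRIx64 "\n",
           mz->addr, (uint64_t)mz->phys_addr);
    return mz;
}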

> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd
> start here.

It's a pleasure to receive new drivers.
Welcome here :)

-- 
Thomas


[dpdk-dev] rte_memzone: memzone_reserve_aligned_thread_unsafe failed when running two processes at the same time.

2014-08-30 Thread Zhang, Jerry
Hi,
The 'memzone already exists' message is not an error for the secondary DPDK process.

DPDK EAL initialization creates a mempool for the log history. Creating this mempool
includes creating a ring backed by the memzone named RG_MP_log_history.
For the secondary DPDK process, if creation of the log history mempool fails because
the memzone RG_MP_log_history already exists, EAL looks up the existing mempool and
reuses it.
Please refer to the implementation of the rte_eal_common_log_init() function.

From the logs, it seems your secondary DPDK process has initialized
successfully.

int
rte_eal_common_log_init(FILE *default_log)
{
    STAILQ_INIT(&log_history);

    /* reserve RTE_LOG_HISTORY*2 elements, so we can dump and
     * keep logging during this time */
    log_history_mp = rte_mempool_create(LOG_HISTORY_MP_NAME, RTE_LOG_HISTORY*2,
                                        LOG_ELT_SIZE, 0, 0,
                                        NULL, NULL,
                                        NULL, NULL,
                                        SOCKET_ID_ANY, 0);

    if ((log_history_mp == NULL) &&
        ((log_history_mp = rte_mempool_lookup(LOG_HISTORY_MP_NAME)) == NULL)) {
        RTE_LOG(ERR, EAL, "%s(): cannot create log_history mempool\n",
                __func__);
        return -1;
    }

    default_log_stream = default_log;
    rte_openlog_stream(default_log);
    return 0;
}
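
The same create-or-look-up pattern applies to any named object shared between
DPDK processes: the primary process creates it, secondary processes only look
it up. A minimal, illustrative sketch (not taken from this thread; the ring
name and size are invented for the example):

#include <rte_eal.h>
#include <rte_memory.h>
#include <rte_ring.h>

#define SHARED_RING_NAME "APP_SHARED_RING"  /* hypothetical name */

static struct rte_ring *
app_get_shared_ring(void)
{
    struct rte_ring *r;

    if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
        /* The primary process owns the object: create it in hugepage memory. */
        r = rte_ring_create(SHARED_RING_NAME, 1024, SOCKET_ID_ANY, 0);
    } else {
        /* A secondary process must not create it again: look it up instead. */
        r = rte_ring_lookup(SHARED_RING_NAME);
    }
    return r;  /* NULL on failure */
}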



[dpdk-dev] [PATCH 1/3] i40evf: support I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EX in DPDK PF host

2014-08-30 Thread Thomas Monjalon
Hi Helin,

The title mentions i40evf, but the patch is related to the PF (for VF communication).
So I wonder whether it would be clearer to prefix it with i40e? Not sure.

2014-08-20 11:33, Helin Zhang:
> To configure VSI queues for VF, Linux PF host supports
> operation of I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES with
> limited configurations. To support more configurations
> (e.g configurable CRC stripping in VF), a new operation
> of I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EX has been
> supported in DPDK PF host.

This patch would be easier to read if you could split it into 3 patches:
1) renamings and line-wrapping rework
2) introduction of the new extended message/operation
3) CRC configuration

> - int ret = I40E_SUCCESS;
> - struct i40e_virtchnl_vsi_queue_config_info *qconfig =
> - (struct i40e_virtchnl_vsi_queue_config_info *)msg;
> - int i;
> - struct i40e_virtchnl_queue_pair_info *qpair;
> -
> - if (msg == NULL || msglen <= sizeof(*qconfig) ||
> - qconfig->num_queue_pairs > vsi->nb_qps) {
> + struct i40e_virtchnl_vsi_queue_config_info *vc_vqci =
> + (struct i40e_virtchnl_vsi_queue_config_info *)msg;
> + struct i40e_virtchnl_queue_pair_info *vc_qpi;
> + struct i40e_virtchnl_queue_pair_extra_info *vc_qpei = NULL;
> + int i, ret = I40E_SUCCESS;
> +
> + if (msg == NULL || msglen <= sizeof(*vc_vqci) ||
> + vc_vqci->num_queue_pairs > vsi->nb_qps) {
>   PMD_DRV_LOG(ERR, "vsi_queue_config_info argument wrong\n");
>   ret = I40E_ERR_PARAM;
>   goto send_msg;
>   }
>  
> - qpair = qconfig->qpair;
> - for (i = 0; i < qconfig->num_queue_pairs; i++) {
> - if (qpair[i].rxq.queue_id > vsi->nb_qps - 1 ||
> - qpair[i].txq.queue_id > vsi->nb_qps - 1) {
> + vc_qpi = vc_vqci->qpair;
> + if (opcode == I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EX)
> + vc_qpei = (struct i40e_virtchnl_queue_pair_extra_info *)
> + (((uint8_t *)vc_qpi) +
> + (sizeof(struct i40e_virtchnl_queue_pair_info) *
> + vc_vqci->num_queue_pairs));
> +
> + for (i = 0; i < vc_vqci->num_queue_pairs; i++) {
> + if (vc_qpi[i].rxq.queue_id > vsi->nb_qps - 1 ||
> + vc_qpi[i].txq.queue_id > vsi->nb_qps - 1) {
>   ret = I40E_ERR_PARAM;
>   goto send_msg;
>   }

Mixing renaming with the new feature makes it difficult to read.
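
For readers of the hunk above: the extended message appears to place an array of
extra-info entries directly after the existing queue-pair array, and the pointer
arithmetic locates it. A generic sketch of that trailing-array layout, with
hypothetical struct names and fields standing in for the i40e virtchnl ones:

#include <stdint.h>

/* Hypothetical stand-ins for the virtchnl structures in the patch. */
struct qp_info       { uint16_t rxq_id, txq_id; };
struct qp_extra_info { uint8_t  crc_strip;      };

struct vsi_queue_config_info {
    uint16_t vsi_id;
    uint16_t num_queue_pairs;
    struct qp_info qpair[];  /* num_queue_pairs entries ... */
};

/* ... immediately followed in the same message buffer by
 * num_queue_pairs qp_extra_info entries. */
static struct qp_extra_info *
extra_info_of(struct vsi_queue_config_info *ci)
{
    return (struct qp_extra_info *)
        ((uint8_t *)ci->qpair +
         sizeof(struct qp_info) * ci->num_queue_pairs);
}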

>   case I40E_VIRTCHNL_OP_ENABLE_QUEUES:
>   PMD_DRV_LOG(INFO, "OP_ENABLE_QUEUES received\n");
> - i40e_pf_host_process_cmd_enable_queues(vf,
> - msg, msglen);
> + i40e_pf_host_process_cmd_enable_queues(vf, msg, msglen);
>   break;
>   case I40E_VIRTCHNL_OP_DISABLE_QUEUES:
>   PMD_DRV_LOG(INFO, "OP_DISABLE_QUEUE received\n");
> - i40e_pf_host_process_cmd_disable_queues(vf,
> - msg, msglen);
> + i40e_pf_host_process_cmd_disable_queues(vf, msg, msglen);
>   break;
>   case I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS:
>   PMD_DRV_LOG(INFO, "OP_ADD_ETHER_ADDRESS received\n");
> - i40e_pf_host_process_cmd_add_ether_address(vf,
> - msg, msglen);
> + i40e_pf_host_process_cmd_add_ether_address(vf, msg, msglen);
>   break;
>   case I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS:
>   PMD_DRV_LOG(INFO, "OP_DEL_ETHER_ADDRESS received\n");
> - i40e_pf_host_process_cmd_del_ether_address(vf,
> - msg, msglen);
> + i40e_pf_host_process_cmd_del_ether_address(vf, msg, msglen);
>   break;
>   case I40E_VIRTCHNL_OP_ADD_VLAN:
>   PMD_DRV_LOG(INFO, "OP_ADD_VLAN received\n");
> @@ -932,10 +951,9 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
>   case I40E_VIRTCHNL_OP_FCOE:
>   PMD_DRV_LOG(ERR, "OP_FCOE received, not supported\n");
>   default:
> - PMD_DRV_LOG(ERR, "%u received, not supported\n",
> - opcode);
> - i40e_pf_host_send_msg_to_vf(vf, opcode,
> - I40E_ERR_PARAM, NULL, 0);
> + PMD_DRV_LOG(ERR, "%u received, not supported\n", opcode);
> + i40e_pf_host_send_msg_to_vf(vf, opcode, I40E_ERR_PARAM,
> + NULL, 0);
>   break;
>   }

Line wrapping should go in a cleanup patch (like renaming).

Thanks
-- 
Thomas


[dpdk-dev] [PATCH 2/3] i40evf: support I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EX in i40e VF PMD

2014-08-30 Thread Thomas Monjalon
2014-08-20 11:33, Helin Zhang:
> To support configurable CRC in VF, use operation of
> I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EX to carry more
> information from VM to PF host, if the peer is DPDK
> PF host. Otherwise assume it is Linux PF host and
> just use operation of I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES.
[...]
> +/* It configures VSI queues to co-work with Linux PF host */
>  static int
> -i40evf_configure_queues(struct rte_eth_dev *dev)
> +i40evf_configure_vsi_queues(struct rte_eth_dev *dev)
>  {
>   struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> - struct i40e_virtchnl_vsi_queue_config_info *queue_info;
> - struct i40e_virtchnl_queue_pair_info *queue_cfg;
>   struct i40e_rx_queue **rxq =
>   (struct i40e_rx_queue **)dev->data->rx_queues;
>   struct i40e_tx_queue **txq =
>   (struct i40e_tx_queue **)dev->data->tx_queues;
> - int i, len, nb_qpairs, num_rxq, num_txq;
> - int err;
> + struct i40e_virtchnl_vsi_queue_config_info *vc_vqci;
> + struct i40e_virtchnl_queue_pair_info *vc_qpi;
>   struct vf_cmd_info args;
> - struct rte_pktmbuf_pool_private *mbp_priv;
> -
> - nb_qpairs = vf->num_queue_pairs;
> - len = sizeof(*queue_info) + sizeof(*queue_cfg) * nb_qpairs;
> - queue_info = rte_zmalloc("queue_info", len, 0);
> - if (queue_info == NULL) {
> - PMD_INIT_LOG(ERR, "failed alloc memory for queue_info\n");
> - return -1;
> + int size, i, nb_qp, ret;
> +
> + nb_qp = vf->num_queue_pairs;
> + size = sizeof(struct i40e_virtchnl_vsi_queue_config_info) +
> + sizeof(struct i40e_virtchnl_queue_pair_info) * nb_qp;
> + vc_vqci = rte_zmalloc("queue_info", size, 0);
> + if (!vc_vqci) {
> + PMD_DRV_LOG(ERR, "Failed to allocate memory for VF "
> + "configuring queues\n");
> + return -ENOMEM;
>   }
> - queue_info->vsi_id = vf->vsi_res->vsi_id;
> - queue_info->num_queue_pairs = nb_qpairs;
> - queue_cfg = queue_info->qpair;
> -
> - num_rxq = dev->data->nb_rx_queues;
> - num_txq = dev->data->nb_tx_queues;
> - /*
> -  * PF host driver required to configure queues in pairs, which means
> -  * rxq_num should equals to txq_num. The actual usage won't always
> -  * work that way. The solution is fills 0 with HW ring option in case
> -  * they are not equal.
> -  */
> - for (i = 0; i < nb_qpairs; i++) {
> - /*Fill TX info */
> - queue_cfg->txq.vsi_id = queue_info->vsi_id;
> - queue_cfg->txq.queue_id = i;
> - if (i < num_txq) {
> - queue_cfg->txq.ring_len = txq[i]->nb_tx_desc;
> - queue_cfg->txq.dma_ring_addr = 
> txq[i]->tx_ring_phys_addr;
> - } else {
> - queue_cfg->txq.ring_len = 0;
> - queue_cfg->txq.dma_ring_addr = 0;
> + vc_vqci->vsi_id = vf->vsi_res->vsi_id;
> + vc_vqci->num_queue_pairs = nb_qp;
> +
> + for (i = 0, vc_qpi = vc_vqci->qpair; i < nb_qp; i++, vc_qpi++) {
> + vc_qpi->txq.vsi_id = vc_vqci->vsi_id;
> + vc_qpi->txq.queue_id = i;
> + if (i < dev->data->nb_tx_queues) {
> + vc_qpi->txq.ring_len = txq[i]->nb_tx_desc;
> + vc_qpi->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
>   }
>  
> - /* Fill RX info */
> - queue_cfg->rxq.vsi_id = queue_info->vsi_id;
> - queue_cfg->rxq.queue_id = i;
> - queue_cfg->rxq.max_pkt_size = vf->max_pkt_len;
> - if (i < num_rxq) {
> + vc_qpi->rxq.vsi_id = vc_vqci->vsi_id;
> + vc_qpi->rxq.queue_id = i;
> + vc_qpi->rxq.max_pkt_size = vf->max_pkt_len;
> + if (i < dev->data->nb_rx_queues) {
> + struct rte_pktmbuf_pool_private *mbp_priv;
> +
> + vc_qpi->rxq.ring_len = rxq[i]->nb_rx_desc;
> + vc_qpi->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
>   mbp_priv = rte_mempool_get_priv(rxq[i]->mp);
> - queue_cfg->rxq.databuffer_size = 
> mbp_priv->mbuf_data_room_size -
> -RTE_PKTMBUF_HEADROOM;;
> - queue_cfg->rxq.ring_len = rxq[i]->nb_rx_desc;
> - queue_cfg->rxq.dma_ring_addr = 
> rxq[i]->rx_ring_phys_addr;;
> - } else {
> - queue_cfg->rxq.ring_len = 0;
> - queue_cfg->rxq.dma_ring_addr = 0;
> - queue_cfg->rxq.databuffer_size = 0;
> + vc_qpi->rxq.databuffer_size =
> + mbp_priv->mbuf_data_room_size -
> + RTE_PKTMBUF_HEADROOM;
>   }
> - queue_cfg++;
>   }

It's not clear why you reworked the legacy function.
Please explain it in