Author: mw
Date: Tue May 26 22:41:12 2020
New Revision: 361539
URL: https://svnweb.freebsd.org/changeset/base/361539

Log:
  MF11: r361467-361468,361534
  
  This patch upgrades the ENA driver to version 2.2.0.
  
  Approved by: re (gjb)
  Sponsored by: Amazon, Inc.

Added:
  releng/11.4/sys/dev/ena/ena_datapath.c
     - copied, changed from r361468, stable/11/sys/dev/ena/ena_datapath.c
  releng/11.4/sys/dev/ena/ena_datapath.h
     - copied, changed from r361468, stable/11/sys/dev/ena/ena_datapath.h
  releng/11.4/sys/dev/ena/ena_netmap.c
     - copied, changed from r361468, stable/11/sys/dev/ena/ena_netmap.c
  releng/11.4/sys/dev/ena/ena_netmap.h
     - copied, changed from r361468, stable/11/sys/dev/ena/ena_netmap.h
Modified:
  releng/11.4/share/man/man4/ena.4
  releng/11.4/sys/contrib/ena-com/ena_com.c
  releng/11.4/sys/contrib/ena-com/ena_com.h
  releng/11.4/sys/contrib/ena-com/ena_defs/ena_admin_defs.h
  releng/11.4/sys/contrib/ena-com/ena_defs/ena_common_defs.h
  releng/11.4/sys/contrib/ena-com/ena_defs/ena_eth_io_defs.h
  releng/11.4/sys/contrib/ena-com/ena_defs/ena_gen_info.h
  releng/11.4/sys/contrib/ena-com/ena_defs/ena_regs_defs.h
  releng/11.4/sys/contrib/ena-com/ena_eth_com.c
  releng/11.4/sys/contrib/ena-com/ena_eth_com.h
  releng/11.4/sys/contrib/ena-com/ena_plat.h
  releng/11.4/sys/dev/ena/ena.c
  releng/11.4/sys/dev/ena/ena.h
  releng/11.4/sys/dev/ena/ena_sysctl.c
  releng/11.4/sys/dev/ena/ena_sysctl.h
  releng/11.4/sys/modules/ena/Makefile
Directory Properties:
  releng/11.4/   (props changed)

Modified: releng/11.4/share/man/man4/ena.4
==============================================================================
--- releng/11.4/share/man/man4/ena.4	Tue May 26 19:34:05 2020	(r361538)
+++ releng/11.4/share/man/man4/ena.4	Tue May 26 22:41:12 2020	(r361539)
@@ -27,7 +27,7 @@
 .\"
 .\" $FreeBSD$
 .\"
-.Dd May 04, 2017
+.Dd August 16, 2017
 .Dt ENA 4
 .Os
 .Sh NAME
@@ -35,7 +35,7 @@
 .Nd "FreeBSD kernel driver for Elastic Network Adapter (ENA) family"
 .Sh SYNOPSIS
 To compile this driver into the kernel,
-place the following line in your
+place the following line in the
 kernel configuration file:
 .Bd -ragged -offset indent
 .Cd "device ena"
@@ -59,8 +59,9 @@ The driver supports a range of ENA devices, is link-sp
 (i.e., the same driver is used for 10GbE, 25GbE, 40GbE, etc.), and has
 a negotiated and extendable feature set.
 .Pp
-Some ENA devices support SR-IOV. This driver is used for both the
-SR-IOV Physical Function (PF) and Virtual Function (VF) devices.
+Some ENA devices support SR-IOV.
+This driver is used for both the SR-IOV Physical Function (PF) and Virtual
+Function (VF) devices.
 .Pp
 The ENA devices enable high speed and low overhead network traffic
 processing by providing multiple Tx/Rx queue pairs (the maximum number
@@ -82,8 +83,8 @@ to recover in a manner transparent to the application,
 debug logs.
 .Pp
 Some of the ENA devices support a working mode called Low-latency
-Queue (LLQ), which saves several more microseconds. This feature will
-be implemented for driver in future releases.
+Queue (LLQ), which saves several more microseconds.
+This feature will be implemented for driver in future releases.
 .Sh HARDWARE
 Supported PCI vendor ID/device IDs:
 .Pp
@@ -105,19 +106,23 @@ Supported PCI vendor ID/device IDs:
 Error occurred during initialization of the mmio register read request.
 .It ena%d: Can not reset device
 .Pp
-Device could not be reset; device may not be responding or is already
-during reset.
+Device could not be reset.
+.br
+Device may not be responding or is already during reset.
 .It ena%d: device version is too low
 .Pp
-Version of the controller is too low and it is not supported by the driver.
+Version of the controller is too old and it is not supported by the driver.
 .It ena%d: Invalid dma width value %d
 .Pp
-The controller is able to request dma transcation width. Device stopped
-responding or it demanded invalid value.
+The controller is able to request dma transaction width.
+.br
+Device stopped responding or it demanded invalid value.
 .It ena%d: Can not initialize ena admin queue with device
 .Pp
-Initialization of the Admin Queue failed; device may not be responding or there
-was a problem with initialization of the resources.
+Initialization of the Admin Queue failed.
+.br
+Device may not be responding or there was a problem with initialization of
+the resources.
 .It ena%d: Cannot get attribute for ena device rc: %d
 .Pp
 Failed to get attributes of the device from the controller.
@@ -141,11 +146,14 @@ Errors occurred when trying to configure AENQ groups.
 .It ena%d: could not allocate irq vector: %d
 .It ena%d: Unable to allocate bus resource: registers
 .Pp
-Resource allocation failed when initializing the device; driver will not
-be attached.
+Resource allocation failed when initializing the device.
+.br
+Driver will not be attached.
 .It ena%d: ENA device init failed (err: %d)
 .Pp
-Device initialization failed; driver will not be attached.
+Device initialization failed.
+.br
+Driver will not be attached.
 .It ena%d: could not activate irq vector: %d
 .Pp
 Error occurred when trying to activate interrupt vectors for Admin Queue.
@@ -157,13 +165,16 @@ Error occurred when trying to register Admin Queue int
 Error occurred during configuration of the Admin Queue interrupts.
 .It ena%d: Enable MSI-X failed
 .Pp
-Configuration of the MSI-X for Admin Queue failed; there could be lack
-of resources or interrupts could not have been configured; driver will
-not be attached.
+Configuration of the MSI-X for Admin Queue failed.
+.br
+There could be lack of resources or interrupts could not have been configured.
+.br
+Driver will not be attached.
 .It ena%d: VLAN is in use, detach first
 .Pp
-VLANs are being used when trying to detach the driver; VLANs should be detached
-first and then detach routine should be called again.
+VLANs are being used when trying to detach the driver.
+.br
+VLANs must be detached first and then detach routine have to be called again.
 .It ena%d: Unmapped RX DMA tag associations
 .It ena%d: Unmapped TX DMA tag associations
 .Pp
@@ -175,8 +186,9 @@ Error occurred when trying to destroy RX/TX DMA tag.
 .It ena%d: Cannot fill hash control
 .It ena%d: WARNING: RSS was not properly initialized, it will affect bandwidth
 .Pp
-Error occurred during initialization of one of RSS resources; device is still
-going to work but it will affect performance because all RX packets will be
+Error occurred during initialization of one of RSS resources.
+.br
+The device will work with reduced performance because all RX packets will be
 passed to queue 0 and there will be no hash information.
 .It ena%d: failed to tear down irq: %d
 .It ena%d: dev has no parent while releasing res for irq: %d
@@ -196,16 +208,20 @@ Requested MTU value is not supported and will not be s
 Device stopped responding and will be reset.
 .It ena%d: Found a Tx that wasn't completed on time, qid %d, index %d.
 .Pp
-Packet was pushed to the NIC but not sent within given time limit; it may
-be caused by hang of the IO queue.
+Packet was pushed to the NIC but not sent within given time limit.
+.br
+It may be caused by hang of the IO queue.
 .It ena%d: The number of lost tx completion is aboce the threshold (%d > %d). Reset the device
 .Pp
-If too many Tx wasn't completed on time the device is going to be reset; it may
-be caused by hanged queue or device.
+If too many Tx wasn't completed on time the device is going to be reset.
+.br
+It may be caused by hanged queue or device.
 .It ena%d: trigger reset is on
 .Pp
-Device will be reset; reset is triggered either by watchdog or if too many TX
-packets were not completed on time.
+Device will be reset.
+.br
+Reset is triggered either by watchdog or if too many TX packets were not
+completed on time.
 .It ena%d: invalid value recvd
 .Pp
 Link status received from the device in the AENQ handler is invalid.
@@ -220,7 +236,9 @@ Link status received from the device in the AENQ handl
 .It ena%d: could not allocate irq vector: %d
 .It ena%d: failed to register interrupt handler for irq %ju: %d
 .Pp
-IO resources initialization failed. Interface will not be brought up.
+IO resources initialization failed.
+.br
+Interface will not be brought up.
 .It ena%d: LRO[%d] Initialization failed!
 .Pp
 Initialization of the LRO for the RX ring failed.
@@ -228,20 +246,26 @@ Initialization of the LRO for the RX ring failed.
 .It ena%d: failed to add buffer for rx queue %d
 .It ena%d: refilled rx queue %d with %d pages only
 .Pp
-Allocation of resources used on RX path failed; if happened during
-initialization of the IO queue, the interface will not be brought up.
+Allocation of resources used on RX path failed.
+.br
+If happened during initialization of the IO queue, the interface will not be
+brought up.
 .It ena%d: ioctl promisc/allmulti
 .Pp
-IOCTL request for the device to work in promiscuous/allmulti mode; see
+IOCTL request for the device to work in promiscuous/allmulti mode.
+.br
+See
 .Xr ifconfig 8
 for more details.
 .It ena%d: too many fragments. Last fragment: %d!
 .Pp
 Packet with unsupported number of segments was queued for sending to the
-device; packet will be dropped.
+device.
+.br
+Packet will be dropped.
 .Sh SUPPORT
-If an issue is identified with the released source code with a supported adapter
-email the specific information related to the issue to
+If an issue is identified with the released source code with a supported
+adapter, please email the specific information related to the issue to
 .Aq Mt m...@semihalf.com
 and
 .Aq Mt m...@semihalf.com .

Modified: releng/11.4/sys/contrib/ena-com/ena_com.c
==============================================================================
--- releng/11.4/sys/contrib/ena-com/ena_com.c	Tue May 26 19:34:05 2020	(r361538)
+++ releng/11.4/sys/contrib/ena-com/ena_com.c	Tue May 26 22:41:12 2020	(r361539)
@@ -1,7 +1,7 @@
 /*-
  * BSD LICENSE
  *
- * Copyright (c) 2015-2017 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -32,9 +32,6 @@
  */
 
 #include "ena_com.h"
-#ifdef ENA_INTERNAL
-#include "ena_gen_info.h"
-#endif
 
 /*****************************************************************************/
 /*****************************************************************************/
@@ -52,9 +49,6 @@
 #define ENA_EXTENDED_STAT_GET_QUEUE(_funct_queue) (_funct_queue >> 16)
 
 #endif /* ENA_EXTENDED_STATS */
-#define MIN_ENA_VER (((ENA_COMMON_SPEC_VERSION_MAJOR) << \
-               ENA_REGS_VERSION_MAJOR_VERSION_SHIFT) \
-               | (ENA_COMMON_SPEC_VERSION_MINOR))
 
 #define ENA_CTRL_MAJOR         0
 #define ENA_CTRL_MINOR         0
@@ -76,6 +70,10 @@
 
 #define ENA_REGS_ADMIN_INTR_MASK 1
 
+#define ENA_MIN_POLL_US 100
+
+#define ENA_MAX_POLL_US 5000
+
 /*****************************************************************************/
 /*****************************************************************************/
 /*****************************************************************************/
@@ -103,7 +101,7 @@ struct ena_com_stats_ctx {
        struct ena_admin_acq_get_stats_resp get_resp;
 };
 
-static inline int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
+static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
                                       struct ena_common_mem_addr *ena_addr,
                                       dma_addr_t addr)
 {
@@ -112,8 +110,8 @@ static inline int ena_com_mem_addr_set(struct ena_com_
                return ENA_COM_INVAL;
        }
 
-       ena_addr->mem_addr_low = (u32)addr;
-       ena_addr->mem_addr_high = (u16)((u64)addr >> 32);
+       ena_addr->mem_addr_low = lower_32_bits(addr);
+       ena_addr->mem_addr_high = (u16)upper_32_bits(addr);
 
        return 0;
 }
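
For reference, the change above switches the address split to the lower_32_bits()/upper_32_bits() helpers. A minimal standalone sketch of that split, assuming Linux-style macro definitions rather than quoting ena_plat.h:

/*
 * Illustrative only: splitting a 64-bit bus address into the low/high
 * words stored in ena_common_mem_addr.  The two-step shift avoids
 * undefined behaviour when the argument is only 32 bits wide.
 */
#include <stdint.h>
#include <stdio.h>

#define lower_32_bits(n)	((uint32_t)((n) & 0xffffffffu))
#define upper_32_bits(n)	((uint32_t)(((n) >> 16) >> 16))

int
main(void)
{
	uint64_t addr = 0x0000001234abcd00ULL;	/* example DMA address */

	/* The ENA descriptor keeps only 16 high bits, hence the (u16)
	 * cast in the hunk above. */
	printf("low  = 0x%08x\n", lower_32_bits(addr));	/* 0x34abcd00 */
	printf("high = 0x%08x\n", upper_32_bits(addr));	/* 0x00000012 */
	return (0);
}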
@@ -127,7 +125,7 @@ static int ena_com_admin_init_sq(struct ena_com_admin_
                               sq->mem_handle);
 
        if (!sq->entries) {
-               ena_trc_err("memory allocation failed");
+               ena_trc_err("memory allocation failed\n");
                return ENA_COM_NO_MEM;
        }
 
@@ -149,7 +147,7 @@ static int ena_com_admin_init_cq(struct ena_com_admin_
                               cq->mem_handle);
 
        if (!cq->entries)  {
-               ena_trc_err("memory allocation failed");
+               ena_trc_err("memory allocation failed\n");
                return ENA_COM_NO_MEM;
        }
 
@@ -174,7 +172,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev 
                        aenq->mem_handle);
 
        if (!aenq->entries) {
-               ena_trc_err("memory allocation failed");
+               ena_trc_err("memory allocation failed\n");
                return ENA_COM_NO_MEM;
        }
 
@@ -204,7 +202,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev 
        return 0;
 }
 
-static inline void comp_ctxt_release(struct ena_com_admin_queue *queue,
+static void comp_ctxt_release(struct ena_com_admin_queue *queue,
                                     struct ena_comp_ctx *comp_ctx)
 {
        comp_ctx->occupied = false;
@@ -220,6 +218,11 @@ static struct ena_comp_ctx *get_comp_ctxt(struct ena_c
                return NULL;
        }
 
+       if (unlikely(!queue->comp_ctx)) {
+               ena_trc_err("Completion context is NULL\n");
+               return NULL;
+       }
+
        if (unlikely(queue->comp_ctx[command_id].occupied && capture)) {
                ena_trc_err("Completion context is occupied\n");
                return NULL;
@@ -249,7 +252,7 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd
        tail_masked = admin_queue->sq.tail & queue_size_mask;
 
        /* In case of queue FULL */
-       cnt = ATOMIC32_READ(&admin_queue->outstanding_cmds);
+       cnt = (u16)ATOMIC32_READ(&admin_queue->outstanding_cmds);
        if (cnt >= admin_queue->q_depth) {
                ena_trc_dbg("admin queue is full.\n");
                admin_queue->stats.out_of_space++;
@@ -293,7 +296,7 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd
        return comp_ctx;
 }
 
-static inline int ena_com_init_comp_ctxt(struct ena_com_admin_queue *queue)
+static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *queue)
 {
        size_t size = queue->q_depth * sizeof(struct ena_comp_ctx);
        struct ena_comp_ctx *comp_ctx;
@@ -301,7 +304,7 @@ static inline int ena_com_init_comp_ctxt(struct ena_co
 
        queue->comp_ctx = ENA_MEM_ALLOC(queue->q_dmadev, size);
        if (unlikely(!queue->comp_ctx)) {
-               ena_trc_err("memory allocation failed");
+               ena_trc_err("memory allocation failed\n");
                return ENA_COM_NO_MEM;
        }
 
@@ -320,7 +323,7 @@ static struct ena_comp_ctx *ena_com_submit_admin_cmd(s
                                                     struct ena_admin_acq_entry *comp,
                                                     size_t comp_size_in_bytes)
 {
-       unsigned long flags;
+       unsigned long flags = 0;
        struct ena_comp_ctx *comp_ctx;
 
        ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
@@ -332,7 +335,7 @@ static struct ena_comp_ctx *ena_com_submit_admin_cmd(s
                                              cmd_size_in_bytes,
                                              comp,
                                              comp_size_in_bytes);
-       if (unlikely(IS_ERR(comp_ctx)))
+       if (IS_ERR(comp_ctx))
                admin_queue->running_state = false;
        ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
 
@@ -348,6 +351,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_
 
        memset(&io_sq->desc_addr, 0x0, sizeof(io_sq->desc_addr));
 
+       io_sq->dma_addr_bits = (u8)ena_dev->dma_addr_bits;
        io_sq->desc_entry_size =
                (io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) ?
                sizeof(struct ena_eth_io_tx_desc) :
@@ -373,18 +377,21 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_
                }
 
                if (!io_sq->desc_addr.virt_addr) {
-                       ena_trc_err("memory allocation failed");
+                       ena_trc_err("memory allocation failed\n");
                        return ENA_COM_NO_MEM;
                }
        }
 
        if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
                /* Allocate bounce buffers */
-		io_sq->bounce_buf_ctrl.buffer_size = ena_dev->llq_info.desc_list_entry_size;
-		io_sq->bounce_buf_ctrl.buffers_num = ENA_COM_BOUNCE_BUFFER_CNTRL_CNT;
+               io_sq->bounce_buf_ctrl.buffer_size =
+                       ena_dev->llq_info.desc_list_entry_size;
+               io_sq->bounce_buf_ctrl.buffers_num =
+                       ENA_COM_BOUNCE_BUFFER_CNTRL_CNT;
                io_sq->bounce_buf_ctrl.next_to_use = 0;
 
-		size = io_sq->bounce_buf_ctrl.buffer_size * io_sq->bounce_buf_ctrl.buffers_num;
+               size = io_sq->bounce_buf_ctrl.buffer_size *
+                       io_sq->bounce_buf_ctrl.buffers_num;
 
                ENA_MEM_ALLOC_NODE(ena_dev->dmadev,
                                   size,
@@ -395,11 +402,12 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_
 			io_sq->bounce_buf_ctrl.base_buffer = ENA_MEM_ALLOC(ena_dev->dmadev, size);
 
                if (!io_sq->bounce_buf_ctrl.base_buffer) {
-                       ena_trc_err("bounce buffer memory allocation failed");
+                       ena_trc_err("bounce buffer memory allocation failed\n");
                        return ENA_COM_NO_MEM;
                }
 
-		memcpy(&io_sq->llq_info, &ena_dev->llq_info, sizeof(io_sq->llq_info));
+               memcpy(&io_sq->llq_info, &ena_dev->llq_info,
+                      sizeof(io_sq->llq_info));
 
                /* Initiate the first bounce buffer */
                io_sq->llq_buf_ctrl.curr_bounce_buf =
@@ -408,6 +416,12 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_
                       0x0, io_sq->llq_info.desc_list_entry_size);
                io_sq->llq_buf_ctrl.descs_left_in_line =
                        io_sq->llq_info.descs_num_before_header;
+               io_sq->disable_meta_caching =
+                       io_sq->llq_info.disable_meta_caching;
+
+               if (io_sq->llq_info.max_entries_in_tx_burst > 0)
+                       io_sq->entries_in_tx_burst_left =
+                               io_sq->llq_info.max_entries_in_tx_burst;
        }
 
        io_sq->tail = 0;
@@ -451,7 +465,7 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_
        }
 
        if (!io_cq->cdesc_addr.virt_addr) {
-               ena_trc_err("memory allocation failed");
+               ena_trc_err("memory allocation failed\n");
                return ENA_COM_NO_MEM;
        }
 
@@ -500,12 +514,12 @@ static void ena_com_handle_admin_completion(struct ena
        cqe = &admin_queue->cq.entries[head_masked];
 
        /* Go over all the completions */
-       while ((cqe->acq_common_descriptor.flags &
+       while ((READ_ONCE8(cqe->acq_common_descriptor.flags) &
                        ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK) == phase) {
                /* Do not read the rest of the completion entry before the
                 * phase bit was validated
                 */
-               rmb();
+               dma_rmb();
                ena_com_handle_single_admin_completion(admin_queue, cqe);
 
                head_masked++;
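
For reference, the loop above follows the usual phase-bit completion-queue pattern: read the flags once, test the phase bit, then issue a read barrier before consuming the rest of the entry. A standalone sketch of that pattern with invented names (C11 atomics stand in for READ_ONCE8()/dma_rmb(); this is not the ena_com structure layout):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PHASE_MASK	0x1

struct cq_entry {
	_Atomic uint8_t	flags;		/* bit 0 carries the phase */
	uint8_t		payload[31];	/* rest of the completion */
};

/* Consume one entry only if the device has already published it. */
static bool
poll_one(struct cq_entry *e, uint8_t expected_phase)
{
	/* Read the flags exactly once (READ_ONCE8() in the driver). */
	uint8_t flags = atomic_load_explicit(&e->flags, memory_order_relaxed);

	if ((flags & PHASE_MASK) != expected_phase)
		return (false);		/* not written yet */

	/* Order the phase check before reading the payload (dma_rmb()). */
	atomic_thread_fence(memory_order_acquire);

	return (true);			/* payload[] may now be read */
}

int
main(void)
{
	struct cq_entry e = { .flags = 1 };

	printf("published: %d\n", poll_one(&e, 1));	/* 1 */
	return (0);
}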
@@ -529,12 +543,9 @@ static int ena_com_comp_status_to_errno(u8 comp_status
        if (unlikely(comp_status != 0))
                ena_trc_err("admin command failed[%u]\n", comp_status);
 
-       if (unlikely(comp_status > ENA_ADMIN_UNKNOWN_ERROR))
-               return ENA_COM_INVAL;
-
        switch (comp_status) {
        case ENA_ADMIN_SUCCESS:
-               return 0;
+               return ENA_COM_OK;
        case ENA_ADMIN_RESOURCE_ALLOCATION_FAILURE:
                return ENA_COM_NO_MEM;
        case ENA_ADMIN_UNSUPPORTED_OPCODE:
@@ -546,23 +557,32 @@ static int ena_com_comp_status_to_errno(u8 comp_status
                return ENA_COM_INVAL;
        }
 
-       return 0;
+       return ENA_COM_INVAL;
 }
 
+static inline void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us)
+{
+       delay_us = ENA_MAX32(ENA_MIN_POLL_US, delay_us);
+       delay_us = ENA_MIN32(delay_us * (1 << exp), ENA_MAX_POLL_US);
+       ENA_USLEEP(delay_us);
+}
+
 static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_ctx,
						     struct ena_com_admin_queue *admin_queue)
 {
-       unsigned long flags, timeout;
+       unsigned long flags = 0;
+       ena_time_t timeout;
        int ret;
+       u32 exp = 0;
 
        timeout = ENA_GET_SYSTEM_TIMEOUT(admin_queue->completion_timeout);
 
        while (1) {
-                ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
-                ena_com_handle_admin_completion(admin_queue);
-                ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
+               ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
+               ena_com_handle_admin_completion(admin_queue);
+               ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
 
-                if (comp_ctx->status != ENA_CMD_SUBMITTED)
+               if (comp_ctx->status != ENA_CMD_SUBMITTED)
                        break;
 
                if (ENA_TIME_EXPIRE(timeout)) {
@@ -577,7 +597,7 @@ static int ena_com_wait_and_process_admin_cq_polling(s
                        goto err;
                }
 
-               ENA_MSLEEP(100);
+		ena_delay_exponential_backoff_us(exp++, admin_queue->ena_dev->ena_min_poll_delay_us);
        }
 
        if (unlikely(comp_ctx->status == ENA_CMD_ABORTED)) {
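
For reference, the polling loop above now sleeps with an exponential backoff instead of a flat ENA_MSLEEP(100): the delay starts at the configured minimum (never below ENA_MIN_POLL_US, 100 us), doubles with each retry, and is capped at ENA_MAX_POLL_US, 5000 us. A standalone sketch of the resulting sleep sequence, assuming a 100 us base value (the real base comes from ena_dev->ena_min_poll_delay_us, which is set elsewhere in the driver):

/* Illustrative only: the delay values produced by the backoff helper. */
#include <stdint.h>
#include <stdio.h>

#define ENA_MIN_POLL_US	100
#define ENA_MAX_POLL_US	5000

static uint32_t
backoff_us(uint32_t exp, uint32_t delay_us)
{
	if (delay_us < ENA_MIN_POLL_US)
		delay_us = ENA_MIN_POLL_US;
	delay_us *= (uint32_t)1 << exp;
	return (delay_us < ENA_MAX_POLL_US ? delay_us : ENA_MAX_POLL_US);
}

int
main(void)
{
	for (uint32_t exp = 0; exp < 8; exp++)
		printf("retry %u: sleep %u us\n", exp, backoff_us(exp, 100));
	/* 100, 200, 400, 800, 1600, 3200, 5000, 5000 */
	return (0);
}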
@@ -598,42 +618,121 @@ err:
        return ret;
 }
 
+/**
+ * Set the LLQ configurations of the firmware
+ *
+ * The driver provides only the enabled feature values to the device,
+ * which in turn, checks if they are supported.
+ */
+static int ena_com_set_llq(struct ena_com_dev *ena_dev)
+{
+       struct ena_com_admin_queue *admin_queue;
+       struct ena_admin_set_feat_cmd cmd;
+       struct ena_admin_set_feat_resp resp;
+       struct ena_com_llq_info *llq_info = &ena_dev->llq_info;
+       int ret;
+
+       memset(&cmd, 0x0, sizeof(cmd));
+       admin_queue = &ena_dev->admin_queue;
+
+       cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE;
+       cmd.feat_common.feature_id = ENA_ADMIN_LLQ;
+
+       cmd.u.llq.header_location_ctrl_enabled = llq_info->header_location_ctrl;
+       cmd.u.llq.entry_size_ctrl_enabled = llq_info->desc_list_entry_size_ctrl;
+	cmd.u.llq.desc_num_before_header_enabled = llq_info->descs_num_before_header;
+       cmd.u.llq.descriptors_stride_ctrl_enabled = llq_info->desc_stride_ctrl;
+
+       if (llq_info->disable_meta_caching)
+               cmd.u.llq.accel_mode.u.set.enabled_flags |=
+                       BIT(ENA_ADMIN_DISABLE_META_CACHING);
+
+       if (llq_info->max_entries_in_tx_burst)
+               cmd.u.llq.accel_mode.u.set.enabled_flags |=
+                       BIT(ENA_ADMIN_LIMIT_TX_BURST);
+
+       ret = ena_com_execute_admin_command(admin_queue,
+                                           (struct ena_admin_aq_entry *)&cmd,
+                                           sizeof(cmd),
+                                           (struct ena_admin_acq_entry *)&resp,
+                                           sizeof(resp));
+
+       if (unlikely(ret))
+               ena_trc_err("Failed to set LLQ configurations: %d\n", ret);
+
+       return ret;
+}
+
 static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
-                                  struct ena_admin_feature_llq_desc *llq_desc)
+                                  struct ena_admin_feature_llq_desc *llq_features,
+                                  struct ena_llq_configurations *llq_default_cfg)
 {
        struct ena_com_llq_info *llq_info = &ena_dev->llq_info;
+       u16 supported_feat;
+       int rc;
 
        memset(llq_info, 0, sizeof(*llq_info));
 
-       switch (llq_desc->header_location_ctrl) {
-       case ENA_ADMIN_INLINE_HEADER:
-               llq_info->inline_header = true;
-               break;
-       case ENA_ADMIN_HEADER_RING:
-               llq_info->inline_header = false;
-               break;
-       default:
-               ena_trc_err("Invalid header location control\n");
+       supported_feat = llq_features->header_location_ctrl_supported;
+
+       if (likely(supported_feat & llq_default_cfg->llq_header_location)) {
+               llq_info->header_location_ctrl =
+                       llq_default_cfg->llq_header_location;
+       } else {
+		ena_trc_err("Invalid header location control, supported: 0x%x\n",
+                           supported_feat);
                return -EINVAL;
        }
 
-       switch (llq_desc->entry_size_ctrl) {
-       case ENA_ADMIN_LIST_ENTRY_SIZE_128B:
-               llq_info->desc_list_entry_size = 128;
-               break;
-       case ENA_ADMIN_LIST_ENTRY_SIZE_192B:
-               llq_info->desc_list_entry_size = 192;
-               break;
-       case ENA_ADMIN_LIST_ENTRY_SIZE_256B:
-               llq_info->desc_list_entry_size = 256;
-               break;
-       default:
-               ena_trc_err("Invalid entry_size_ctrl %d\n",
-                           llq_desc->entry_size_ctrl);
-               return -EINVAL;
+       if (likely(llq_info->header_location_ctrl == ENA_ADMIN_INLINE_HEADER)) {
+		supported_feat = llq_features->descriptors_stride_ctrl_supported;
+		if (likely(supported_feat & llq_default_cfg->llq_stride_ctrl)) {
+			llq_info->desc_stride_ctrl = llq_default_cfg->llq_stride_ctrl;
+               } else  {
+			if (supported_feat & ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY) {
+				llq_info->desc_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY;
+			} else if (supported_feat & ENA_ADMIN_SINGLE_DESC_PER_ENTRY) {
+				llq_info->desc_stride_ctrl = ENA_ADMIN_SINGLE_DESC_PER_ENTRY;
+			} else {
+				ena_trc_err("Invalid desc_stride_ctrl, supported: 0x%x\n",
+					    supported_feat);
+                               return -EINVAL;
+                       }
+
+			ena_trc_err("Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
+                                   llq_default_cfg->llq_stride_ctrl,
+                                   supported_feat,
+                                   llq_info->desc_stride_ctrl);
+               }
+       } else {
+               llq_info->desc_stride_ctrl = 0;
        }
 
-       if ((llq_info->desc_list_entry_size & 0x7)) {
+       supported_feat = llq_features->entry_size_ctrl_supported;
+       if (likely(supported_feat & llq_default_cfg->llq_ring_entry_size)) {
+		llq_info->desc_list_entry_size_ctrl = llq_default_cfg->llq_ring_entry_size;
+		llq_info->desc_list_entry_size = llq_default_cfg->llq_ring_entry_size_value;
+       } else {
+               if (supported_feat & ENA_ADMIN_LIST_ENTRY_SIZE_128B) {
+			llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
+			llq_info->desc_list_entry_size = 128;
+		} else if (supported_feat & ENA_ADMIN_LIST_ENTRY_SIZE_192B) {
+			llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_192B;
+			llq_info->desc_list_entry_size = 192;
+		} else if (supported_feat & ENA_ADMIN_LIST_ENTRY_SIZE_256B) {
+			llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_256B;
+                       llq_info->desc_list_entry_size = 256;
+               } else {
+			ena_trc_err("Invalid entry_size_ctrl, supported: 0x%x\n", supported_feat);
+                       return -EINVAL;
+               }
+
+		ena_trc_err("Default llq ring entry size is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
+                           llq_default_cfg->llq_ring_entry_size,
+                           supported_feat,
+                           llq_info->desc_list_entry_size);
+       }
+       if (unlikely(llq_info->desc_list_entry_size & 0x7)) {
                /* The desc list entry size should be whole multiply of 8
                 * This requirement comes from __iowrite64_copy()
                 */
@@ -642,35 +741,56 @@ static int ena_com_config_llq_info(struct ena_com_dev 
                return -EINVAL;
        }
 
-       if (llq_info->inline_header) {
-               llq_info->desc_stride_ctrl = llq_desc->descriptors_stride_ctrl;
-		if ((llq_info->desc_stride_ctrl != ENA_ADMIN_SINGLE_DESC_PER_ENTRY) &&
-		    (llq_info->desc_stride_ctrl != ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY)) {
-                       ena_trc_err("Invalid desc_stride_ctrl %d\n",
-                                   llq_info->desc_stride_ctrl);
-                       return -EINVAL;
-               }
-       } else {
-               llq_info->desc_stride_ctrl = ENA_ADMIN_SINGLE_DESC_PER_ENTRY;
-       }
-
-       if (llq_info->desc_stride_ctrl == ENA_ADMIN_SINGLE_DESC_PER_ENTRY)
+       if (llq_info->desc_stride_ctrl == ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY)
                llq_info->descs_per_entry = llq_info->desc_list_entry_size /
                        sizeof(struct ena_eth_io_tx_desc);
        else
                llq_info->descs_per_entry = 1;
 
-	llq_info->descs_num_before_header = llq_desc->desc_num_before_header_ctrl;
+       supported_feat = llq_features->desc_num_before_header_supported;
+	if (likely(supported_feat & llq_default_cfg->llq_num_decs_before_header)) {
+		llq_info->descs_num_before_header = llq_default_cfg->llq_num_decs_before_header;
+       } else {
+               if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2) {
+			llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2;
+		} else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_1) {
+			llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_1;
+		} else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_4) {
+			llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_4;
+		} else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8) {
+			llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8;
+               } else {
+			ena_trc_err("Invalid descs_num_before_header, supported: 0x%x\n",
+                                   supported_feat);
+                       return -EINVAL;
+               }
 
-       return 0;
-}
+		ena_trc_err("Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
+                           llq_default_cfg->llq_num_decs_before_header,
+                           supported_feat,
+                           llq_info->descs_num_before_header);
+       }
+       /* Check for accelerated queue supported */
+       llq_info->disable_meta_caching =
+               llq_features->accel_mode.u.get.supported_flags &
+               BIT(ENA_ADMIN_DISABLE_META_CACHING);
 
+	if (llq_features->accel_mode.u.get.supported_flags & BIT(ENA_ADMIN_LIMIT_TX_BURST))
+               llq_info->max_entries_in_tx_burst =
+                       llq_features->accel_mode.u.get.max_tx_burst_size /
+                       llq_default_cfg->llq_ring_entry_size_value;
 
+       rc = ena_com_set_llq(ena_dev);
+       if (rc)
+               ena_trc_err("Cannot set LLQ configuration: %d\n", rc);
 
+       return rc;
+}
+
 static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *comp_ctx,
							struct ena_com_admin_queue *admin_queue)
 {
-       unsigned long flags;
+       unsigned long flags = 0;
        int ret;
 
        ENA_WAIT_EVENT_WAIT(comp_ctx->wait_event,
@@ -687,16 +807,25 @@ static int ena_com_wait_and_process_admin_cq_interrupt
                admin_queue->stats.no_completion++;
                ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
 
-               if (comp_ctx->status == ENA_CMD_COMPLETED)
-			ena_trc_err("The ena device have completion but the driver didn't receive any MSI-X interrupt (cmd %d)\n",
-				    comp_ctx->cmd_opcode);
-		else
-			ena_trc_err("The ena device doesn't send any completion for the admin cmd %d status %d\n",
+		if (comp_ctx->status == ENA_CMD_COMPLETED) {
+			ena_trc_err("The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n",
+				    comp_ctx->cmd_opcode, admin_queue->auto_polling ? "ON" : "OFF");
+                       /* Check if fallback to polling is enabled */
+                       if (admin_queue->auto_polling)
+                               admin_queue->polling = true;
+               } else {
+			ena_trc_err("The ena device didn't send a completion for the admin cmd %d status %d\n",
                                    comp_ctx->cmd_opcode, comp_ctx->status);
-
-               admin_queue->running_state = false;
-               ret = ENA_COM_TIMER_EXPIRED;
-               goto err;
+               }
+               /* Check if shifted to polling mode.
+		 * This will happen if there is a completion without an interrupt
+		 * and autopolling mode is enabled. Continuing normal execution in such case
+                */
+               if (!admin_queue->polling) {
+                       admin_queue->running_state = false;
+                       ret = ENA_COM_TIMER_EXPIRED;
+                       goto err;
+               }
        }
 
        ret = ena_com_comp_status_to_errno(comp_ctx->comp_status);
@@ -715,7 +844,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *
        volatile struct ena_admin_ena_mmio_req_read_less_resp *read_resp =
                mmio_read->read_resp;
        u32 mmio_read_reg, ret, i;
-       unsigned long flags;
+       unsigned long flags = 0;
        u32 timeout = mmio_read->reg_read_to;
 
        ENA_MIGHT_SLEEP();
@@ -736,15 +865,11 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *
        mmio_read_reg |= mmio_read->seq_num &
                        ENA_REGS_MMIO_REG_READ_REQ_ID_MASK;
 
-       /* make sure read_resp->req_id get updated before the hw can write
-        * there
-        */
-       wmb();
+       ENA_REG_WRITE32(ena_dev->bus, mmio_read_reg,
+                       ena_dev->reg_bar + ENA_REGS_MMIO_REG_READ_OFF);
 
-	ENA_REG_WRITE32(ena_dev->bus, mmio_read_reg, ena_dev->reg_bar + ENA_REGS_MMIO_REG_READ_OFF);
-
        for (i = 0; i < timeout; i++) {
-               if (read_resp->req_id == mmio_read->seq_num)
+               if (READ_ONCE16(read_resp->req_id) == mmio_read->seq_num)
                        break;
 
                ENA_UDELAY(1);
@@ -761,7 +886,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *
        }
 
        if (read_resp->reg_off != offset) {
-               ena_trc_err("Read failure: wrong offset provided");
+               ena_trc_err("Read failure: wrong offset provided\n");
                ret = ENA_MMIO_READ_TIMEOUT;
        } else {
                ret = read_resp->reg_val;
@@ -856,8 +981,9 @@ static void ena_com_io_queue_free(struct ena_com_dev *
        }
 
        if (io_sq->bounce_buf_ctrl.base_buffer) {
-		size = io_sq->llq_info.desc_list_entry_size * ENA_COM_BOUNCE_BUFFER_CNTRL_CNT;
-		ENA_MEM_FREE(ena_dev->dmadev, io_sq->bounce_buf_ctrl.base_buffer);
+               ENA_MEM_FREE(ena_dev->dmadev,
+                            io_sq->bounce_buf_ctrl.base_buffer,
+			     (io_sq->llq_info.desc_list_entry_size * ENA_COM_BOUNCE_BUFFER_CNTRL_CNT));
                io_sq->bounce_buf_ctrl.base_buffer = NULL;
        }
 }
@@ -865,9 +991,13 @@ static void ena_com_io_queue_free(struct ena_com_dev *
 static int wait_for_reset_state(struct ena_com_dev *ena_dev, u32 timeout,
                                u16 exp_state)
 {
-       u32 val, i;
+       u32 val, exp = 0;
+       ena_time_t timeout_stamp;
 
-       for (i = 0; i < timeout; i++) {
+       /* Convert timeout from resolution of 100ms to us resolution. */
+       timeout_stamp = ENA_GET_SYSTEM_TIMEOUT(100 * 1000 * timeout);
+
+       while (1) {
                val = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF);
 
                if (unlikely(val == ENA_MMIO_READ_TIMEOUT)) {
@@ -879,11 +1009,11 @@ static int wait_for_reset_state(struct ena_com_dev *en
                        exp_state)
                        return 0;
 
-               /* The resolution of the timeout is 100ms */
-               ENA_MSLEEP(100);
-       }
+               if (ENA_TIME_EXPIRE(timeout_stamp))
+                       return ENA_COM_TIMER_EXPIRED;
 
-       return ENA_COM_TIMER_EXPIRED;
+		ena_delay_exponential_backoff_us(exp++, ena_dev->ena_min_poll_delay_us);
+       }
 }
 
 static bool ena_com_check_supported_feature_id(struct ena_com_dev *ena_dev,
@@ -903,7 +1033,8 @@ static int ena_com_get_feature_ex(struct ena_com_dev *
                                  struct ena_admin_get_feat_resp *get_resp,
                                  enum ena_admin_aq_feature_id feature_id,
                                  dma_addr_t control_buf_dma_addr,
-                                 u32 control_buff_size)
+                                 u32 control_buff_size,
+                                 u8 feature_ver)
 {
        struct ena_com_admin_queue *admin_queue;
        struct ena_admin_get_feat_cmd get_cmd;
@@ -934,7 +1065,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *
        }
 
        get_cmd.control_buffer.length = control_buff_size;
-
+       get_cmd.feat_common.feature_version = feature_ver;
        get_cmd.feat_common.feature_id = feature_id;
 
        ret = ena_com_execute_admin_command(admin_queue,
@@ -954,19 +1085,45 @@ static int ena_com_get_feature_ex(struct ena_com_dev *
 
 static int ena_com_get_feature(struct ena_com_dev *ena_dev,
                               struct ena_admin_get_feat_resp *get_resp,
-                              enum ena_admin_aq_feature_id feature_id)
+                              enum ena_admin_aq_feature_id feature_id,
+                              u8 feature_ver)
 {
        return ena_com_get_feature_ex(ena_dev,
                                      get_resp,
                                      feature_id,
                                      0,
-                                     0);
+                                     0,
+                                     feature_ver);
 }
 
+int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev)
+{
+       return ena_dev->rss.hash_func;
+}
+
+static void ena_com_hash_key_fill_default_key(struct ena_com_dev *ena_dev)
+{
+       struct ena_admin_feature_rss_flow_hash_control *hash_key =
+               (ena_dev->rss).hash_key;
+
+       ENA_RSS_FILL_KEY(&hash_key->key, sizeof(hash_key->key));
+       /* The key buffer is stored in the device in an array of
+        * uint32 elements. Therefore the number of elements can be derived
+        * by dividing the buffer length by the size of each array element.
+        * In current implementation each element is sized at uint32_t
+        * so it's actually a division by 4 but if the element size changes,
+        * there is no need to rewrite this code.
+        */
+       hash_key->keys_num = sizeof(hash_key->key) / sizeof(hash_key->key[0]);
+}
+
 static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev)
 {
        struct ena_rss *rss = &ena_dev->rss;
 
+	if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_FUNCTION))
+               return ENA_COM_UNSUPPORTED;
+
        ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev,
                               sizeof(*rss->hash_key),
                               rss->hash_key,
@@ -1030,7 +1187,7 @@ static int ena_com_indirect_table_allocate(struct ena_
        int ret;
 
        ret = ena_com_get_feature(ena_dev, &get_resp,
-                                 ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG);
+                                 ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, 0);
        if (unlikely(ret))
                return ret;
 
@@ -1094,7 +1251,9 @@ static void ena_com_indirect_table_destroy(struct ena_
        rss->rss_ind_tbl = NULL;
 
        if (rss->host_rss_ind_tbl)
-               ENA_MEM_FREE(ena_dev->dmadev, rss->host_rss_ind_tbl);
+               ENA_MEM_FREE(ena_dev->dmadev,
+                            rss->host_rss_ind_tbl,
+                            ((1ULL << rss->tbl_log_size) * sizeof(u16)));
        rss->host_rss_ind_tbl = NULL;
 }
 
@@ -1195,63 +1354,29 @@ static int ena_com_ind_tbl_convert_to_device(struct en
        return 0;
 }
 
-static int ena_com_ind_tbl_convert_from_device(struct ena_com_dev *ena_dev)
-{
-       u16 dev_idx_to_host_tbl[ENA_TOTAL_NUM_QUEUES] = { (u16)-1 };
-       struct ena_rss *rss = &ena_dev->rss;
-       u8 idx;
-       u16 i;
-
-       for (i = 0; i < ENA_TOTAL_NUM_QUEUES; i++)
-               dev_idx_to_host_tbl[ena_dev->io_sq_queues[i].idx] = i;
-
-       for (i = 0; i < 1 << rss->tbl_log_size; i++) {
-               if (rss->rss_ind_tbl[i].cq_idx > ENA_TOTAL_NUM_QUEUES)
-                       return ENA_COM_INVAL;
-               idx = (u8)rss->rss_ind_tbl[i].cq_idx;
-
-               if (dev_idx_to_host_tbl[idx] > ENA_TOTAL_NUM_QUEUES)
-                       return ENA_COM_INVAL;
-
-               rss->host_rss_ind_tbl[i] = dev_idx_to_host_tbl[idx];
-       }
-
-       return 0;
-}
-
-static int ena_com_init_interrupt_moderation_table(struct ena_com_dev *ena_dev)
-{
-       size_t size;
-
-       size = sizeof(struct ena_intr_moder_entry) * ENA_INTR_MAX_NUM_OF_LEVELS;
-
-       ena_dev->intr_moder_tbl = ENA_MEM_ALLOC(ena_dev->dmadev, size);
-       if (!ena_dev->intr_moder_tbl)
-               return ENA_COM_NO_MEM;
-
-       ena_com_config_default_interrupt_moderation_table(ena_dev);
-
-       return 0;
-}
-
 static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev,
                                                 u16 intr_delay_resolution)
 {
-       struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl;
-       unsigned int i;
+       u16 prev_intr_delay_resolution = ena_dev->intr_delay_resolution;
 
-       if (!intr_delay_resolution) {
+       if (unlikely(!intr_delay_resolution)) {
                ena_trc_err("Illegal intr_delay_resolution provided. Going to 
use default 1 usec resolution\n");
-               intr_delay_resolution = 1;
+               intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
        }
-       ena_dev->intr_delay_resolution = intr_delay_resolution;
 
        /* update Rx */
-       for (i = 0; i < ENA_INTR_MAX_NUM_OF_LEVELS; i++)
-               intr_moder_tbl[i].intr_moder_interval /= intr_delay_resolution;
+       ena_dev->intr_moder_rx_interval =
+               ena_dev->intr_moder_rx_interval *
+               prev_intr_delay_resolution /
+               intr_delay_resolution;
 
        /* update Tx */
-       ena_dev->intr_moder_tx_interval /= intr_delay_resolution;
+       ena_dev->intr_moder_tx_interval =
+               ena_dev->intr_moder_tx_interval *
+               prev_intr_delay_resolution /
+               intr_delay_resolution;
+
+       ena_dev->intr_delay_resolution = intr_delay_resolution;
 }
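
For reference, the rewritten ena_com_update_intr_delay_resolution() above rescales the stored moderation intervals from the previous delay resolution to the new one so the effective time in microseconds stays the same, instead of repeatedly dividing the stored values in place. A standalone sketch with illustrative numbers (per the message above, the fallback resolution is 1 us; the example values are not driver defaults):

/* Illustrative only: converting an interval between delay resolutions. */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint16_t prev_resolution = 1;	/* 1 us per unit (fallback value) */
	uint16_t new_resolution = 4;	/* device reports 4 us per unit */
	uint32_t rx_interval = 200;	/* 200 units * 1 us = 200 us */

	rx_interval = rx_interval * prev_resolution / new_resolution;

	printf("rx interval: %u units of %u us = %u us\n",
	    rx_interval, new_resolution, rx_interval * new_resolution);
	/* 50 units of 4 us = 200 us, i.e. the effective delay is unchanged */
	return (0);
}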
 

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***