Are you going to send a FC support patch for nvme-cli as well?
I will, but I'm concentrating on core support for now. I expect a fair
number of things need to be done in the cli.
+ assoc_rqst->assoc_cmd.ersp_ratio = cpu_to_be16(ersp_ratio);
+	assoc_rqst->assoc_cmd.sqsize = cpu_to_be16(qsize);
On 7/29/2016 3:10 PM, J Freyensee wrote:
+	/* TODO:
+	 * assoc_rqst->assoc_cmd.cntlid = cpu_to_be16(?);
+	 * strncpy(assoc_rqst->assoc_cmd.hostid, ?,
+	 *	min(FCNVME_ASSOC_HOSTID_LEN, NVMF_NQN_SIZE));
+	 * strncpy(assoc_rqst->assoc_cmd.hostnqn, ?,
+	 *	min(FCNVME_ASSOC_HOSTNQN_LEN, NVMF_NQN_SIZE));
+	 */
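For orientation, a minimal sketch of how those TODO fields might eventually be
filled in once the host identifiers are plumbed through from the fabrics layer;
the ctrl/opts names below are illustrative assumptions and are not taken from
the posted patch:

	/* Illustrative sketch only: assumes the fabrics connect options (opts)
	 * and the controller id are visible where the association request is
	 * built; none of these names come from the posted patch.
	 */
	assoc_rqst->assoc_cmd.cntlid = cpu_to_be16(ctrl->cntlid);
	strncpy(assoc_rqst->assoc_cmd.hostnqn, opts->host->nqn,
		min(FCNVME_ASSOC_HOSTNQN_LEN, NVMF_NQN_SIZE));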
This looks mostly fine, a few nitpicks below:
> +config NVME_FC
> + tristate "NVM Express over Fabrics FC host driver"
> + depends on BLK_DEV_NVME
This should be
select NVME_CORE
instead. The existing RDMA and loop drivers also get this wrong,
but I'll send a patch to fix it up.
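With that change the entry would look roughly like this (a sketch showing only
the lines quoted above; the rest of the Kconfig hunk stays as posted):

	config NVME_FC
		tristate "NVM Express over Fabrics FC host driver"
		select NVME_CORE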
On Fri, 2016-07-22 at 17:23 -0700, James Smart wrote:
A couple of minor comments:
> Add nvme-fabrics host FC transport support
>
> Implements the FC-NVME T11 definition of how nvme fabric capsules are
> performed on an FC fabric. Utilizes a lower-layer API to FC host
> adapters
> to send/receive FC-4 LS operations and FCP operations that comprise NVME
> over FC operation.
Add nvme-fabrics host FC transport support
Implements the FC-NVME T11 definition of how nvme fabric capsules are
performed on an FC fabric. Utilizes a lower-layer API to FC host adapters
to send/receive FC-4 LS operations and FCP operations that comprise NVME
over FC operation.
The T11 definition [...]
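For orientation, a rough sketch of the LLDD side of that lower-layer API: an FC
HBA driver fills in a template of callbacks for FC-4 LS and FCP traffic and
registers its local port with the transport. The entry-point names used here
(nvme_fc_register_localport, struct nvme_fc_port_template, struct
nvme_fc_port_info) follow the nvme-fc LLDD API as it exists in the upstream
kernel; whether this RFC used exactly these names is not shown in the excerpt,
so treat the whole fragment, and especially the callback names and values, as
illustrative assumptions rather than part of the posted patch:

	/* Illustrative LLDD-side sketch; callback implementations and the
	 * declarations of pinfo, pdev, localport and ret are omitted.
	 */
	static struct nvme_fc_port_template my_lldd_nvme_template = {
		.localport_delete	= my_lldd_localport_delete,
		.remoteport_delete	= my_lldd_remoteport_delete,
		.ls_req			= my_lldd_ls_req,	/* send FC-4 LS requests */
		.fcp_io			= my_lldd_fcp_io,	/* issue NVME FCP operations */
		.ls_abort		= my_lldd_ls_abort,
		.fcp_abort		= my_lldd_fcp_abort,
		.max_hw_queues		= 8,			/* assumed HBA limit */
	};

	/* during HBA bringup, assuming pinfo carries the WWNN/WWPN and role */
	ret = nvme_fc_register_localport(&pinfo, &my_lldd_nvme_template,
					 &pdev->dev, &localport);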