On 4/20/20 1:13 PM, Jerin Jacob wrote:
> On Mon, Apr 20, 2020 at 1:29 PM Liang, Cunming <cunming.li...@intel.com> 
> wrote:
>>
>>
>>
>>> -----Original Message-----
>>> From: Jerin Jacob <jerinjac...@gmail.com>
>>> Sent: Friday, April 17, 2020 5:55 PM
>>> To: Fu, Patrick <patrick...@intel.com>
>>> Cc: Maxime Coquelin <maxime.coque...@redhat.com>; dev@dpdk.org; Ye,
>>> Xiaolong <xiaolong...@intel.com>; Hu, Jiayu <jiayu...@intel.com>; Wang,
>>> Zhihong <zhihong.w...@intel.com>; Liang, Cunming <cunming.li...@intel.com>
>>> Subject: Re: [dpdk-dev] [RFC] Accelerating Data Movement for DPDK vHost with
>>> DMA Engines
>>>
>>> On Fri, Apr 17, 2020 at 2:56 PM Fu, Patrick <patrick...@intel.com> wrote:
>>>>
>>>>
>> [...]
>>>>>>
>>>>>> I believe it doesn't conflict. The purpose of this RFC is to
>>>>>> create an async data path in vhost-user and provide a way for
>>>>>> applications to work with this new path. dmadev is another topic
>>>>>> which could be discussed separately. If we do have the dmadev
>>>>>> available in the future, this vhost async data path could
>>>>>> certainly be backed by the new DMA abstraction without major
>>>>>> interface change.
>>>>>
>>>>> Maybe one advantage of a dmadev class is that it would be easier
>>>>> and more transparent for the application to consume.
>>>>>
>>>>> The application would register some DMA devices, pass them to the
>>>>> Vhost library, and then rte_vhost_submit_enqueue_burst and
>>>>> rte_vhost_poll_enqueue_completed would call the dmadev callbacks directly.
>>>>>
>>>>> Do you think that could work?
>>>>>
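
To make that flow concrete, here is a rough sketch of what the
application side could look like. Only the submit/poll names come from
this discussion; the registration helper, the header it would live in
and the exact signatures are assumptions for illustration:

#include <rte_mbuf.h>
#include <rte_vhost.h>   /* assuming the new calls land here or nearby */

#define MAX_PKT_BURST 32

static void
enqueue_with_dma(int vid, uint16_t qid, struct rte_mbuf **pkts,
                 uint16_t count)
{
        struct rte_mbuf *done[MAX_PKT_BURST];
        uint16_t n_done, i;

        /* one-time setup, done elsewhere: bind a DMA device to this
         * virtqueue, e.g. a hypothetical
         * rte_vhost_async_dma_register(vid, qid, dma_dev_id); */

        /* data path: vhost drives the DMA callbacks internally */
        rte_vhost_submit_enqueue_burst(vid, qid, pkts, count);

        /* reclaim packets whose DMA copies have completed */
        n_done = rte_vhost_poll_enqueue_completed(vid, qid, done,
                                                  MAX_PKT_BURST);
        for (i = 0; i < n_done; i++)
                rte_pktmbuf_free(done[i]);
}
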
>>>> Yes, this is a workable model. As I said in my previous reply, I have
>>>> no objection to making the dmadev. However, what we currently want to
>>>> do is create the async data path for vhost, and we actually have no
>>>> preference regarding the underlying DMA device model. I believe our
>>>> current design of the API prototypes and data structures is quite
>>>> common for various DMA acceleration solutions, and there is no blocker
>>>> for any new DMA device to adapt to these APIs or extend them with new
>>>> ones.
>>>
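
To give a feel for how device-neutral such a contract could be, here is
a sketch of a callback set along those lines; the struct and member
names are invented for the example and are not the RFC definitions:

#include <stdint.h>
#include <sys/uio.h>

/* Hypothetical scatter-gather copy job: one enqueued packet may span
 * several guest/host segments. */
struct async_copy_job {
        struct iovec *src;
        struct iovec *dst;
        uint16_t nr_segs;
};

/* Hypothetical provider interface: any DMA backend (IOAT, another
 * rawdev, a future dmadev, or plain CPU memcpy) could sit behind
 * these two hooks. */
struct async_dma_ops {
        /* enqueue up to 'count' jobs, return how many were accepted */
        uint32_t (*transfer_data)(int vid, uint16_t qid,
                                  struct async_copy_job *jobs,
                                  uint16_t count);
        /* return how many previously accepted jobs have completed */
        uint32_t (*check_completed_copies)(int vid, uint16_t qid,
                                           uint16_t max_jobs);
};
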
>>> IMO, as a driver writer, we should not be writing TWO DMA drivers: one
>>> for vhost and another one for rawdev.
>> The simplest case is a static 1:1 mapping of a driver resource (e.g.
>> {port, queue}) to a vhost session {vid, qid}. However, integrating the
>> device model into the vhost library does not scale well. A few
>> intentions belong to app logic rather than to the driver, e.g. 1:N load
>> balancing, various device-type usages (e.g. vhost zero-copy via
>> ethdev), etc.
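
As a toy illustration of keeping that logic in the application, here is
an invented adapter (all names are made up) that lets N virtqueues
share, or be balanced across, M DMA channels:

#include <stdint.h>

/* Application-owned mapping from a vhost session {vid, qid} to a DMA
 * resource. */
struct dma_channel {
        int dev_id;        /* rawdev/dmadev id owned by the app */
        uint16_t dma_qid;  /* queue on that device */
};

struct vq_dma_adapter {
        struct dma_channel channels[8];
        uint16_t nb_channels;
};

static inline struct dma_channel *
pick_channel(struct vq_dma_adapter *ad, int vid, uint16_t virtqueue_id)
{
        /* whatever policy the app wants: static 1:1, round-robin 1:N... */
        return &ad->channels[(vid + virtqueue_id) % ad->nb_channels];
}
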
> 
> 
> Before moving on to reply to the comments: which DMA engine are you
> planning to integrate with vhost?
> Is it ioat? If not ioat (drivers/raw/ioat/), how do you think we can
> integrate this IOAT DMA engine with vhost as a use case?
> 

I guess it could be done in the vhost example.
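
For instance, the example's transfer callback could feed copies into the
IOAT rawdev along these lines; the job structure is invented here, and
the rte_ioat_* calls and their exact signatures should be checked
against drivers/raw/ioat:

#include <rte_common.h>
#include <rte_ioat_rawdev.h>

/* Hypothetical copy job handed down by the vhost async layer. */
struct copy_job {
        rte_iova_t src_iova;
        rte_iova_t dst_iova;
        uint32_t len;
};

static uint32_t
ioat_transfer_data(int dev_id, struct copy_job *jobs, uint16_t count)
{
        uint16_t i;

        for (i = 0; i < count; i++) {
                /* returns 0 when the descriptor ring is full */
                if (rte_ioat_enqueue_copy(dev_id,
                                jobs[i].src_iova, jobs[i].dst_iova,
                                jobs[i].len,
                                (uintptr_t)&jobs[i], 0, 0) == 0)
                        break;
        }
        rte_ioat_do_copies(dev_id);     /* kick the hardware */
        return i;
}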


> 
>>
>> It was not asking for writing two drivers. Each driver remains a
>> provider for its own device class, which is independent. The app
>> provides the intention (an adapter) to associate the various device
>> capabilities with a vhost session.
>>
>>> If vhost is the first consumer that needs DMA, then I think it makes
>>> sense to add dmadev first.
>> On the other hand, it's risky to define 'dmadev' according to vhost's
>> flavor before becoming aware of any other candidates. Compared with the
>> kernel Async TX DMA API (async_memcpy), the RFC is very much focused on
>> S/G buffers rather than an async_memcpy.
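
Purely to illustrate that distinction, with invented names only: a flat
async_memcpy-style request next to the scatter-gather job a vhost
enqueue actually needs:

#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

/* Flat copy, the async_memcpy flavour: one source, one destination. */
struct flat_copy_req {
        void *src;
        void *dst;
        size_t len;
};

/* S/G copy, the vhost flavour: one packet may scatter into several
 * descriptor segments on the guest side. */
struct sg_copy_req {
        struct iovec src[4];   /* e.g. virtio-net header + payload segs */
        struct iovec dst[4];
        uint16_t nr_src;
        uint16_t nr_dst;
};
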
>>
>>> The rawdev DMA driver to dmadev DMA driver conversion will be the
>>> driver owner's job.
>> That's true when it becomes necessary. Even in that case, it's better
>> for vhost to be independent of any device model; moreover, vhost usage
>> doesn't have broad enough coverage to justify a new device class.
>>
>>> I think it makes sense to define the dmadev API and then have virtio
>>> consume it, to avoid integration issues.
>> Vhost is a library, not an app. We'd better avoid introducing either
>> overkill integration logic or an extra device-model dependency.
>>
>> Thanks,
>> Steve
>>
>>>
>>>
>>>
>>>>
>>>> Thanks,
>>>>
>>>> Patrick
>>>>
> 
